
Mozilla Nederland - The Dutch Mozilla community

Matt Thompson: What we’re working on this Heartbeat

Mozilla planet - Tue, 20/01/2015 - 17:11

Transparency. Agility. Radical participation. That’s how we want to work on Webmaker this year. We’ve got a long way to go, but we’re building concrete improvements and momentum — every two weeks.

We work mostly in two-week sprints or “Heartbeats.” Here are the priorities we’ve set together for the current Heartbeat, which ends January 30.

Questions? Want to get involved? Ask questions in any of the tickets linked below, say hello in #webmaker IRC, or get in touch with @OpenMatt.

What we’re working on now

See it all (always up to date): http://build.webmaker.org/now 

Or see the work broken down by:

Learning Networks
  • Design & test new teach.webmaker.org wireframes
  • Get the first Webmaker Club curriculum module ready for testing
  • Finalize our documentation for Badges / Credentialing
  • Document our Q1 / Q2 plan for Training

Learning Products

Desktop / Tablet:

  • Improve user on-boarding (Phase II)
  • Improve our email communications after users sign up
  • Create better moderation functionality for webmaker.org/explore (formerly known as “the gallery”)
  • Build a unified tool prototype (Phase II)

Mobile

  • Draft demo script and plan our marketing activities for Mobile World Congress
  • Make localization improvements to the Webmaker App
  • Build and ship device integrations and a screenshot service for Webmaker App
  • Distribute the first draft of our Kenya Field Report

Engagement
  • Prep and execute Data Privacy Day campaign (Jan 28)
  • Prep for Net Neutrality Campaign (Feb 5)
  • Draft a branding plan for Learning Products and Learning Networks
  • Design a splash page for Mobile World Congress

Planning & Process
  • Design and execute a communications plan on our overall 2015 plan
  • Document all our Q1 goals and KPIs in one spot
  • Add those quarterly goals to our dashboard
  • Ship updated documentation to build.webmaker.org (including: “How we do Heartbeats” & “How to use GitHub Issues”)

Air Mozilla: Martes mozilleros

Mozilla planet - Tue, 20/01/2015 - 17:00

Martes mozilleros: a biweekly meeting to talk about the state of Mozilla, the community, and its projects.


Raniere Silva: MathML January Meeting

Mozilla planet - Tue, 20/01/2015 - 03:00
MathML January Meeting

This is a report about the Mozilla MathML January IRC Meeting (see the announcement here). The topics of the meeting can be found in this PAD (local copy of the PAD) and the IRC log (local copy of the IRC log) is also available.

The next meeting will be on March 11th at 8pm UTC (check the time at your location here). Please add topics to the PAD.

Note

Our February meeting was cancelled. =(

Read more...


Cameron Kaiser: Upgrading the unupgradeable: video card options for the Quad G5

Mozilla planet - Mon, 19/01/2015 - 22:01
Now that the 2015 honeymoon and hangovers are over, it's back to business, including the annual retro-room photo spread (check out the new pictures of the iMac G3, the TAM and the PDP-11/44). And, as previously mentioned in my ripping yarn about long-life computing -- by the way, this winter the Quad G5's cores got all the way down to 30 C on the new CPU assembly, which is positively arctic -- 2015 is my year for a hard disk swap. I was toying with getting an apparently Power Mac-compatible Seagate hybrid SSHD that Martin Kukač was purchasing (perhaps he'll give his capsule review in the comments or on his blog?), but I couldn't find out if it failed gracefully to the HD when the flash eventually dies, and since I do large amounts of disk writes for video and development I decided to stick with a spinning disk. The Quad now has two 64MB-buffer 7200rpm SATA II Western Digital drives and the old ones went into storage as desperation backups; while 10K or 15Krpm was a brief consideration, their additional heat may be problematic for the Quad (especially with summers around here) and I think I'll go with what I know works. Since I'm down to only one swap left I think I might stretch the swap interval out to six years, and that will get me through 2027.

At the same time I was thinking of what more I could do to pump the Quad up. Obviously the CPU is a dead-end, and I already have 8GB of RAM in it, which Tiger right now indicates I am only using 1.5GB of (with TenFourFox, Photoshop, Terminal, Texapp, BBEdit and a music player open) -- I'd have to replace all the 1GB sticks with 2GB sticks to max it out, and I'd probably see little if any benefit except maybe as file cache. So I left the memory alone; maybe I'll do it for giggles if G5 RAM gets really cheap.

However, I'd consolidated the USB and FireWire PCIe cards into a Sonnet combo card, so that freed up a slot and meant I could think about the video card. When I bought my Quad G5 new I dithered over the options: the 6600LE, 7800GT and 2-slot Quadro FX 4500, all NVIDIA. I prefer(red) ATI/AMD in general because of their long previous solid support for the classic Mac OS, but Apple only offered NVIDIA cards as BTO options at the time. The 6600LE's relatively anaemic throughput wasn't ever in the running, and the Quadro was incredibly expensive (like, 4x the cost!) for a marginal increase in performance in typical workloads, so I bought the 7800GT. Overall, it's been a good card; other than the fan failing on me once, it's been solid, and prices on G5-compatible 7800GTs are now dropping through the floor, making it a reasonably inexpensive upgrade for people still stuck on a 6600. (Another consideration is the aftermarket ATI X1900 GT, which is nearly as fast as the 7800GT.)

However, that also means that prices on other G5-compatible video cards are also dropping through the floor. Above the 7800GT are two options: the Quadro FX 4500, and various third-party hacked video cards, most notably the 2-slot 7800GTX. The GTX is flashed with a hacked Mac 7800GT ROM but keeps the core and memory clocks at the same high speed, yielding a chimera card that's anywhere between 15-30% faster than the Quadro. I bought one of these about a year and a half ago as a test, and while it was noticeably faster in certain tasks and mostly compatible, it had some severe glitchiness with older games and that was unacceptable to me (for example, No One Lives Forever had lots of flashing polygons and bad distortion). I also didn't like that it didn't come with a support extension to safely anchor it in the G5's card guide, leaving it to dangerously flex out of the card slot, so I pulled it and it's sitting in my junk box while I figure out what to do with it. Note that it uses a different power adapter cable than the 7800 or Quadro, so you'll need to make sure it's included if you want to try this card out, and if you dislike the lack of a card guide extension as much as I do you'll need a sacrificial card to steal one from.

Since then Quadro prices plummeted as well, so I picked up a working-pull used Apple OEM FX 4500 on eBay for about $130. The Quadro has 512MB of GDDR3 VRAM (same as the 7800GTX and double the 7800GT), two dual-link DVI ports and a faster core clock; although it also supports 3D glasses, something I found fascinating, it doesn't seem to work with LCD panels, so I can't evaluate that. Many things are not faster, but some things are: 1080p video playback is now much smoother because the Quadro can push more pixels, and high end games now run more reliably at higher resolutions as you would expect, without the glitchiness I got in older titles with the 7800GTX. Indeed, returning to the BareFacts graph, the marginal performance improvement and the additional hardware rendering support are now, at least for me, worth $130 (I just picked up a spare for $80); it's a fully kitted and certified OEM card (no hacks!), and it uses the same power adapter cable as the 7800GT. One other side benefit is that, counterintuitively, the GPU is several degrees cooler (despite being bigger and beefier) and the fan is nearly inaudible, no doubt due to that huge honking heatsink.

It's not a big bump, but it's a step up, and I'm happy. I guess all that leaves is the RAM ...

In TenFourFox news, I'm done writing IonPower (phase 1). Phase 2 is compilation. That'll be some drudgery, but I think we're on target for release with 38ESR.


Gervase Markham: Credit as Currency

Mozilla planet - Mon, 19/01/2015 - 19:21

Credit is the primary currency of the free software world. Whatever people may say about their motivations for participating in a project, I don’t know any developers who would be happy doing all their work anonymously, or under someone else’s name. There are tangible reasons for this: one’s reputation in a project roughly governs how much influence one has, and participation in an open source project can also indirectly have monetary value, because some employers now look for it on resumés. There are also intangible reasons, perhaps even more powerful: people simply want to be appreciated, and instinctively look for signs that their work was recognized by others. The promise of credit is therefore one of the best motivators the project has. When small contributions are acknowledged, people come back to do more.

— Karl Fogel, Producing Open Source Software


Christian Heilmann: Be my eyes, my brain, my second pair of eyes…

Mozilla planet - Mon, 19/01/2015 - 17:05

(cross-published on Medium, in case you want to comment on paragraphs).

In the last few days, the “Be My Eyes” App made quite a splash. And with good reason, as it is a wonderful idea.

Be my eyes

The app plans to connect non-sighted people with sighted ones when they are stuck with a certain task. You ask for a pair of eyes, you connect over a smartphone, show the problem you have on video, and a volunteer human helps you out with a video call. Literally, you offer to be the eyes for another person.

This is not that new; there were services that allowed for annotation of inaccessible web content before (WebVisum, IBM’s (now defunct) social accessibility project). But Be My Eyes is very pretty and makes it much easier to take part and help people.

Only for the richer eyes…

Right now the app is iOS only, which is annoying. Whilst the accessibility features of iOS used to be exceptional, they seem to be losing quality with iOS 8. Of course, the other issue is the price. Shiny Apple things are expensive; Android devices and computers with built-in cameras less so. The source code of Be My Eyes is on GitHub, which is a great start. We might be able to see versions of it on Android, and WebRTC-driven versions for the web and mobile, soon.

Concerns mentioned

As with any product of this ilk, concerns and criticism happen quickly:

  • This may portray people with disabilities as people who are dependent on others. In essence, all you need to do is remove barriers. I know many very independent blind people, and it is depressing how many prejudices persist that people with disabilities need our help for everything. They don’t. What they need is fewer people who make assumptions about abilities when building products.
  • There is a quality concern here. We assume that people signing up want to help and have good intentions. However, nothing stops trolls from using this either, deliberately giving people wrong advice. There are people who post seizure-inducing GIFs on epilepsy forums, for example. For a sociopath who wants to hurt people this could be “fun” to abuse. Personally, I want to believe that people are better than that, but only one incident where a blind user gets harmed “for the lulz” might be enough to discredit the whole product.

Extending the scope of this app

I don’t see why this app could not become more than it is now. We could all use a second pair of eyes from time to time. For example:

  • to help with some translation,
  • to recognise what breed a certain puppy is,
  • to help us find inspiration for a painting,
  • to learn how to fix a certain appliance in the kitchen without destroying it,
  • to have some locals show us which roads are easier to walk,
  • to have an expert eye tell us if our makeup looks good and what could be done,
  • to get fashion advice on what we could mix and match in our closet to look great.

Some of those have great potential for monetisation; others were done before and died quickly (the local experts one was a product I was involved in at Yahoo called Yocal, which never saw the light of day and could have been foursquare years before foursquare).

Again, this would be nothing new: expert peer-to-peer systems have come and gone before. When I worked on Yahoo Answers, there were discussions about allowing video upload for questions and answers. A prospect that scared the hell out of me, seeing that “is my penis big enough” was one of the most asked questions in the Yahoo Answers Men’s Health section (and any other, to be fair).

The defunct Google Answers had the idea to pay experts to answer your questions quickly and efficiently. Newer services like LiveNinja and AirPair do this with video chats (and Google, of course may want Hangouts to be a player in that space).

The issue that all of these services face is quality control and safety. Sooner or later, each of the original attempts at this failed because of these. Skype services to pay for audio or video advice very quickly became camsex hangouts or phonesex alternatives. This even happens in the offline world – my sister used to run a call centre, and they found out that one of their employees was offering phonesex services to eligible men on the line. Yikes.

Another issue is retainability and re-use. It is not fun to try to find a certain part of a video without a timed transcript. This can be automated to a degree – YouTube’s subtitling is a good start – but that brings up the question of who else gets to read the private coaching session you had.

Can this be the start or will hype kill it again?

If anything, the user interface and interaction pattern of Be My Eyes are excellent, and the availability of video phones and chat technologies like WebRTC makes it possible to have more of these services soon.

In the coding world, real live interaction is simple these days. JSFiddle’s collaboration button allows you to code together, JSBin allows people to watch you while you code and Mozilla’s together.js allows you to turn any web page into a live audio and video chat with multiple cursors.

We use Google Docs collaboratively, we probably have some live chat going with our colleagues. The technology is there. Firefox now has a built-in peer to peer chat system called Hello. Wouldn’t it be cool to have an API for that to embed it in your products?

The thing that might kill those is hype and inflated demands. Yahoo Answers was an excellent idea to let the human voice and communication patterns prevail over algorithmic results. It failed when all that it was measured against was the number of users and interactions in the database. This is when quality was thrown out the window and “How is babby formed” got through without a blip on the QA radar.

Let’s hope that Be My Eyes survives the first spike of attention and gets some support from people who are OK with a small number of users who genuinely want to help each other. I’d like to see that.


Andrea Marchesini: RequestSync API

Mozilla planet - Mon, 19/01/2015 - 17:01

Last week a new API just for B2G certified apps landed in mozilla-central: the RequestSync API. The basic purpose of this API is to allow apps to schedule tasks, while also letting the user decide when these tasks have to be executed.

Consider the following example: your mail app wants to synchronize your mailbox regularly.

Here is what it does:

navigator.sync.register('mail-synchronizer', {
  minInterval: 120 /* 2 minutes */,
  oneShot: false,
  data: { accountID: 123 },
  wifiOnly: true,
  wakeUpPage: location.href
}).then(
  function() {
    console.log("mail-synchronizer task has been registered");
  },
  function() {
    console.log("Something bad happened.");
  }
);

Through this process, the app has registered a new task, called ‘mail-synchronizer’. It will be scheduled every 2 minutes if the device is connected to wifi. As you can see, the second parameter of the register method is a configuration object. Here are some more details:

  • minInterval is the number of seconds between one execution of the task and the next. This is not entirely precise; it’s an indication for the RequestSyncService. It can happen that the device is busy doing something, and the task may be postponed for a while.

  • oneShot: boolean. If we want just one execution of the task, we should set it to true.

  • data: this can be anything. It’s something useful for the app.

  • wifiOnly: by default it is true; it informs the RequestSyncService that this task must be executed only when we are on a wifi connection.

  • wakeUpPage: this is the page that will be activated when the task is about to be executed.

The register() method returns a Promise object, and if it is called more than once, the app overwrites the previous task configuration.

In the navigator.sync object there are other methods: unregister(), registrations() and registration(), but we can skip them for now.
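Just to give a flavour anyway, here is a sketch of unregister(), under the assumption that it mirrors register() by taking the task name and returning a Promise:

navigator.sync.unregister('mail-synchronizer').then(
  function() {
    console.log("mail-synchronizer task has been unregistered");
  },
  function() {
    console.log("Something bad happened.");
  }
);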

When the task is executed, the wakeUpPage is activated and it receives a message in this way:

navigator.mozSetMessageHandler('request-sync', function(e) { ... });

The message contains these attributes:

  • task - the task name. In our example this will be ‘mail-synchronizer’.

  • data - what we sent in the registration: { accountID: 123 }.

  • lastSync - This is a DOMTimeStamp containing the date/time of the last execution of this task.

  • minInterval, oneShot, wifiOnly, wakeUpPage - these attributes will be the same as what we set during the registration.

Now, back to our example: the synchronization of the mailbox may take a while. In order to help the RequestSyncService schedule tasks correctly, and to keep the device alive for the whole operation (we internally use a CPU wakelock), the mail app may do something like this:

navigator.mozSetMessageHandler('request-sync', function(e) {
  // The synchronization of the mailbox will take a while.
  // Let's set a promise object and resolve/reject it when needed.
  navigator.mozSetMessageHandlerPromise(new Promise(function(resolve, reject) {
    do_the_magic_synchronization(resolve, reject);
  }));
});

By setting a message handler promise, we know that the device will be kept alive until the promise is resolved or rejected. Furthermore, no other tasks will be executed in the meantime (this is not entirely correct: if the promise takes more than a few minutes to be resolved or rejected, the RequestSyncService will continue scheduling other tasks).

As far as the settings app is concerned, the RequestSync API also has a task manager, available only to certified apps with a particular permission; in reality, only the settings app will have this permission.

Using the requestSync API, the settings app is able to do:

navigator.syncManager.registrations().then(
  function(results) { ... }
);

In this piece of code, results is an array of RequestSyncTask objects. Each object is an active task with attributes such as lastSync, wakeUpPage, oneShot, minInterval, wifiOnly, data, and task (the name). From this object the settings app can change the policy for the task:

for (var i = 0; i < results.length; ++i) {
  if (results[i].task == 'mail-synchronizer' &&
      results[i].app.manifestURL == 'http://the/mail/app/manifest.url') {
    results[i].setPolicy('disabled');
  }
}

setPolicy() receives two parameters; the second one is optional:

  • state, this is the new state of the task. It can be enabled, disabled or wifiOnly.

  • overwrittenMinInterval: a new minInterval value.
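For example, based on the parameters just described, re-enabling our task while overriding its interval could look like this sketch:

for (var i = 0; i < results.length; ++i) {
  if (results[i].task == 'mail-synchronizer') {
    // state 'enabled', overwrittenMinInterval of 300 seconds (5 minutes)
    results[i].setPolicy('enabled', 300);
  }
}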

This is all. Any feedback is welcome!


Axel Hecht: On the process to code

Mozilla planet - Mon, 19/01/2015 - 17:00

gps blogs a bunch about his vision of how coding on Firefox should evolve. Sadly, there’s a bunch of process debt in commenting, so I’ll put my comments in a medium where I already have an account.

A lot of his thinking is based on Code First. There’s an earlier post with that title, but with less content in it, IMHO.

In summary, code first capitalizes on the fact that any change to software involves code and therefore puts code front and center in the change process.

I strongly disagree.

Code is really just one artifact of many of creating and maintaining a software product.

Or, as Shane blogged,

First rate hackers need process too

Bugzilla is much more than a review tool, it’s a store of good and bad ideas. It’s where we document which of our ideas are bad, or just not good enough. It’s where we build out our shared understanding of where we are, and where we want to be.

I understand how gps might come to his conclusions. Mostly because I find myself in a bucket that might be like his. Where your process is really just documenting that you agree with yourself, and that you’re going forward with it. And then that you did. Yeah, that’s tedious and boring. Which is why I took my hat as module owner more often lately, and landed changes without a bug, and with rs=foopy. Old guard.

Without applying my basic disagreement to several paragraphs over several posts, some random notes on a few of gps’ thinking and posts:

I agree that we need to do something about reviews. Splinter’s better than plain text, but not quite there. GitHub PRs feel pretty horrible, at least in the way we use them for Gaia. As soon as things get slightly interesting, the status in neither the PR nor Bugzilla makes any sense anymore. I’m curious about MozReview/Review Board, though the process to get things hooked up there is steep. I’ll give it a few more tries for the few things I do on Firefox, and for improvements to code and docs. The one time I used it, I felt very new.

Much of the list gps has on process debt covers things he cares about, and very few other people do. In particular, newcomers won’t even know to bother, unless they happen to come across that post by accident. The other questions are not process debt, but good things to learn, and to learn early.


Gervase Markham: The Zeroth Human Freedom

Mozilla planet - Mon, 19/01/2015 - 16:10

We who lived in concentration camps can remember those who walked through the huts comforting others, giving away their last piece of bread. They may have been few in number, but they offer sufficient proof that everything can be taken from a person but the last of the human freedoms – to choose one’s attitude to any set of circumstances – to choose our own way.

This quote is from From Death-Camp to Existentialism (a.k.a. Man’s Search for Meaning) by Viktor Frankl. Frankl was an Austrian Jew who spent four years in concentration camps, and afterwards wrote a book about his experiences which has sold over 10 million copies. This quote was part of a sermon yesterday (on contentment) but I share it here because it’s very powerful, and I think it’s also very relevant to how communities live together – with Mozilla being a case in point.

Choosing one’s attitude to a set of circumstances – of which “someone has written something I disagree with and I have become aware of it” is but a small example – is an ability we all have. If someone even in the unimaginable horror of a concentration camp can still retain it, we should all be able to exercise it too. We can choose to react with equanimity… or not. We can choose to be offended and outraged and angry… or not. To say that we cannot do this is to say that we have lost the most basic of human freedoms. No. We are all more than the sum of our circumstances.


Adam Lofting: The week ahead: 19 Jan 2015

Mozilla planet - Mon, 19/01/2015 - 12:06

January

If all goes to plan, I will:

  • Write a daily working process
  • Use a public todo list, and make it work
  • Catch up on more email from time off
  • Ship V1 of Webmaker Metrics retention dashboard
  • Work out a plan for aligning metrics work with dev team heartbeats
  • Don’t let the immediate todo list get in the way of planning long term processes
  • Invest time in working open
  • Wrestle with multiple todo list systems until they (or I) work together nicely
  • Survive a 5 day week (it’s been a while)
  • Write up final testing blog posts from EOY before those tests are forgotten
  • Book data practices kick-off meetings with all teams

To try and solve some of the process challenges, I’ve gone back to a tool I built a couple of years ago (Done by When) and I’m breaking it a little bit to make it useful to me again. This might end up being an evening time project to learn about some of the new tech the Webmaker team are using this year (particularly rewriting the front end with React). I find it useful to have a side-project to use as a playground for learning new things.

Anyway, have a great week. I’ll try and write up some more notes at the end.


Mozilla Firefox 36.0 beta - Tweakers

News collected via Google - Mon, 19/01/2015 - 09:44

Tweakers

As we have come to expect from Mozilla, shortly after the release of a stable version of the Firefox web browser, the first beta release of the next version has already appeared. In version 36, which, if all goes well, ...


Mozilla Brings Video Chat Service to Firefox - Info Komputer

News collected via Google - Mon, 19/01/2015 - 03:14

Info Komputer

Mozilla, in cooperation with Telefonica (a telecommunications operator from Spain), is introducing a new feature integrated into the Firefox browser, named Firefox Hello. Firefox Hello is an additional service in Firefox that enables ...
Mozilla Firefox Gets a Video Chat Application - Wow Keren
Video Chat from the Browser Using Firefox Hello - PCplus

all 3 news articles » Google News

Mark Côté: BMO 2014 Statistics

Mozilla planet - Mon, 19/01/2015 - 02:09

Everyone loves statistics! Right? Right? Hello?

tap tap

feedback screech

Well anyway, here are some numbers from BMO in 2014:

BMO Usage:

33 243 new users registered
45 628 users logged in
23 063 users performed an action
160 586 new bugs filed
138 127 bugs resolved
100 194 patches attached

BMO Development:

1 325 bugs filed
1 214 bugs resolved

Conclusion: there are a lot of dedicated Mozillians out there!


Chris Ilias: My Installed Add-ons – Clippings

Mozilla planet - Sun, 18/01/2015 - 22:06

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’m doing a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

The first one is Context Search, which I’ve already blogged about.

The second is Clippings. Clippings allows you to keep pieces of text to paste on demand. If you frequently answer email messages with one of a set of replies, you can paste the reply you want to use via the context menu. In my case, I take part in support forums, which means I have to respond to frequently asked questions, typing the same answers over and over. Clippings allows me to have canned responses, so I can answer more support questions in less time.

To save a piece of text as a clipping, select it, then right-click and go to “Clippings”, then “New from Selection”. You’ll then be asked to name the clipping and choose where to save it among your list of clippings. It supports folders too.

When you want to use that clipping, just right-click on the text area, then go to “Clippings” and select the clipping you want to paste.

Clippings is also very useful in Mozilla Thunderbird.

You can install it via the Mozilla Add-ons site.


Tess John: Data Migration in Django

Mozilla planet - Sun, 18/01/2015 - 19:24

Changing the database schema is one side of the equation, but often a migration involves changing data as well.

Consider this case

Bug 1096431 - Able to create tasks with duplicate names. Task names should be unique.

from django.db import models

class Task(models.Model):
    name = models.CharField(max_length=255, verbose_name='title')
    start_date = models.DateTimeField(blank=True, null=True)
Solution

A schema migration, of course, i.e. add unique=True to the field. But if you apply this migration to production, it will cause an IntegrityError, because you are making a database column unique while its contents are not unique. To solve this, you first need to find all tasks with duplicate names and rename them to something unique. That part is a data migration instead of a schema migration. Thanks to Giorgos Logiotatidis for the guidance.

Step 1: Create a few tasks with the same name, say ‘task’.

Step 2: This is the data migration part:

python manage.py datamigration Tasks _make_taskname_unique.py

Step 3: When you open the newly created file, you can see the skeleton of the forwards and backwards functions. I wrote code in forwards to rename the duplicate task names (see the sketch after these steps).

Step 4: Now add the unique keyword to the field and apply a schema migration.

Step 5: Finally, migrate the tasks model. Now you can see that the duplicate tasks get renamed to ‘task 2’, ‘task 3’, etc.
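For illustration, here is a minimal sketch of what the forwards function from Step 3 could look like, assuming a South data migration skeleton and an app label of tasks (the names here are hypothetical):

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Walk the tasks in a stable order and rename duplicate names to
        # "name 2", "name 3", etc., so the unique constraint can be applied.
        seen = set()
        for task in orm['tasks.Task'].objects.order_by('pk'):
            new_name = task.name
            counter = 2
            while new_name in seen:
                new_name = '%s %d' % (task.name, counter)
                counter += 1
            if new_name != task.name:
                task.name = new_name
                task.save()
            seen.add(new_name)

    def backwards(self, orm):
        # The original duplicate names cannot be recovered, so there is
        # nothing sensible to undo here.
        pass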



K Lars Lohn: The Smoothest Migration

Mozilla planet - Sun, 18/01/2015 - 14:09
I must say that it was the smoothest migration that I have ever witnessed. The Socorro system data has left our data center and taken up residence at Amazon.

Since 2010, HBase has been our primary storage for Firefox crash data.  Spread across something like 70 machines, we maintained a constant cache of at least six months of crash data.  It was never a pain-free system.  Thrift, the system through which Socorro communicated with HBase, seemed to develop a dislike for us from the beginning.  We fought it and it fought back.

Through the adversity that embodied our relationship with Thrift/HBase, Socorro evolved fault tolerance and self healing.  All connections to external resources in Socorro are wrapped with our TransactionExecutor.  It's a class that recognizes certain types of failures and executes a backing off retry when a connection fails.  It's quite generic, as it wraps our connections to HBase, PostgreSQL, RabbitMQ, ElasticSearch and now AmazonEC2.  It ensures that if an external resource fails with a temporary problem, Socorro doesn't fail, too.
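To illustrate the pattern (a simplified sketch, not Socorro's actual TransactionExecutor; the class and parameter names here are made up):

import time

class BackingOffRetry(object):
    """Run an operation against an external resource, retrying with
    growing delays when a recognized transient failure occurs."""

    def __init__(self, retryable_exceptions, backoff_delays=(10, 30, 60, 120)):
        self.retryable_exceptions = retryable_exceptions
        self.backoff_delays = backoff_delays

    def __call__(self, operation, *args, **kwargs):
        for delay in self.backoff_delays:
            try:
                return operation(*args, **kwargs)
            except self.retryable_exceptions:
                # a recognized transient failure: back off, then retry
                time.sleep(delay)
        # final attempt; a failure here propagates to the caller
        return operation(*args, **kwargs)

Every call to HBase, PostgreSQL, RabbitMQ and the like then goes through such a wrapper, so a transient outage stalls the system rather than crashing it.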

Periodically, HBase would become unavailable. The Socorro system, detecting the problem, would back down, biding its time while waiting for the failed resource to recover.  Eventually, after probing the failed resource, Socorro would detect recovery and pick up where it left off.

Over the years, we realized that one of the major features that originally attracted us to HBase was not giving us the payoff that we had hoped.  We just weren't using the MapReduce capabilities and found the HBase maintenance costs were not worth the expense.

Thus came the decision that we were to migrate away.  Initially, we considered moving to Ceph and began a Ceph implementation of what we call our CrashStorage API.

Every external resource in Socorro lives encapsulated in a class that implements the Crash Storage API.  Using the Python package Configman, crash storage classes can be loaded at run time, giving us a plugin interface.  Ceph turned out to be a bust when the winds of change directed us to move to AmazonS3. Because we implemented the CrashStorage API using the Boto library, we were able to reuse the code.
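The general idea of that run-time loading is simple (a conceptual sketch, not Configman's actual API; the dotted class path below is hypothetical):

import importlib

def class_from_string(dotted_name):
    # Resolve a class named in configuration (e.g. in an INI file)
    # into the actual Python class at run time.
    module_name, class_name = dotted_name.rsplit('.', 1)
    return getattr(importlib.import_module(module_name), class_name)

# Swapping storage backends then becomes a configuration change, e.g.:
#   CrashStorage = class_from_string(
#       'socorro.external.boto.crashstorage.BotoS3CrashStorage')
#   store = CrashStorage(config)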

Then began the migration.  Rather than just flipping a switch, our migration was gradual.  We started 2014 with HBase as primary storage:


Then, in December, we started running HBase and AmazonS3 together.  We added the new AmazonS3 CrashStorage classes to the Configman-managed Socorro INI files.  While we likely restarted the Socorro services, we could have just sent SIGHUP, prompting them to reread their config files, load the new Crash Storage modules and continue running as if nothing had happened.



After most of a month, and after completing a migration of old data from HBase to Amazon, we were ready to cut HBase loose.

I was amused by the non-event of the severing of Thrift from Socorro.  Again, it was a matter of editing HBase out of the configuration and sending a SIGHUP, causing HBase to fall silent. Socorro didn't care.  Announced several hours later on the Socorro mailing list, it seemed more like a footnote than an announcement: "oh, by the way, HBase is gone".



Oh, the migration wasn't completely perfect; there were some glitches.  Most of those were from minor cron jobs that were used for special purposes and were inadvertently neglected.

The primary datastore migration is not the end of the road.  We still have to move the server processes themselves to Amazon.  Because everything is captured in the Socorro configuration, however, we do not anticipate that this will be an onerous process.

I am quite proud of the success of Socorro's modular design.  I think we programmers only ever really just shuffle complexity around from one place to another.  In my design of Socorro's crash storage system, I have swung a pendulum far to one side, moving the complexity into the configuration.  That has disadvantages.  However, in a system that has to rapidly evolve to changing demands and changing environments, we've just demonstrated a spectacular success.

Credit where credit is due: Rob Helmer spearheaded this migration as the DevOps lead. He pressed the buttons and reworked the configuration files.  Credit also goes to Selena Deckelmann, who led the way to Boto for Ceph, which gave us Boto for Amazon.  Her contribution in writing the Boto CrashStorage class was invaluable.  Me?  While I wrote most of the Boto CrashStorage class and I'm responsible for the overall design, I was able to mainly just be a witness to this migration.  Kind of like watching my children earn great success, I'm proud of the Socorro team and look forward to the next evolutionary steps for Socorro.

Henri Sivonen: If You Want Software Freedom on Phones, You Should Work on Firefox OS, Custom Hardware and Web App Self-Hostability

Mozilla planet - Sun, 18/01/2015 - 13:55
TL;DR

To achieve full-stack Software Freedom on mobile phones, I think it makes sense to

  • Focus on Firefox OS, which is already Free Software above the driver layer, instead of focusing on removing proprietary stuff from Android whose functionality is increasingly moving into proprietary components including Google Play Services.
  • Commission custom hardware whose components have been chosen such that the foremost goal is achieving Software Freedom on the driver layer.
  • Develop self-hostable Free Software Web apps for the on-phone software to connect to, and a system that makes installing them on a home server as easy as installing desktop or mobile apps, and connecting the home server to the Internet as easy as connecting a desktop.
Inspiration

Back in August, I listened to an episode of the Free as in Freedom oggcast that included a FOSDEM 2013 talk by Aaron Williamson titled “Why the free software phone doesn’t exist”. The talk actually didn’t include much discussion of the driver situation and instead devoted a lot of time to talking about services that phones connect to and the interaction of the DMCA with locked bootloaders.

Also, I stumbled upon the Indie Phone project. More on that later.

Software Above the Driver Layer: Firefox OS—Not Replicant

Looking at existing systems, it seems that software close to the hardware on mobile phones tends to be more proprietary than the rest of the operating system. Things like baseband software, GPU drivers, touch sensor drivers and drivers for hardware-accelerated video decoding (and video DRM) tend to be proprietary even when the Linux kernel is used and substantial parts of other system software are Free Software. Moreover, most of the mobile operating systems built on the Linux kernel are actually these days built on the Android flavor of the Linux kernel in order to be able to use drivers developed for Android. Therefore, the driver situation is the same for many of the different mobile operating systems. For these reasons, I think it makes sense to separate the discussion of Software Freedom on the driver layer (code closest to hardware) and the rest of the operating system.

Why Not Replicant?

For software above the driver layer, there seems to be something of a default assumption in the Free Software circles that Replicant is the answer for achieving Software Freedom on phones. This perception of mine probably comes from Replicant being the contender closest to the Free Software Foundation with the FSF having done fundraising and PR for Replicant.

I think betting on Replicant is not a good strategy for the Free Software community if the goal is to deliver Software Freedom on phones to many people (and, therefore, have more of a positive impact on society) instead of just making sure that a Free phone OS exists in a niche somewhere. (I acknowledge that hardline FSF types keep saying negative things about projects that e.g. choose permissive licenses in order to prioritize popularity over copyleft, but the “Free Software, Free Society” thing only works if many people actually run Free Software on the end-user devices, so in that sense, I think it makes sense to think of what has a chance to be run by many people instead of just the existence of a Free phone OS.)

Android is often called an Open Source system, but when someone buys a typical Android phone, they get a system with substantial proprietary parts. Initially, the main proprietary parts above the driver layer were the Google applications (Gmail, Maps, etc.), but the non-app, non-driver parts of the system were developed as Open Source / Free Software in the Android Open Source Project (AOSP). Over time, as Google has realized that OEMs don’t care to deliver updates for the base system, Google has moved more and more stuff to the proprietary Google application package. Some apps that were originally developed as part of AOSP no longer are. Also, Google has introduced Google Play Services, which is a set of proprietary APIs that keeps updating even when the base system doesn’t.

Replicant takes Android and omits the proprietary parts. This means that many of the applications that users expect to see on an Android phone aren’t actually part of Replicant. But more importantly, Replicant doesn’t provide the same APIs as a normal Android system does, because Google Play Services are missing. As more and more applications start relying on Google Play Services, Replicant and Android-as-usually-shipped diverge as development platforms. If Replicant was supposed to benefit from the network effects of being compatible with Android, these benefits will be realized less and less over time.

Also, Android isn’t developed in the open. The developers of Replicant don’t really get to contribute to the next version of AOSP. Instead, Google develops something and then periodically throws a bundle of code over the wall. Therefore, Replicant has the choice of either having no say over how the platform evolves or diverging even more from Android.

Instead of the evolution of the platform being controlled behind closed doors and the Free Software community having to work with a subset of the mass-market version of the platform, I think it would be healthier to focus efforts on a platform that doesn’t require removing or replacing (non-driver) system components as the first step and whose development happens in public repositories where the Free Software community can contribute to the evolution of the platform.

What Else Is There?

Let’s look at the options. What at least somewhat-Free mobile operating systems are there?

First, there’s software from the OpenMoko era. However, the systems have no appeal to people who don’t care that much about the Free Software aspect. I think it would be strategically wise for the Free Software community to work on a system that has appeal beyond the Free Software community in order to be able to benefit from contributions and network effects beyond the core Free Software community.

Open webOS is not on an upwards trajectory on phones (despite there having been a watch announcement at CES). Tizen (on phones) has been delayed again and again and became available just a few days ago, so it’s not (at least quite yet) a system with demonstrated appeal (on phones) beyond the Free Software community, and it seems that Tizen has substantial non-Free parts. Jolla’s Sailfish OS is actually shipping on a real phone, but Jolla keeps some components proprietary, so the platform fails the criterion of not having to remove or replace (non-driver) system components as the first step (see Nemo). I don’t actually know if Ubuntu Touch has proprietary non-driver system components. However, it does appear to have central components to which you cannot contribute on an “inbound=outbound” licensing basis, because you have to sign a CLA that gives Canonical rights to your code beyond the Free Software license of the project as a condition of your patch getting accepted. In any case, Ubuntu Touch is not shipping yet on real phones, so it is not yet demonstrably a system that has appeal beyond the Free Software community.

Firefox OS, in contrast, is already shipping on multiple real phones (albeit maybe not in your country) demonstrating appeal beyond the Free Software community. Also, Mozilla’s leverage is the control of the trademark—not keeping some key Mozilla-developed code proprietary. The (non-trademark) licensing of the project works on the “inbound=outbound” basis. And, importantly, the development repositories are visible and open to contribution in real time as opposed to code getting thrown over the wall from time to time. Sure, there is code landing such that the motivation of the changes is confidential or obscured with codenames, but if you want to contribute based on your motivations, you can work on the same repositories that the developers who see the confidential requirements work on.

As far as I can tell, Firefox OS has the best combination of not being vaporware, having appeal beyond the Free Software community and being run closest to the manner a Free Software project is supposed to be run. So if you want to advance Software Freedom on mobile phones, I think it makes the most sense to put your effort into Firefox OS.

Software Freedom on the Driver Layer: Custom Hardware Needed

Replicant, Firefox OS, Ubuntu Touch, Sailfish OS and Open webOS all use an Android-flavored Linux kernel in order to be able to benefit from the driver availability for Android. Therefore, the considerations for achieving Software Freedom on the driver layer apply equally to all these systems. The foremost problems are controlling the various radios—the GSM/UMTS radio in particular—and the GPU.

If you consider the Firefox OS reference device for 2014 and 2015, Flame, you’ll notice that Mozilla doesn’t have the freedom to deliver updates to all software on the device. Firefox OS is split into three layers: Gonk, Gecko and Gaia. Gonk contains the kernel, drivers and low-level helper processes. Gecko is the browser engine and runs on top of Gonk. Gaia is the system UI and set of base apps running on top of Gecko. You can get Gecko and Gaia builds from Mozilla, but you have to get Gonk builds from the device vendor.

If Software Freedom extended to the whole stack—including drivers—Mozilla (or anyone else) could give you Gonk builds, too. That is, to get full-stack Software Freedom with Firefox OS, the challenge is to come up with hardware whose driver situation allows for a Free-as-in-Freedom Gonk.

As noted, Flame is not that kind of hardware. When this is lamented, it is typically pointed out that “not even the mighty Google” can get the vendors of all the hardware components going into the Nexus devices to provide Free Software drivers and, therefore, a Free Gonk is unrealistic at this point in time.

That observation is correct, but I think it lacks some subtlety. Both Flame and the Nexus devices are reference devices on which the software platform is developed with the assumption that the software platform will then be shipped on other devices that are sufficiently similar that the reference devices can indeed serve as reference. This means that the hardware on the reference devices needs to be reasonably close to the kind of hardware that is going to be available with mass-market price/performance/battery life/weight/size characteristics. Similarity to mass-market hardware trumps Free Software driver availability for these reference devices. (Disclaimer: I don’t participate in the specification of these reference devices, so this paragraph is my educated guess about what’s going on—not any sort of inside knowledge.)

I theorize that building a phone that puts the availability of Free Software drivers first is not impossible but would involve sacrificing on the current mass-market price/performance/battery life/weight/size characteristics and be different enough from the dominant mass-market designs not to make sense as a reference device. Let’s consider how one might go about designing such a phone.

In the radio case, there is proprietary software running on a baseband processor to control the GSM/UMTS radio, and some regulatory authorities, such as the FCC, require this software to be certified for regulatory purposes. As a result, the chances of gaining Software Freedom relative to this radio control software in the near term seem slim. From the privacy perspective, it is problematic that this mystery software can have DMA access to the memory of the application processor, i.e. the processor that runs the Linux kernel and the apps.

Technically, data transfer between the application processor and various radios does not need to be fast enough to require DMA access or other low-level coupling. Indeed, for desktop computers, you can get UMTS, Wi-Fi, Bluetooth and GPS radios as external USB devices. It should be possible to document the serial protocol these devices use over USB such that Free drivers can be written on the Linux side while the proprietary radio control software is embedded on the USB device.

This would solve the problem of kernel coupling with non-free drivers in a way that hinders the exercise of Software Freedom relative to the kernel. But wouldn’t the radio control software embedded on the USB device still be non-free? Well, yes it would, but in the current regulatory environment it’s unrealistic to fix that. Moreover, if the software on the USB devices is truly embedded to the point where no one can update it, the Free Software Foundation considers the bundle of hardware and un-updatable software running on the hardware as “hardware” as a whole for Software Freedom purposes. So even if you can’t get the freedom to modify the radio control software, if you make sure that no one can modify it and put it behind a well-defined serial interface, you can both solve the problem of non-free drivers holding back Software Freedom relative to the kernel and get the ideological blessing.

So I think the way to solve the radio side of the problem is to license circuit designs for UMTS, Wi-Fi, Bluetooth and GPS USB dongles and build those devices as hard-wired USB devices onto the main board of the phone inside the phone’s enclosure. (Building hard-wired USB devices into the device enclosure is a common practice in the case of laptops.) This would likely result in something more expensive, more battery draining, heavier and larger than the usual more integrated designs. How much more expensive, heavier, etc.? I don’t know. I hope within bounds that would be acceptable for people willing to pay some extra and accept some extra weight and somewhat worse battery life and performance in order to get Software Freedom.

As for the GPU, there are a couple of Free drivers: There’s Freedreno for Adreno GPUs. There is the Lima driver for Mali-200 and Mali-400, but a Replicant developer says it’s not good enough yet. Intel has Free drivers for their desktop GPUs and Intel is trying to compete in the mobile space so, who knows, maybe in the reasonably near future Intel manages to integrate GPU design of their own (with a Free driver) with one of their mobile CPUs.

The current Replicant way to address the GPU driver situation is not to have hardware-accelerated OpenGL ES. I think that’s just not going to be good enough. For Firefox OS (or Ubuntu Touch or Sailfish OS or a more recent version of Android) to work reasonably, you have to have hardware-accelerated OpenGL ES. So I think the hardware design of a Free Software phone needs to grow around a mobile GPU that has a Free driver. Maybe that means using a non-phone (to put radios behind USB) QUALCOMM SoC with Adreno. Maybe that means pushing Lima to good enough a state and then licensing Mali-200 or Mali-400. Maybe that means using x86 and waiting for Intel to come up with a mobile GPU. But it seems clear that the GPU is the big constraint and the CPU choice will have to follow from the GPU solution.

For the encumbered codecs that everyone unfortunately needs to have in practice, it would be best to have true hardware implementations that are so complete that the drivers wouldn’t contain parts of the codec but would just push bits to the hardware. This way, the encumbrance would be limited to the hardware. (Aside: Similarly, it would be possible to design a hardware CDM for EME. In that case, you could have DRM without it being a Software Freedom problem.)

So I think that in order to achieve Software Freedom on the driver layer, it is necessary to commission hardware that fits Free Software instead of trying to just write software that fits the hardware that’s out there. This is significantly different from how software freedom has been achieved on desktop. Also, the notion of making a big upfront capital investment in order to achieve Software Freedom is rather different from the notion that you only need capital for a PC and then skill and time.

I think it could be possible to raise the necessary capital through crowdfunding. (Purism is trying it with the Librem laptop, but, unfortunately, the rate of donations looks bad as of the start of January 2015.) I’m not going to try to organize anything like that myself; I’m just theorizing. However, it seems that developing a phone by crowdfunding in order to get characteristics that the market isn’t delivering is something that is being attempted. The Indie Phone project expresses intent to crowdfund the development of a phone designed to allow users to own their own data. Which brings us to the topic of the services that the phone connects to.

Freedom on the Service Side: Easy Self-Hostability Needed

Unfortunately, Indie Phone is not about building hardware to run Firefox OS. The project’s Web site talks about an Indie OS but intentionally tries to make the OS seem uninteresting and doesn’t explain what existing software the small team is intending to build upon. (It seems implausible that such a small team could develop an operating system from scratch.) Also, the hardware intentions are vague. The site doesn’t explain if the project is serious about isolating the baseband processor from the application processor out of privacy concerns, for example. But enough about the vagueness of what the project is going to do. Let’s look at the reasons the FAQ gave against Firefox OS (linking to version control, since the FAQ appears to have been removed from the site between the time I started writing this post and the time I got around to publishing):

“As an operating system that runs web applications but without any applications of its own, Firefox OS actually incentivises the use of closed silos like Google. If your platform can only run web apps and the best web apps in town are made by closed silos like Google, your users are going to end up using those apps and their data will end up in these closed silos.”

The FAQ then goes on to express angst about Mozilla’s relationship with Google (the Indie Phone FAQ was published before Mozilla’s search deal with Yahoo! was announced) and Telefónica and to talk about how Mozilla doesn’t control the hardware but Indie will.

I think there is truth to Web technology naturally having the effect of users gravitating towards whatever centralized service provides the best user experience. However, I think the answer is not to shun Firefox OS but to make de-centralized services easy to self-host and use with Firefox OS.

In particular, it doesn’t seem realistic that anyone would ship a smart phone without a Web browser. In that sense, any smartphone is susceptible to the lure of centralized Web-based services. On the other hand, Google Play and the iOS App Store contain plenty of applications whose user interface is not based on HTML, CSS and JavaScript but still those applications put the users’ data into centralized services. On the flip side, it’s not actually true that Firefox OS only runs Web apps hosted on a central server somewhere. Firefox OS allows you to use HTML, CSS and JavaScript to build apps that are distributed as a zip file and run entirely on the phone without a server component.
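As a concrete example, a packaged app is just a zip of HTML, CSS and JavaScript plus a manifest; a minimal sketch of such a manifest.webapp (all values made up) could be:

{
  "name": "Local Notes",
  "description": "A notes app that keeps all of its data on the phone",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  }
}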

But the thing is that, these days, people don’t want even notes or calendar entries that are intended for their own eyes only to stay on the phone only. Instead, even for data meant for the user’s own eyes only, there is a need to have the data show up on multiple devices. I very much doubt that any underdog effort has the muscle to develop a non-Web decentralized network application platform that allows users to interact with their data from all the devices that they want to use to interact with their data. (That is, I wouldn’t bet on e.g. Indienet, which is going to launch “with a limited release on OS X Yosemite”.)

I think the answer isn’t fighting the Web Platform but using the only platform that already has clients for all the devices that users want to use, in addition to their phone, to interact with their data: the Web Platform. To use the Web Platform as the application platform such that multiple devices can access the apps but also such that users have Software Freedom, the users need to host the Web apps themselves. Currently, this is way too difficult. Hosting Web apps at home needs to become at least as easy as maintaining a desktop computer at home, preferably easier.

For this to happen, we need:

  • Small home server hardware that is powerful enough to host Web apps for family, that consumes negligible energy (maybe in part by taking the place of the home router that people have always-on consuming electricity today), that is silent and that can boot a vanilla kernel that gets security updates.
  • A Free operating system that runs in such hardware, makes it easy to install Web apps and makes it easy for the apps to become securely reachable over the network.
  • High-quality apps for such a platform.

(Having Software Freedom on the server doesn’t strictly require the server to be placed in your home, but if that’s not a realistic option, there’s clearly a practical freedom deficit even if not under the definition of Free Software. Also, many times the interest in Software Freedom in this area is motivated by data privacy reasons and in the case of Web apps, the server of the apps can see the private data. For these reasons, it makes sense to consider home-hostability.)

Hardware

In this case, the hardware and driver side seems like the smallest problem. At least if you ignore the massive and creepy non-Free firmware, the price of the hardware and don’t try to minimize energy consumption particularly aggressively, suitable x86/x86_64 hardware already exists e.g. from CompuLab. To get the price and energy consumption minimized, it seems that ARM-based solutions would be better, but the situation with 32-bit ARM boards requiring per-board kernel builds and most often proprietary blobs that don’t get updated makes the 32-bit ARM situation so bad that it doesn’t make sense to use 32-bit ARM hardware for this. (At FOSDEM 2013, it sounded like a lot of the time of the FreedomBox project has been sucked into dealing with the badness of the Linux on 32-bit ARM situation.) It remains to be seen whether x86/x86_64 SoCs that boot with generic kernels reach ARM-style price and energy consumption levels first or whether the ARM side gets their generic kernel bootability and Free driver act together (including shipping) with 64-bit ARM first. Either way, the hardware side is getting better.

Apps

As for the apps, PHP-based apps that are supposed to be easy-ish to deploy, as long as you have an Apache plus PHP server from a service provider, are plentiful, but e.g. Roundcube is no match for Gmail in terms of user experience, and even though it’s theoretically possible to write quality software in PHP, the execution paradigm and the culture of PHP don’t really guide things in that direction.

Instead of relying on the PHP-based apps that are out there and that are woefully uncompetitive with the centralized proprietary offerings, there is a need for better apps written on better foundations (e.g. Python and Node.js). As an example, Mailpile (Python on the server) looks very promising in terms of Gmail-competitive usability aspirations. Unfortunately, as of December 2014, it’s not ready for use yet. (I tried and, yes, filed bugs.) Ethercalc and Etherpad (Node.js on the server) are other important apps.

With apps, the question doesn’t seem to be whether people know how to write them. The question seems to be how to fund the development of the apps so that the people who know how to write them can devote a lot of time to these projects. I, for one, hope that e.g. Mailpile’s user-funded development is sustainable, but it remains to be seen. (Yes, I donated.)

Putting the Apps Together

A crucial missing piece is a system that can be trivially installed on suitable hardware (or, perhaps in the future, comes pre-installed on suitable hardware), that lets users get started without exercising their freedom to modify the software, that provides the freedom to install modified apps if the user so chooses, and, perhaps most importantly, that makes the networking part very easy.

There are a number of projects that try to aggregate self-hostable apps into a (supposedly, at least) easy-to-install-and-manage system. However, they tend to be of the PHP flavor, which I think fundamentally disadvantages them in terms of becoming competitive with proprietary centralized Web apps. The most promising project in the space that makes the better apps (Python- and Node.js-based, among others) installable with ease seems to be Sandstorm.io, which unfortunately, like Mailpile, doesn’t seem quite ready yet. (Also in common with Mailpile: a key developer is an ex-Googler. It looks like people who’ve worked there know what it takes to compete with GApps…)

Looking at Sandstorm.io is instructive in terms of seeing what’s hard about putting it all together. On the server, Sandstorm.io runs each Web app in a Linux container that’s walled off from the other apps. All the requests go through a reverse proxy that also provides additional browser-side UI for switching between the apps. Instead of exposing the usual URL structure of each app, Sandstorm.io exposes “grain” URLs, which are unintelligible random-looking character sequences. This design isn’t without problems.
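
To make the grain-URL design concrete, here is a toy Python sketch of the routing idea: a front-end process maps opaque grain identifiers to per-app backends behind a single hostname. This is only an illustration of the concept, not Sandstorm.io’s actual code; the grain registry and port numbers are made up.

    import secrets
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import urlopen

    # Hypothetical registry: opaque grain ID -> localhost port of the
    # container running that app instance.
    GRAINS = {
        "FcTdrgjttPbhAzzKSv6ESD": 8001,  # say, an Etherpad document
        "o96ouPLKQMEMZkFxNKf2Dr": 8002,  # say, an Ethercalc spreadsheet
    }

    def new_grain_id():
        # Unguessable identifier, similar in spirit to Sandstorm.io's URLs.
        return secrets.token_urlsafe(16)

    class GrainRouter(BaseHTTPRequestHandler):
        def do_GET(self):
            # Expect paths of the form /grain/<id>/<rest-of-path>.
            parts = self.path.split("/", 3)
            if len(parts) >= 3 and parts[1] == "grain" and parts[2] in GRAINS:
                rest = "/" + (parts[3] if len(parts) > 3 else "")
                url = "http://127.0.0.1:%d%s" % (GRAINS[parts[2]], rest)
                with urlopen(url) as upstream:
                    body = upstream.read()
                self.send_response(200)
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8000), GrainRouter).serve_forever()

Note how only the router knows which app a grain belongs to; the browser sees a single origin for everything, which is exactly the problem discussed under Networking below.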

The first problem is that the apps you want to run, like Mailpile, Etherpad and Ethercalc, have been developed to be deployed on a vanilla Linux server using application-specific manual steps that put hosting these apps on a server out of the reach of normal users. (Mailpile is designed to be run on localhost by normal users, but that doesn’t make it reachable from multiple devices, which is what you want from a Web app.) This means that each app needs to be ported to Sandstorm.io. This in turn means that, compared to getting the app from upstream, you get stale software, because except for Ethercalc, the maintainer of the Sandstorm.io port isn’t the upstream developer of the app. In fairness, though, the software doesn’t seem to be as stale as it would be if you installed a package from Debian Stable… Also, as the platform and the apps mature, app developers may start publishing for Sandstorm.io directly, and with more mature apps it’s less necessary to have the latest version (except for security fixes).

Unlike in the case of getting a Web app as a Debian package, the URL structure and, it appears, in some cases the storage structure of a Sandstorm.io port of an app differ from those of the vanilla upstream version. Therefore, even though avoiding lock-in is one of the things the user is supposed to accomplish by using Sandstorm.io, it’s non-trivial to migrate between the Sandstorm.io version and a non-Sandstorm.io version of a given app. It particularly bothers me that Sandstorm.io completely hides the original URL structure of the app.

Networking

And that leads to the last issue of self-hosting with the ease of just plugging a box into home Ethernet: Web security and Web addressing are rather unfriendly to easy self-hosting.

First of all, there is the problem of getting basic incoming IPv4 connectivity to work. After all, you must be able to reach port 443 (https) of your self-hosting box from all your devices, including reaching the box on your wired home Internet connection from the mobile connection of your phone. Maybe your own router imposes a NAT between your server and the Internet, so you’d need to set up port forwarding, which makes things significantly harder than just instructing people to plug stuff in. This might be partially alleviated by making the self-hosting box contain NAT functionality itself, so that it could take the place of the NATting home router, but even then you might have to configure something like a cable modem into bridging mode. Worse, you might be dealing with an ISP who doesn’t actually sell you neutral end-to-end Internet routing and blocks incoming traffic to port 443 (or detects incoming traffic to port 443 and complains about you running a server, even if it’s for your personal use and you therefore aren’t violating any clause that prohibits using a home connection to offer a service to the public).

One way to solve this would be standardizing a simple service where a service provider takes your credit card number and an ssh public key and gives you an IP address. The self-hosting system you run at home would then have a configuration interface that gives you an ssh public key and takes an IP address. The self-hosting box would establish an ssh reverse tunnel to the IP address with 443 as the local target port, and the service provider would send port 443 of the IP address to this tunnel. You’d still own your data and your server, and you’d terminate TLS on your server even though you’d rent an IP address from a data center.
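
As a rough illustration, the home box’s side of such a tunnel could be as simple as keeping one ssh process alive. This is a sketch under the assumption that such a rented-IP service existed; the address, account name and key path are hypothetical placeholders, and the provider’s sshd would have to be configured to accept the remote forwarding of port 443 and expose it publicly.

    import subprocess

    RENTED_IP = "203.0.113.10"                  # IP address rented from the provider
    TUNNEL_USER = "tunnel"                      # account the provider created for your key
    IDENTITY_FILE = "/etc/selfhost/tunnel_key"  # private half of the uploaded ssh key

    def open_reverse_tunnel():
        # -N: no remote command, just forwarding.
        # -R 443:localhost:443: traffic arriving at port 443 of the rented IP
        # is carried through the tunnel to port 443 on this box, where TLS is
        # terminated locally, so the provider never sees plaintext or holds keys.
        return subprocess.Popen([
            "ssh", "-N",
            "-i", IDENTITY_FILE,
            "-o", "ExitOnForwardFailure=yes",
            "-o", "ServerAliveInterval=30",
            "-R", "443:localhost:443",
            "%s@%s" % (TUNNEL_USER, RENTED_IP),
        ])

    if __name__ == "__main__":
        open_reverse_tunnel().wait()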

(There are efforts to solve this by giving user-hosted devices host names under the domain of a service that handles the naming, such as OPI giving each user a hostname under the op-i.me domain, but then the naming service, not the user, is presumptively the one eligible to get the necessary certificates signed, and delegating away the control of the crypto defeats an important aspect of self-hosting. As a side note, one of the reasons I migrated from hsivonen.iki.fi to hsivonen.fi was that even though I was able to get the board of IKI to submit iki.fi to the Public Suffix List, CAs still seem to think that IKI, not me, is the party eligible for getting certificates signed for hsivonen.iki.fi.)

But even if you solved IPv4-level reachability of the home server from the public Internet as a turn-key service, there are still more hurdles in the way of making this easy. Next, instead of having to use an IP address, the user should be able to use a memorable name. So you need to tell the user to register a domain name, get DNS hosting and point an A record at the IP address. And then you need a certificate for the name you chose for the A record, which at the moment (before Let’s Encrypt is operational) is yet another hurdle that makes things too hard.

And that brings us back to Sandstorm.io obscuring the URLs. Rather paradoxically, even though Sandstorm.io is really serious about isolating apps from each other on the server, it gives up the browser-side isolation of the apps that you’d get with a typical deployment of the upstream apps. The only true way to have browser-enforced privilege separation of the client-side JavaScript parts of the apps is for different apps to have different Origins. An Origin is a triple of URL scheme, host name and port. For the apps not to be ridiculously insecure, the scheme has to be https. This means that you have to give each app either a distinct port number or a distinct host name. On the surface, it seems that it would be easy to mint port numbers, but users are not used to typing URLs with non-default port numbers, and if you depend on port forwarding in a NATting home router or port forwarding through an ssh reverse tunnel, minting port numbers on demand isn’t that convenient anymore.
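
To make the Origin point concrete, here is a minimal Python illustration of the (scheme, host, port) comparison that browsers perform; the hostnames are just examples:

    from urllib.parse import urlsplit

    DEFAULT_PORTS = {"http": 80, "https": 443}

    def origin(url):
        # The Origin triple that browsers use for privilege separation.
        parts = urlsplit(url)
        return (parts.scheme, parts.hostname,
                parts.port or DEFAULT_PORTS.get(parts.scheme))

    # Distinct hostnames yield distinct Origins, so the browser walls the apps off:
    assert origin("https://etherpad.example.org/doc") != origin("https://ethercalc.example.org/doc")

    # One hostname with obscure paths yields a single shared Origin:
    assert origin("https://example.org/grain/abc") == origin("https://example.org/grain/xyz")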

So you really want a distinct host name for each app, so that each app gets a distinct Origin and browser-enforced privilege separation of client-side JavaScript. But the idea was that you could install new apps easily. This means that you have to be able to generate a new working host name at the time of app installation. So unless you have a programmatic way to configure DNS on the fly and have certificates minted on the fly, neither of which you can currently realistically have for a home server, you need a wildcard in the DNS zone and you need a wildcard TLS certificate. Sandstorm.io instead uses one hostname and obscure URLs, which is understandable. Despite being understandable, it is sad, since it loses both the human-facing semantics of the URLs and browser-enforced privilege separation between the apps. (Instead of https://etherpad.example.org/example-document-title and https://ethercalc.example.org/example-spreadsheet-title, you get https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD and https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr.) Fortunately, Let’s Encrypt seems to be on track to solve the certificate side of this problem by making it easy to get a certificate for a newly-minted hostname signed automatically. Even so, the DNS part needs to be made easy enough that it doesn’t remain a blocker for self-hosting a box that allows on-demand Web app installation with browser-side app privilege separation.
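
With a wildcard DNS record and a wildcard certificate in place, minting a distinct Origin at app installation time becomes a purely local operation. Here is a sketch, assuming *.apps.example.org already resolves to the box and is covered by the wildcard certificate; the zone name and registry file are made up:

    import json
    import re

    ZONE = "apps.example.org"         # covered by *.apps.example.org and its wildcard cert
    REGISTRY = "installed-apps.json"  # where the reverse proxy looks up its routes

    def hostname_for(app_name):
        # Normalize the app name into a DNS-safe label.
        label = re.sub(r"[^a-z0-9-]+", "-", app_name.lower()).strip("-")
        return "%s.%s" % (label, ZONE)

    def install(app_name):
        host = hostname_for(app_name)
        try:
            with open(REGISTRY) as f:
                apps = json.load(f)
        except FileNotFoundError:
            apps = {}
        apps[app_name] = {"host": host}
        with open(REGISTRY, "w") as f:
            json.dump(apps, f, indent=2)
        return host

    print(install("Etherpad"))   # etherpad.apps.example.org
    print(install("Ethercalc"))  # ethercalc.apps.example.org

No DNS change is needed per install, because the wildcard already matches every label; the hard part that remains is getting the wildcard record and certificate set up in the first place.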

Conclusion

There are lots of subproblems to work on, but, fortunately, none of them seems fundamentally impossible. Interestingly, the software that resides on the phone may be the relatively easy part to solve. That is not to say that it is easy to solve, but once solved, it can scale to a lot of users without the users having to do anything special to get started in the role of a user who does not exercise the freedom to modify the system. However, since users these days are not satisfied by merely device-resident software but want things to work across multiple devices, the server-side part is relevant and harder to scale. Somewhat paradoxically, the hardest thing to scale in a usable way seems like a triviality on the surface: addressing the server-side part in a way that gives sovereignty to users.

Categorieën: Mozilla-nl planet

Mozilla Firefox 35 Is Now Operational - Guardian Liberty Voice

Nieuws verzameld via Google - sn, 17/01/2015 - 20:29

Two days ago, Mozilla officially launched the newest version of their browser software. Mozilla Firefox 35 is now operational and open to everyone. Other than the standard minor updates between old and new versions, Firefox 35 now supports MP4 videos ...

Categorieën: Mozilla-nl planet

Marco Zehe: Blog change: Now using encrypted connections

Mozilla planet - sn, 17/01/2015 - 10:11

This is just a quick note to let you all know that this blog has switched over to using encrypted connections. The URLs (web site addresses) now redirect to their encrypted counterparts, starting with https instead of http. Links to posts you may have bookmarked will be redirected automatically, too, so you don’t need to do anything, and permalinks will still work.
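
(For the curious, the redirect mechanism is simple. The Python sketch below only illustrates the idea; a blog would normally do this in the web server or blog software configuration rather than in a standalone handler.)

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            host = self.headers.get("Host", "example.org")
            # 301 = permanent redirect, so bookmarks and permalinks keep working.
            self.send_response(301)
            self.send_header("Location", "https://%s%s" % (host, self.path))
            self.end_headers()

    if __name__ == "__main__":
        # Real deployments listen on port 80; 8080 here avoids needing privileges.
        HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()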

For you, this means two main things:

First, you can check in your browser’s address bar that this is indeed my blog you’re on, and not some fraudulent site that may have copied my content.

Second, when you comment, the data you send to my blog is now encrypted in transport, so your e-mail address, which you may not want everybody to see, is no longer readable by anyone listening in on the sidelines of the internet.

This is my contribution to making encrypted communication over the internet the norm rather than the exception. The more people do it, the less likely it is that anyone becomes a suspect to some security agency just because they use encryption.

Please let me know should you run into any problems!

Categorieën: Mozilla-nl planet
