The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Mark Côté: BMO 2014 Statistics

Mon, 19/01/2015 - 02:09

Everyone loves statistics! Right? Right? Hello?

tap tap

feedback screech

Well anyway, here are some numbers from BMO in 2014:

BMO Usage:

33 243 new users registered
45 628 users logged in
23 063 users performed an action
160 586 new bugs filed
138 127 bugs resolved
100 194 patches attached

BMO Development:

1 325 bugs filed
1 214 bugs resolved

Conclusion: there are a lot of dedicated Mozillians out there!

Categories: Mozilla-nl planet

Chris Ilias: My Installed Add-ons – Clippings

Sun, 18/01/2015 - 22:06

I love finding new extensions that do things I never even thought to search for. One of the best ways to find them is through word of mouth. In this case, I guess you can call it “word of blog”. I’m doing a series of blog posts about the extensions I use, and maybe you’ll see one that you want to use.

The first one is Context Search, which I’ve already blogged about.

The second is Clippings. Clippings allows you to keep pieces of text to paste on demand. If you frequently answer email messages with one of a set of replies, you can paste the reply you want using the context menu. In my case, I take part in support forums, which means I respond to frequently asked questions, typing the same answers over and over. Clippings allows me to have canned responses, so I can answer more support questions in less time.

To save a piece of text as a clipping, select it, then right-click and go to “Clippings”, then “New from Selection”. You’ll then be asked to name the clipping and choose where to save it among your list of clippings. It supports folders too.

When you want to use that clipping, just right-click on the text area, then go to “Clippings” and select the clipping you want to paste.

Clippings is also very useful in Mozilla Thunderbird.

You can install it via the Mozilla Add-ons site.

Categories: Mozilla-nl planet

Tess John: Data Migration in Django

Sun, 18/01/2015 - 19:24

Changing the database schema is one side of the equation, but often a migration involves changing data as well.

Consider this case:

Bug 1096431 - Able to create tasks with duplicate names. Task names should be unique.

from django.db import models

class Task(models.Model):
    name = models.CharField(max_length=255, verbose_name='title')
    start_date = models.DateTimeField(blank=True, null=True)
Solution

Schema migration, of course, i.e. add unique=True to the field. But if you apply this migration directly to production it will cause an IntegrityError, because you are making a database column unique while its contents are not unique. To solve this you first need to find all tasks with duplicate names and rename them to something unique. That is a data migration rather than a schema migration. Thanks to Giorgos Logiotatidis for the guidance.

Step 1: Create a few tasks with the same name, say ‘task’.

Step 2: This is the data migration part:

python manage.py datamigration Tasks _make_taskname_unique.py

Step 3: When you open the newly created file you will see the skeleton of the forwards and backwards functions. I wrote code to rename the duplicate task names; a sketch follows below.
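As a rough illustration, here is a minimal sketch of what such a forwards function might look like with South’s DataMigration skeleton. The app label, the orm accessor and the “task 2”, “task 3” renaming scheme are assumptions based on this post, not the actual patch:

from south.v2 import DataMigration

class Migration(DataMigration):

    def forwards(self, orm):
        # Count occurrences of each name and rename every duplicate.
        seen = {}
        for task in orm['tasks.Task'].objects.order_by('pk'):
            count = seen.get(task.name, 0) + 1
            seen[task.name] = count
            if count > 1:
                # The second 'task' becomes 'task 2', the third 'task 3', etc.
                task.name = '%s %d' % (task.name, count)
                task.save()

    def backwards(self, orm):
        # The original duplicate names cannot be restored.
        raise RuntimeError('Cannot reverse this migration.')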

Step 4: Now add the unique keyword to the field and apply a schemamigration; the field then looks like the sketch below.
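For reference, a sketch of the field from the model above with the constraint added:

# With the duplicates renamed, the unique constraint can be added safely.
name = models.CharField(max_length=255, verbose_name='title', unique=True)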

Step 5: Finally, migrate the Tasks model. Now you can see the duplicate tasks get renamed to task 2, task 3, etc.


Categories: Mozilla-nl planet

K Lars Lohn: The Smoothest Migration

Sun, 18/01/2015 - 14:09
I must say that it was the smoothest migration that I have ever witnessed. The Socorro system data has left our data center and taken up residence at Amazon.

Since 2010, HBase has been our primary storage for Firefox crash data.  Spread across something like 70 machines, we maintained a constant cache of at least six months of crash data.  It was never a pain-free system.  Thrift, the system through which Socorro communicated with HBase, seemed to develop a dislike for us from the beginning.  We fought it and it fought back.

Through the adversity that embodied our relationship with Thrift/HBase, Socorro evolved fault tolerance and self-healing.  All connections to external resources in Socorro are wrapped with our TransactionExecutor.  It's a class that recognizes certain types of failures and executes a backing-off retry when a connection fails.  It's quite generic, as it wraps our connections to HBase, PostgreSQL, RabbitMQ, ElasticSearch and now AmazonEC2.  It ensures that if an external resource fails with a temporary problem, Socorro doesn't fail, too.
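As a rough illustration of that pattern (the class name comes from this post, but the parameters and retry policy below are my own simplifying assumptions, not Socorro's actual implementation):

import time

class TransactionExecutor:
    """Retry an operation with a backing-off delay when a resource
    fails with a temporary, recognized error. A simplified sketch."""

    def __init__(self, retryable_exceptions, backoff_delays=(10, 30, 60, 120)):
        self.retryable_exceptions = retryable_exceptions
        self.backoff_delays = backoff_delays

    def __call__(self, operation, *args, **kwargs):
        for delay in self.backoff_delays:
            try:
                return operation(*args, **kwargs)
            except self.retryable_exceptions:
                # Temporary failure: back off, then probe again.
                time.sleep(delay)
        # One final attempt; a persistent failure propagates to the caller.
        return operation(*args, **kwargs)

# Hypothetical use: wrap a save against a flaky backend.
# executor = TransactionExecutor(retryable_exceptions=(ConnectionError,))
# executor(storage.save_raw_crash, raw_crash, dumps, crash_id)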

Periodically, HBase would become unavailable. The Socorro system, detecting the problem, would back down, biding its time while waiting for the failed resource to recover.  Eventually, after probing the failed resource, Socorro would detect recovery and pick up where it left off.

Over the years, we realized that one of the major features that originally attracted us to HBase was not giving us the payoff that we had hoped.  We just weren't using the MapReduce capabilities and found the HBase maintenance costs were not worth the expense.

Thus came the decision that we were to migrate away.  Initially, we considered moving to Ceph and began a Ceph implementation of what we call our CrashStorage API.

Every external resource in Socorro lives encapsulated in a class that implements the CrashStorage API.  Using the Python package Configman, crash storage classes can be loaded at run time, giving us a plugin interface.  Ceph turned out to be a bust when the winds of change directed us to move to AmazonS3, but because we had implemented the CrashStorage API using the Boto library, we were able to reuse the code.
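Conceptually the plugin interface looks something like the following; the method names are illustrative guesses at the shape of the API, not its exact definition:

class CrashStorageBase:
    """The common interface every storage backend implements (sketch)."""

    def save_raw_crash(self, raw_crash, dumps, crash_id):
        raise NotImplementedError

    def get_raw_crash(self, crash_id):
        raise NotImplementedError

class BotoS3CrashStorage(CrashStorageBase):
    """Same interface, different backend. Because Configman loads storage
    classes by name from the configuration, swapping HBase for AmazonS3
    is a configuration change rather than a code change."""

    def save_raw_crash(self, raw_crash, dumps, crash_id):
        ...  # write the crash to S3 via the Boto library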

Then began the migration.  Rather than just flipping a switch, our migration was gradual.  We started 2014 with HBase as primary storage:


Then, in December, we started running HBase and AmazonS3 together.   We added the new AmazonS3 CrashStorage classes to the Configman-managed Socorro INI files.  While we likely restarted the Socorro services, we could have just sent SIGHUP, prompting them to reread their config files, load the new CrashStorage modules and continue running as if nothing had happened.



After most of a month, and after completing a migration of old data from HBase to Amazon, we were ready to cut HBase loose.

I was amused by the non-event of the severing of Thrift from Socorro.  Again, it was a matter of editing HBase out of the configuration and sending a SIGHUP, causing HBase to fall silent. Socorro didn't care.  Announced several hours later on the Socorro mailing list, it seemed more like a footnote than an announcement: "oh, by the way, HBase is gone".



Oh, the migration wasn't completely perfect; there were some glitches.  Most of those came from minor cron jobs that served special purposes and were inadvertently neglected.

The primary datastore migration is not the end of the road.  We still have to move the server processes themselves to Amazon.  Because everything is captured in the Socorro configuration, however, we do not anticipate that this will be an onerous process.

I am quite proud of the success of Socorro's modular design.  I think we programmers only ever really just shuffle complexity around from one place to another.  In my design of Socorro's crash storage system, I have swung a pendulum far to one side, moving the complexity into the configuration.  That has disadvantages.  However, in a system that has to rapidly evolve to changing demands and changing environments, we've just demonstrated a spectacular success.

Credit where credit is due: Rob Helmer spearheaded this migration as the DevOp lead. He pressed the buttons and reworked the configuration files.  Credit also goes to Selena Deckelmann, who led the way to Boto for Ceph, which gave us Boto for Amazon.  Her contribution in writing the Boto CrashStorage class was invaluable.  Me?  While I wrote most of the Boto CrashStorage class and I'm responsible for the overall design, I was able to mainly just be a witness to this migration.  Kind of like watching my children earn great success, I'm proud of the Socorro team and look forward to the next evolutionary steps for Socorro.
Categories: Mozilla-nl planet

Henri Sivonen: If You Want Software Freedom on Phones, You Should Work on Firefox OS, Custom Hardware and Web App Self-Hostability

Sun, 18/01/2015 - 13:55
TL;DR

To achieve full-stack Software Freedom on mobile phones, I think it makes sense to

  • Focus on Firefox OS, which is already Free Software above the driver layer, instead of trying to remove proprietary stuff from Android, whose functionality is increasingly moving into proprietary components such as Google Play Services.
  • Commission custom hardware whose components have been chosen such that the foremost goal is achieving Software Freedom on the driver layer.
  • Develop self-hostable Free Software Web apps for the on-phone software to connect to, and a system that makes installing them on a home server as easy as installing desktop or mobile apps, and connecting the home server to the Internet as easy as connecting a desktop.
Inspiration

Back in August, I listened to an episode of the Free as in Freedom oggcast that included a FOSDEM 2013 talk by Aaron Williamson titled “Why the free software phone doesn’t exist”. The talk actually didn’t include much discussion of the driver situation and instead devoted a lot of time to talking about services that phones connect to and the interaction of the DMCA with locked bootloaders.

Also, I stumbled upon the Indie Phone project. More on that later.

Software Above the Driver Layer: Firefox OS—Not Replicant

Looking at existing systems, it seems that software close to the hardware on mobile phones tends to be more proprietary than the rest of the operating system. Things like baseband software, GPU drivers, touch sensor drivers and drivers for hardware-accelerated video decoding (and video DRM) tend to be proprietary even when the Linux kernel is used and substantial parts of other system software are Free Software. Moreover, most of the mobile operating systems built on the Linux kernel are actually these days built on the Android flavor of the Linux kernel in order to be able to use drivers developed for Android. Therefore, the driver situation is the same for many of the different mobile operating systems. For these reasons, I think it makes sense to separate the discussion of Software Freedom on the driver layer (code closest to hardware) and the rest of the operating system.

Why Not Replicant?

For software above the driver layer, there seems to be something of a default assumption in the Free Software circles that Replicant is the answer for achieving Software Freedom on phones. This perception of mine probably comes from Replicant being the contender closest to the Free Software Foundation with the FSF having done fundraising and PR for Replicant.

I think betting on Replicant is not a good strategy for the Free Software community if the goal is to deliver Software Freedom on phones to many people (and, therefore, have more of a positive impact on society) instead of just making sure that a Free phone OS exists in a niche somewhere. (I acknowledge that hardline FSF types keep saying negative things about projects that e.g. choose permissive licenses in order to prioritize popularity over copyleft, but the “Free Software, Free Society” thing only works if many people actually run Free Software on the end-user devices, so in that sense, I think it makes sense to think of what has a chance to be run by many people instead of just the existence of a Free phone OS.)

Android is often called an Open Source system, but when someone buys a typical Android phone, they get a system with substantial proprietary parts. Initially, the main proprietary parts above the driver layer were the Google applications (Gmail, Maps, etc.) but the non-app, non-driver parts of the system were developed as Open Source / Free Software in the Android Open Source Project (AOSP). Over time, as Google has realized that OEMs don’t care to deliver updates for the base system, Google has moved more and more stuff to the proprietary Google application package. Some apps that were originally developed as part of AOSP no longer are. Also, Google has introduced Google Play Services, which is a set of proprietary APIs that keeps updating even when the base system doesn’t.

Replicant takes Android and omits the proprietary parts. This means that many of the applications that users expect to see on an Android phone aren’t actually part of Replicant. But more importantly, Replicant doesn’t provide the same APIs as a normal Android system does, because Google Play Services are missing. As more and more applications start relying on Google Play Services, Replicant and Android-as-usually-shipped diverge as development platforms. If Replicant was supposed to benefit from the network effects of being compatible with Android, these benefits will be realized less and less over time.

Also, Android isn’t developed in the open. The developers of Replicant don’t really get to contribute to the next version of AOSP. Instead, Google develops something and then periodically throws a bundle of code over the wall. Therefore, Replicant either has no say over how the platform evolves or has to diverge even more from Android.

Instead of the evolution of the platform being controlled behind closed doors and the Free Software community having to work with a subset of the mass-market version of the platform, I think it would be healthier to focus efforts on a platform that doesn’t require removing or replacing (non-driver) system components as the first step and whose development happens in public repositories where the Free Software community can contribute to the evolution of the platform.

What Else Is There?

Let’s look at the options. What at least somewhat-Free mobile operating systems are there?

First, there’s software from the OpenMoko era. However, the systems have no appeal to people who don’t care that much about the Free Software aspect. I think it would be strategically wise for the Free Software community to work on a system that has appeal beyond the Free Software community in order to be able to benefit from contributions and network effects beyond the core Free Software community.

Open webOS is not on an upwards trajectory (on phones, at least, despite there having been a watch announcement at CES). Tizen (on phones) has been delayed again and again and became available just a few days ago, so it’s not (at least quite yet) a system with demonstrated appeal (on phones) beyond the Free Software community, and it seems that Tizen has substantial non-Free parts. Jolla’s Sailfish OS is actually shipping on a real phone, but Jolla keeps some components proprietary, so the platform fails the criterion of not having to remove or replace (non-driver) system components as the first step (see Nemo). I don’t actually know if Ubuntu Touch has proprietary non-driver system components. However, it does appear to have central components to which you cannot contribute on an “inbound=outbound” licensing basis, because you have to sign a CLA that gives Canonical rights to your code beyond the Free Software license of the project as a condition of your patch getting accepted. In any case, Ubuntu Touch is not shipping yet on real phones, so it is not yet demonstrably a system that has appeal beyond the Free Software community.

Firefox OS, in contrast, is already shipping on multiple real phones (albeit maybe not in your country) demonstrating appeal beyond the Free Software community. Also, Mozilla’s leverage is the control of the trademark—not keeping some key Mozilla-developed code proprietary. The (non-trademark) licensing of the project works on the “inbound=outbound” basis. And, importantly, the development repositories are visible and open to contribution in real time as opposed to code getting thrown over the wall from time to time. Sure, there is code landing such that the motivation of the changes is confidential or obscured with codenames, but if you want to contribute based on your motivations, you can work on the same repositories that the developers who see the confidential requirements work on.

As far as I can tell, Firefox OS has the best combination of not being vaporware, having appeal beyond the Free Software community and being run closest to the manner a Free Software project is supposed to be run. So if you want to advance Software Freedom on mobile phones, I think it makes the most sense to put your effort into Firefox OS.

Software Freedom on the Driver Layer: Custom Hardware Needed

Replicant, Firefox OS, Ubuntu Touch, Sailfish OS and Open webOS all use an Android-flavored Linux kernel in order to be able to benefit from the driver availability for Android. Therefore, the considerations for achieving Software Freedom on the driver layer apply equally to all these systems. The foremost problems are controlling the various radios—the GSM/UMTS radio in particular—and the GPU.

If you consider the Firefox OS reference device for 2014 and 2015, Flame, you’ll notice that Mozilla doesn’t have the freedom to deliver updates to all software on the device. Firefox OS is split into three layers: Gonk, Gecko and Gaia. Gonk contains the kernel, drivers and low-level helper processes. Gecko is the browser engine and runs on top of Gonk. Gaia is the system UI and set of base apps running on top of Gecko. You can get Gecko and Gaia builds from Mozilla, but you have to get Gonk builds from the device vendor.

If Software Freedom extended to the whole stack—including drivers—Mozilla (or anyone else) could give you Gonk builds, too. That is, to get full-stack Software Freedom with Firefox OS, the challenge is to come up with hardware whose driver situation allows for a Free-as-in-Freedom Gonk.

As noted, Flame is not that kind of hardware. When this is lamented, it is typically pointed out that “not even the mighty Google” can get the vendors of all the hardware components going into the Nexus devices to provide Free Software drivers and, therefore, a Free Gonk is unrealistic at this point in time.

That observation is correct, but I think it lacks some subtlety. Both Flame and the Nexus devices are reference devices on which the software platform is developed with the assumption that the software platform will then be shipped on other devices that are sufficiently similar that the reference devices can indeed serve as reference. This means that the hardware on the reference devices needs to be reasonably close to the kind of hardware that is going to be available with mass-market price/performance/battery life/weight/size characteristics. Similarity to mass-market hardware trumps Free Software driver availability for these reference devices. (Disclaimer: I don’t participate in the specification of these reference devices, so this paragraph is my educated guess about what’s going on—not any sort of inside knowledge.)

I theorize that building a phone that puts the availability of Free Software drivers first is not impossible but would involve sacrificing on the current mass-market price/performance/battery life/weight/size characteristics and be different enough from the dominant mass-market designs not to make sense as a reference device. Let’s consider how one might go about designing such a phone.

In the radio case, there is proprietary software running on a baseband processor to control the GSM/UMTS radio, and some regulatory authorities, such as the FCC, require this software to be certified for regulatory purposes. As a result, the chances of gaining Software Freedom relative to this radio control software in the near term seem slim. From the privacy perspective, it is problematic that this mystery software can have DMA access to the memory of the application processor, i.e. the processor that runs the Linux kernel and the apps.

Technically, data transfer between the application processor and various radios does not need to be fast enough to require DMA access or other low-level coupling. Indeed, for desktop computers, you can get UMTS, Wi-Fi, Bluetooth and GPS radios as external USB devices. It should be possible to document the serial protocol these devices use over USB such that Free drivers can be written on the Linux side while the proprietary radio control software is embedded on the USB device.

This would solve the problem of kernel coupling with non-free drivers in a way that hinders the exercise of Software Freedom relative to the kernel. But wouldn’t the radio control software embedded on the USB device still be non-free? Well, yes it would, but in the current regulatory environment it’s unrealistic to fix that. Moreover, if the software on the USB devices is truly embedded to the point where no one can update it, the Free Software Foundation considers the bundle of hardware and un-updatable software running on the hardware as “hardware” as a whole for Software Freedom purposes. So even if you can’t get the freedom to modify the radio control software, if you make sure that no one can modify it and put it behind a well-defined serial interface, you can both solve the problem of non-free drivers holding back Software Freedom relative to the kernel and get the ideological blessing.

So I think the way to solve the radio side of the problem is to license circuit designs for UMTS, Wi-Fi, Bluetooth and GPS USB dongles and build those devices as hard-wired USB devices onto the main board of the phone inside the phone’s enclosure. (Building hard-wired USB devices into the device enclosure is a common practice in the case of laptops.) This would likely result in something more expensive, more battery draining, heavier and larger than the usual more integrated designs. How much more expensive, heavier, etc.? I don’t know. I hope within bounds that would be acceptable for people willing to pay some extra and accept some extra weight and somewhat worse battery life and performance in order to get Software Freedom.

As for the GPU, there are a couple of Free drivers: there’s Freedreno for Adreno GPUs, and there is the Lima driver for Mali-200 and Mali-400, but a Replicant developer says the latter is not good enough yet. Intel has Free drivers for their desktop GPUs and Intel is trying to compete in the mobile space, so, who knows, maybe in the reasonably near future Intel manages to integrate a GPU design of their own (with a Free driver) with one of their mobile CPUs.

The current Replicant way to address the GPU driver situation is not to have hardware-accelerated OpenGL ES. I think that’s just not going to be good enough. For Firefox OS (or Ubuntu Touch or Sailfish OS or a more recent version of Android) to work reasonably, you have to have hardware-accelerated OpenGL ES. So I think the hardware design of a Free Software phone needs to grow around a mobile GPU that has a Free driver. Maybe that means using a non-phone (to put radios behind USB) QUALCOMM SoC with Adreno. Maybe that means pushing Lima to good enough a state and then licensing Mali-200 or Mali-400. Maybe that means using x86 and waiting for Intel to come up with a mobile GPU. But it seems clear that the GPU is the big constraint and the CPU choice will have to follow from the GPU solution.

For the encumbered codecs that everyone unfortunately needs to have in practice, it would be best to have true hardware implementations that are so complete that the drivers wouldn’t contain parts of the codec but would just push bits to the hardware. This way, the encumbrance would be limited to the hardware. (Aside: Similarly, it would be possible to design a hardware CDM for EME. In that case, you could have DRM without it being a Software Freedom problem.)

So I think that in order to achieve Software Freedom on the driver layer, it is necessary to commission hardware that fits Free Software instead of trying to just write software that fits the hardware that’s out there. This is significantly different from how software freedom has been achieved on desktop. Also, the notion of making a big upfront capital investment in order to achieve Software Freedom is rather different from the notion that you only need capital for a PC and then skill and time.

I think it could be possible to raise the necessary capital through crowdfunding. (Purism is trying it with the Librem laptop, but, unfortunately, the rate of donations looks bad as of the start of January 2015.) I’m not going to try to organize anything like that myself; I’m just theorizing. However, it seems that developing a phone by crowdfunding in order to get characteristics that the market isn’t delivering is something that is being attempted. The Indie Phone project expresses intent to crowdfund the development of a phone designed to allow users to own their own data. Which brings us to the topic of the services that the phone connects to.

Freedom on the Service Side: Easy Self-Hostability Needed

Unfortunately, Indie Phone is not about building hardware to run Firefox OS. The project’s Web site talks about an Indie OS but intentionally tries to make the OS seem uninteresting and doesn’t explain what existing software the small team is intending to build upon. (It seems implausible that such a small team could develop an operating system from scratch.) Also, the hardware intentions are vague. The site doesn’t explain if the project is serious about isolating the baseband processor from the application processor out of privacy concerns, for example. But enough about the vagueness of what the project is going to do. Let’s look at the reasons the FAQ gave against Firefox OS (linking to version control, since the FAQ appears to have been removed from the site between the time I started writing this post and the time I got around to publishing):

“As an operating system that runs web applications but without any applications of its own, Firefox OS actually incentivises the use of closed silos like Google. If your platform can only run web apps and the best web apps in town are made by closed silos like Google, your users are going to end up using those apps and their data will end up in these closed silos.”

The FAQ then goes on to express angst about Mozilla’s relationship with Google (the Indie Phone FAQ was published before Mozilla’s search deal with Yahoo! was announced) and Telefónica and to talk about how Mozilla doesn’t control the hardware but Indie will.

I think there is truth to Web technology naturally having the effect of users gravitating towards whatever centralized service provides the best user experience. However, I think the answer is not to shun Firefox OS but to make de-centralized services easy to self-host and use with Firefox OS.

In particular, it doesn’t seem realistic that anyone would ship a smart phone without a Web browser. In that sense, any smartphone is susceptible to the lure of centralized Web-based services. On the other hand, Google Play and the iOS App Store contain plenty of applications whose user interface is not based on HTML, CSS and JavaScript but still those applications put the users’ data into centralized services. On the flip side, it’s not actually true that Firefox OS only runs Web apps hosted on a central server somewhere. Firefox OS allows you to use HTML, CSS and JavaScript to build apps that are distributed as a zip file and run entirely on the phone without a server component.

But the thing is that, these days, people don’t want even notes or calendar entries that are intended for their own eyes only to stay on the phone only. Instead, even for data meant for the user’s own eyes only, there is a need to have the data show up on multiple devices. I very much doubt that any underdog effort has the muscle to develop a non-Web decentralized network application platform that allows users to interact with their data from all the devices that they want to use to interact with their data. (That is, I wouldn’t bet on e.g. Indienet, which is going to launch “with a limited release on OS X Yosemite”.)

I think the answer isn’t fighting the Web Platform but using the only platform that already has clients for all the devices that users want to use, in addition to their phone, to interact with their data: the Web Platform. To use the Web Platform as the application platform such that multiple devices can access the apps but also such that users have Software Freedom, the users need to host the Web apps themselves. Currently, this is way too difficult. Hosting Web apps at home needs to become at least as easy as maintaining a desktop computer at home, preferably easier.

For this to happen, we need:

  • Small home server hardware that is powerful enough to host Web apps for family, that consumes negligible energy (maybe in part by taking the place of the home router that people have always-on consuming electricity today), that is silent and that can boot a vanilla kernel that gets security updates.
  • A Free operating system that runs in such hardware, makes it easy to install Web apps and makes it easy for the apps to become securely reachable over the network.
  • High-quality apps for such a platform.

(Having Software Freedom on the server doesn’t strictly require the server to be placed in your home, but if that’s not a realistic option, there’s clearly a practical freedom deficit even if not under the definition of Free Software. Also, many times the interest in Software Freedom in this area is motivated by data privacy reasons and in the case of Web apps, the server of the apps can see the private data. For these reasons, it makes sense to consider home-hostability.)

Hardware

In this case, the hardware and driver side seems like the smallest problem. At least if you ignore the massive and creepy non-Free firmware and the price of the hardware, and don’t try to minimize energy consumption particularly aggressively, suitable x86/x86_64 hardware already exists, e.g. from CompuLab. To get the price and energy consumption minimized, it seems that ARM-based solutions would be better, but the situation with 32-bit ARM boards requiring per-board kernel builds and, most often, proprietary blobs that don’t get updated makes the 32-bit ARM situation so bad that it doesn’t make sense to use 32-bit ARM hardware for this. (At FOSDEM 2013, it sounded like a lot of the time of the FreedomBox project had been sucked into dealing with the badness of the Linux on 32-bit ARM situation.) It remains to be seen whether x86/x86_64 SoCs that boot with generic kernels reach ARM-style price and energy consumption levels first or whether the ARM side gets their generic kernel bootability and Free driver act together (including shipping) with 64-bit ARM first. Either way, the hardware side is getting better.

Apps

As for the apps, PHP-based apps that are supposed to be easy-ish to deploy as long as you have an Apache plus PHP server from a service provider are plentiful, but e.g. Roundcube is no match for Gmail in terms of user experience, and even though it’s theoretically possible to write quality software in PHP, the execution paradigm of PHP and the culture of PHP don’t really guide things in that direction.

Instead of relying on the PHP-based apps that are out there and that are woefully uncompetitive with the centralized proprietary offerings, there is a need for better apps written on better foundations (e.g. Python and Node.js). As an example, Mailpile (Python on the server) looks very promising in terms of Gmail-competitive usability aspirations. Unfortunately, as of December 2014, it’s not ready for use yet. (I tried and, yes, filed bugs.) Ethercalc and Etherpad (Node.js on the server) are other important apps.

With apps, the question doesn’t seem to be whether people know how to write them. The question seems to be how to fund the development of the apps so that the people who know how to write them can devote a lot of time to these projects. I, for one, hope that e.g. Mailpile’s user-funded development is sustainable, but it remains to be seen. (Yes, I donated.)

Putting the Apps Together

A crucial missing piece is a system that can be trivially installed on suitable hardware (or, perhaps in the future, can be pre-installed on suitable hardware), that lets users get started without exercising their freedom to modify the software but provides the freedom to install modified apps if the user so chooses, and, perhaps most importantly, makes the networking part very easy.

There are a number of projects that try to aggregate self-hostable apps into a (supposedly at least) easy to install and manage system. However, it seems to me that they tend to be of the PHP flavor, which I think fundamentally disadvantages them in terms of becoming competitive with proprietary centralized Web apps. I think the most promising project in the space that deals with making the better (Python and Node.js-based among others) apps installable with ease is Sandstorm.io, which unfortunately, like Mailpile, doesn’t seem quite ready yet. (Also, in common with Mailpile: a key developer is an ex-Googler. Looks like people who’ve worked there know what it takes to compete with GApps…)

Looking at Sandstorm.io is instructive in terms of seeing what’s hard about putting it all together. On the server, Sandstorm.io runs each Web app in a Linux container that’s walled off from the other apps. All the requests go through a reverse proxy that also provides additional browser-side UI for switching between the apps. Instead of exposing the usual URL structure of each app, Sandstorm.io exposes “grain” URLs, which are unintelligible random-looking character sequences. This design isn’t without problems.

The first problem is that the apps you want to run, like Mailpile, Etherpad and Ethercalc, have been developed to be deployed on a vanilla Linux server using application-specific manual steps that put hosting these apps on a server out of the reach of normal users. (Mailpile is designed to be run on localhost by normal users, but that doesn’t make it reachable from multiple devices, which is what you want from a Web app.) This means that each app needs to be ported to Sandstorm.io. This in turn means that compared to going to upstream, you get stale software, because except for Ethercalc, the maintainer of the Sandstorm.io port isn’t the upstream developer of the app. In fairness, though, the software doesn’t seem to be as stale as it would be if you installed a package from Debian Stable… Also, as the platform and the apps mature, it’s possible that app developers will start to publish for Sandstorm.io directly, and with more mature apps it’s less necessary to have the latest version (except for security fixes).

Unlike in the case of getting a Web app as a Debian package, the URL structure and, it appears, in some cases the storage structure are different in a Sandstorm.io port of an app and in a vanilla upstream version of the app. Therefore, even though avoiding lock-in is one of the things the user is supposed to be able to accomplish by using Sandstorm.io, it’s non-trivial to migrate between the Sandstorm.io version and a non-Sandstorm.io version of a given app. It particularly bothers me that Sandstorm.io completely hides the original URL structure of the app.

Networking

And that leads to the last issue of self-hosting with the ease of just plugging a box into home Ethernet: Web security and Web addressing are rather unfriendly to easy self-hosting.

First of all, there is the problem of getting basic incoming IPv4 connectivity to work. After all, you must be able to reach port 443 (https) of your self-hosting box from all your devices, including reaching the box that’s on your wired home Internet connection from the mobile connection of your phone. Maybe your own router imposes a NAT between your server and the Internet and you’d need to set up port forwarding, which makes things significantly harder than just instructing people to plug stuff in. This might be partially alleviated by making the self-hosting box contain NAT functionality itself so that it could take the place of the NATting home router, but even then you might have to configure something like a cable modem to a bridging mode or, worse, you might be dealing with an ISP who doesn’t actually sell you neutral end-to-end Internet routing and blocks incoming traffic to port 443 (or detects incoming traffic to port 443 and complains to you about running a server, even if it’s actually for your personal use so you aren’t violating any clause that prohibits you from using a home connection to offer a service to the public).

One way to solve this would be standardizing a simple service where a service provider takes your credit card number and an ssh public key and gives you an IP address. The self-hosting system you run at home would then have a configuration interface that gives you an ssh public key and takes an IP address. The self-hosting box would then establish an ssh reverse tunnel to the IP address with 443 as the local target port, and the service provider would send port 443 of the IP address to this tunnel. You’d still own your data and your server and you’d terminate TLS on your server even though you’d rent an IP address from a data center.

(There are efforts to solve this by giving the user-hosted devices host names under the domain of a service that handles the naming, such as OPI giving each user a hostname under the op-i.me domain, but then the naming service—not the user—is presumptively the one eligible to get the necessary certificates signed, and delegating away the control of the crypto defeats an important aspect of self-hosting. As a side note, one of the reasons I migrated from hsivonen.iki.fi to hsivonen.fi was that even though I was able to get the board of IKI to submit iki.fi to the Public Suffix List, CAs still seem to think that IKI, not me, is the party eligible for getting certificates signed for hsivonen.iki.fi.)

But even if you solved IPv4-level reachability of the home server from the public Internet as a turn-key service, there are still more hurdles on the way of making this easy. Next, instead of the user having to use an IP address, the user should be able to use a memorable name. So you need to tell the user to go register a domain name, get DNS hosting and point an A record to the IP address. And then you need a certificate for the name you chose for the A record, which at the moment (before Let’s Encrypt is operational) is another thing that makes things too hard.

And that brings us back to Sandstorm.io obscuring the URLs. Rather paradoxically, even though Sandstorm.io is really serious about isolating apps from each other on the server, Sandstorm.io gives up the browser-side isolation of the apps that you’d get with a typical deployment of the upstream apps. The only true way to have browser-enforced privilege separation of the client-side JavaScript parts of the apps is for different apps to have different Origins. An Origin is a triple of URL scheme, host name and port. For the apps not to be ridiculously insecure, the scheme has to be https. This means that you either have to give each app a distinct port number or a distinct host name. On the surface, it seems that it would be easy to mint port numbers, but users are not used to typing URLs with non-default port numbers, and if you depend on port forwarding in a NATting home router or port forwarding through an ssh reverse tunnel, minting port numbers on demand isn’t that convenient anymore.

So you really want a distinct host name for each app to have a distinct Origin for browser-enforced privilege separation of JavaScript on the client. But the idea was that you could install new apps easily. This means that you have to be able to generate a new working host name at the time of app installation. So unless you have a programmatic way to configure DNS on the fly and have certificates minted on the fly, neither of which you can currently realistically have for a home server, you need a wildcard in the DNS zone and you need a wildcard TLS certificate. Sandstorm.io instead uses one hostname and obscure URLs, which is understandable. Despite being understandable, it is sad, since it loses both the human-facing semantics of the URLs and browser-enforced privilege separation between the apps. (Instead of https://etherpad.example.org/example-document-title and https://ethercalc.example.org/example-spreadsheet-title, you get https://example.org/grain/FcTdrgjttPbhAzzKSv6ESD and https://example.org/grain/o96ouPLKQMEMZkFxNKf2Dr.) Fortunately, Let’s Encrypt seems to be on track to solving the certificate side of this problem by making it easy to get a cert for a newly-minted hostname signed automatically. Even so, the DNS part needs to be made easy enough that it doesn’t remain a blocker for self-hosting a box that allows on-demand Web app installation with browser-side app privilege separation.

Conclusion

There are lots of subproblems to work on, but, fortunately, things don’t seem fundamentally impossible. Interestingly, the problem with software that resides on the phone may be the relatively easy part to solve. That is not to say that it is easy to solve, but once solved, it can scale to a lot of users without the users having to do special things to get started in the role of a user who does not exercise the freedom to modify the system. However, since users these days are not satisfied by merely device-resident software but want things to work across multiple devices, the server-side part is relevant and harder to scale. Somewhat paradoxically, the hardest thing to scale in a usable way seems like a triviality on the surface: the addressing of the server-side part in a way that gives sovereignty to users.

Categories: Mozilla-nl planet

Pascal Finette: Weekend Link Pack

Sat, 17/01/2015 - 15:31
Categories: Mozilla-nl planet

Marco Zehe: Blog change: Now using encrypted connections

Sat, 17/01/2015 - 10:11

This is just a quick note to let you all know that this blog has switched over to using encrypted connections. The URLs (web site addresses) now start with https instead of http. Links to posts you may have bookmarked will be automatically redirected to their encrypted counterparts, too, so you don’t need to do anything, and permalinks will still work.

For you, this means two main things:

First, you can check in your browser’s address bar that this is indeed my blog you’re on, and not some fraud site which may have copied my content.

Second, when you comment, the data you send to my blog is now encrypted in transport, so your e-mail address, which you may not want everybody to see, is no longer readable by anyone sitting on the sidelines of the internet.

This is my contribution to making encrypted communication over the internet the norm rather than the exception. The more people do it, the less likely it is that one becomes a suspect for some security agencies just because one uses encryption.

Please let me know should you run into any problems!

Categories: Mozilla-nl planet

Hannah Kane: Teach.webmaker.org: Initial Card Sorting Results

Sat, 17/01/2015 - 00:56

This past week I conducted a small user research project to help inform the IA (information architecture) of the new teach.webmaker.org site.

I chose a card sorting activity, which is a common research method for IA projects. In a card sorting activity, you give members of your target audience a stack of cards, each of which has one of the site content areas printed on it. You ask the participants to group items together and explain their thought process. In this way, you gain an understanding of the participants’ mental models. This is helpful for avoiding a common pitfall in site design, which is organizing content in a way that makes sense to you but not to your users.

Big Giant Caveat

This study was flawed in a couple of ways. First, Jakob Nielsen (who is generally considered to be a real smartypants when it comes to usability and user research) recommends that you do card sorting with 15 users. I’ve only been able to get 11 to do the activity so far, though I think a few more are pending.

Another flaw is that I deviated from a common best practice of running these activities in person. A lot of the insights are gained by listening to the person think aloud. There are some tools for running an online card sorting activity, but they’re largely for what’s called “closed” card sorts, where you pre-determine the categories and the person’s task is to sort cards within those categories. Since one of my goals with this activity was to generate a better understanding of what terminology to use, I wanted to do an “open” sort, where the participants name their groupings themselves.

All that’s to say that we shouldn’t take these results or my analysis as gospel. I do think the participant responses will be useful as we move forward with designing some wireframes to user test in the next heartbeat.

Participant Demographics and Background Information

There were a range of ages and locations represented in the study.

Four participants are between 18 and 24 years old, three are between 25 and 34, two between 35 and 44, one between 45 and 54, and one between 55 and 64.

Four participants are from the United States, three from India, and one each from Colombia, Bangladesh, Canada, and the United Kingdom.

Participants were asked to rate their level of familiarity with the Webmaker Mentors program on a scale of 1 to 5, with 5 being the most familiar. Again, there was a range. Four participants rated themselves a 5, two a 4 or 4.5, two a 3, one a 2, and two a 1.

Initial Findings

The participants in the study had a range of different mental models they used to organize the content. Those models were:

  1. Grouping by program offering—that is, organizing by specific programs, concepts, or offerings, typically expressed as nouns (e.g. Web Literacy, Teaching Kits, Webmaker Clubs, Trainings, Activities, Resources, Social, Learning, Philosophy, Mentoring, Research, Events, Supportive Team). Five participants used a model like this as their primary model. The average familiarity level with Webmaker Mentoring for these participants matches the average for the entire sample (3.7 on a five-point scale).
  2. Grouping by functional area—that is, actions that a user might take, typically expressed as verbs (e.g. participate, learn, market/promote, meet others, do, lead, get involved, collaborate, organize, develop yourself, teach, experiment, host, attend). Four participants used a model like this as their primary model. Notably, all of the participants are from the United States, Canada, or the United Kingdom, and their average familiarity with Webmaker Mentoring is below the average of the entire sample (2.75 as compared to 3.67).
  3. Grouping by role or identity—some study participants organized the content by the type of user who would be interested in it (e.g. Learner, Mentor). One participant used this as their primary model. Another made a distinction between Learning and Teaching, but it was framed more like the functional areas described above. One more used “Learning Geeks” as a topic area.
  4. Level of expertise—in this model, there is a pathway through the content based on level of expertise (e.g. intermediate, advanced). One participant used this as their primary model.

Other patterns, themes, and notable terminology:

  • Seven participants grouped together content related to hosting or attending events, and three participants made references to face-to-face communication. Of the seven who grouped content into the “Events” topic, five of them included the one item that referenced “Maker Party” (including two participants who rated their level of familiarity with the program at a 1), indicating a strong understanding of “Maker Party” as a type of event.
  • Five participants made references to the broader community. Three of them are from the United States, one from Canada, and one from India. (The specific terminology used were “Meet others,” “Social,” “Webmaker Community,” “Collaborate,” and “Supportive team”).
  • Four participants used the word “Webmaker” in their groupings, which gives us some insight into how they understand the brand. In each case, participants seem to connect the term to either teaching and teaching kits, or to the community of interested people.
  • Three participants used the term “Leading.”
  • One participant referenced a particular context (“Webmaker for Schools”).
  • One participant distinguished Mozilla-produced content (as “Mozilla Outputs”).
  • We included the term “Peer Learning Networks” in the content list to represent Hives (we assumed the meaning of “Hive” would be difficult to intuit for those unfamiliar). While we can’t draw any conclusions based on this data, it’s notable that this term was grouped into a wide variety of topics, including community (“Meet others,” “Social,” and “Collaborate”), “Get Involved,” “Intermediate,” “Mozilla Outputs,” and “Learning Geeks.” Three participants felt it didn’t fit under any category.
  • We tested both “Professional Development” and “Trainings” to see if we could understand how people interpret those terms. The results are fairly ambiguous. Both terms were associated with “Activities for teachers & mentors”, “Leading,” “Get Involved,” and “Research (things you learn on your own).” “Professional Development” was also associated with “Learning,” “Develop Yourself,” and “Learning Geeks”. “Trainings” was associated with “Intermediate,” “Mentoring,” “Organize in person events,” and “Supportive team.” Three participants could not categorize either term.

Let me know if you’re interested in seeing the raw data.


Categories: Mozilla-nl planet

Air Mozilla: Webdev Beer and Tell: January 2015

Fri, 16/01/2015 - 23:00

Web developers across the Mozilla community get together (in person and virtually) to share what side projects or cool stuff we've been working on.

Categories: Mozilla-nl planet

Dave Townsend: Welcome the new Toolkit peers

Fri, 16/01/2015 - 20:23

I have been a little lax in my duty of keeping the list of peers for Toolkit up to date, and there have been a few notable exceptions. Thankfully we’re good about disregarding rules when it makes sense to, and so many people who should be peers have already been doing reviews. Of course that’s no use to new contributors trying to figure out who should review what, so I am grateful to the person who prodded me into updating the list.

As I was doing so I came to the conclusion that there is a lot of overlap between Firefox code and Toolkit code. Lots of patches touch both at the same time and it often doesn’t make a lot of sense to require different reviewers there. I also couldn’t think of a reason why someone would be a trusted reviewer of Firefox code and yet not be trusted to review Toolkit code. Looking at the differences in the lists of peers confirmed that all those missing really should be Toolkit peers too.

So going forwards I have decided that Firefox peers will automatically be considered to be Toolkit peers. That means I can announce a whole bunch of new people who are now Toolkit peers, please congratulate them in the usual way, by flooding their review queue:

  • Ehsan Akhgari
  • Mike de Boer
  • Mike Conley
  • Georg Fritzsche
  • Mark Hammond
  • Felipe Gomes
  • Gijs Kruitbosch
  • Florian Quèze
  • Tim Taubert

You might ask if the reverse should hold true: should all Toolkit peers be Firefox peers, i.e. should we just merge the lists? I leave that to the Firefox owner to decide, but I will say that there are a few pieces of Toolkit that are very much not front-end, and so in some cases I could see a reviewer for that area not needing to be listed in the Firefox list; not because they wouldn’t be trusted to turn down the patches they couldn’t review, but just because there would be almost no patches in their area in Firefox. Maybe that case is so rare that it isn’t worth the hassle of two lists though.

Categories: Mozilla-nl planet

Air Mozilla: Webmaker Demos January 16 2015

Fri, 16/01/2015 - 19:00

Webmaker Demos January 16 2015

Categories: Mozilla-nl planet

Giorgio Maone: Both Your Cheeks

Fri, 16/01/2015 - 18:53

Pope Punch

Dear pope Francis,

Thank you for this chance to punch your face (both cheeks, the way you Christians please) because of the way your organization routinely defames and insults His Majesty Satan.

Sincerely,
Your friendly neighbourhood satanist

P.S.: a very good article about this from The Guardian.

P.P.S.: Yes, I think free thinking, free speech and censorship are very relevant to the Open Web.

Categories: Mozilla-nl planet

Roberto A. Vitillo: Next-gen Data Analysis Framework for Telemetry

Fri, 16/01/2015 - 16:40

The easier it is to get answers, the more questions will be asked

In that spirit, Mark Reid and I have been working for a while now on a new analysis infrastructure to make it as easy as possible for engineers to get answers to data-related questions.

Our shiny new analysis infrastructure is based primarily on IPython and Spark. I blogged about Spark before, and I even gave a short tutorial on it at our last workweek in Portland (slides and tutorial); IPython might be something you are not familiar with unless you have a background in science. In a nutshell it’s a browser-based notebook with support for code, text, mathematical expressions, inline plots and other rich media.

An IPython notebook in all its glory

The combination of IPython and Spark lets you write data analyses interactively from a browser and seamlessly parallelize them over multiple machines, thanks to a rich API with over 80 distributed operators! It’s a huge leap forward in terms of productivity compared to traditional batch-oriented map-reduce frameworks. An IPython notebook contains both the code and the product of the execution of that code, like plots. Once executed, a notebook can simply be serialized and uploaded to Github. Then, thanks to nbviewer, it can be visualized and shared among colleagues.

In fact, the issue with sharing just the end product of an analysis is that it’s all too easy for bugs to creep in or to make wrong assumptions. If your end result is a plot, how do you test it? How do you know that what you are looking at does actually reflect the truth? Having the code side by side with its evaluation allows more people to inspect it and streamlines the review process.

Here comes the fun part. This is what you need to do to start your IPython-backed Spark cluster with access to Telemetry data:

  1. Visit the analysis provisioning dashboard at telemetry-dash.mozilla.org and sign in using Persona with an @mozilla.com email address.
  2. Click “Launch an ad-hoc Spark cluster”.
  3. Enter some details:
    • The “Cluster Name” field should be a short descriptive name, like “chromehangs analysis”.
    • Set the number of workers for the cluster. Please keep in mind to use resources sparingly; use a single worker to write and debug your job.
    • Upload your SSH public key
  4. Click “Submit”.
  5. A cluster will be launched on AWS preconfigured with Spark, IPython and some handy data analysis libraries like pandas and matplotlib.

Once the cluster is ready, you can tunnel IPython through SSH by following the instructions on the dashboard, e.g.:

ssh -i my-private-key -L 8888:localhost:8888 hadoop@ec2-54-70-129-221.us-west-2.compute.amazonaws.com

Finally, you can launch IPython in Firefox by visiting http://localhost:8888.

Cluster monitoring dashboard

Now what? Glad you asked. In your notebook listing you will see a Hello World notebook. It’s a very simple analysis that produces the distribution of startup times faceted by operating system for a small fraction of Telemetry submissions; let’s quickly review it here.

We start by importing a telemetry utility to fetch pings and some commonly needed libraries for analysis: a json parser, pandas and matplotlib.

import ujson as json
import matplotlib.pyplot as plt
import pandas as pd
from numpy import log  # assumed import; log() is used in the plotting cell below
from moztelemetry.spark import get_pings

To execute a block of code in IPython, aka a cell, press Shift-Enter. While a cell is being executed, a gray circle will appear in the upper right border of the notebook. When the circle is full, your code is being executed by the IPython kernel; when only the border of the circle is visible, the kernel is idle and waiting for commands.

Spark exploits parallelism across all the cores of your cluster. To see the degree of parallelism you have at your disposal, simply evaluate:

sc.defaultParallelism
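For instance, a quick sketch using only standard PySpark calls shows that a trivial computation really is split across that many partitions:

# Distribute a trivial computation over the cluster's default parallelism.
rdd = sc.parallelize(range(1000), sc.defaultParallelism)
rdd.getNumPartitions()  # equals sc.defaultParallelism
rdd.sum()               # 499500, computed in parallel across the workers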

Now, let’s fetch a set of telemetry submissions and load it in a RDD using the get_pings utility function from the moztelemetry library:

pings = get_pings(sc, appName="Firefox", channel="nightly", version="*", buildid="*", submission_date="20141208", fraction=0.1)

That’s pretty much self documenting. The fraction parameter, which defaults to 1, selects a random subset of the selected submissions. This comes in handy when you first write your analysis and don’t need to load lots of data to test and debug it.

Note that both the buildid and submission_date parameters also accept a tuple specifying an inclusive range of dates, e.g.:

pings = get_pings(sc, appName="Firefox", channel="nightly", version="*", buildid=("20141201", "20141202"), submission_date=("20141202", "20141208"))

Let’s do something with those pings. Since we are interested in the distribution of the startup time of Firefox faceted by operating system, let’s extract the needed fields from our submissions:

def extract(ping):
    ping = json.loads(ping)
    os = ping["info"]["OS"]
    # firstPaint is missing from some pings, so default to -1 and filter those out below
    startup = ping["simpleMeasurements"].get("firstPaint", -1)
    return (os, startup)

cached = pings.map(lambda ping: extract(ping)).filter(lambda p: p[1] > 0).cache()
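Before going further, it can be worth sanity-checking the cached RDD with a few cheap actions; this is just a sketch, with cached as defined above:

cached.count()       # number of pings with a valid firstPaint measurement
cached.take(3)       # peek at a few (os, startup) pairs
cached.countByKey()  # number of submissions per operating system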

As the Python API closely matches the Scala one, I suggest having a look at my older Spark tutorial if you are not familiar with Spark. Another good resource is the set of hands-on exercises from AMP Camp 4.

Now, let’s collect the results back and stuff it into a pandas DataFrame. This is a very common pattern, once you reduce your dataset to a manageable size with Spark you collect it back on your driver (aka the master machine) and finalize your analysis with statistical tests, plots and whatnot.

from numpy import log  # log is not imported above; numpy's log works elementwise on a Series

grouped = cached.groupByKey().collectAsMap()
frame = pd.DataFrame({x: log(pd.Series(list(y))) for x, y in grouped.items()})
frame.boxplot()
plt.ylabel("log(firstPaint)")
plt.show()

Startup distribution by OS

Finally, you can save the notebook, upload it to Github or Bugzilla and visualize it on nbviewer; it's that simple. Here is the nbviewer-powered Hello World notebook. I warmly suggest that you open a bug report on Bugzilla for your custom Telemetry analysis and ask me or Vladan Djeric to review it. Mozilla has been doing code reviews for years and with good reason; why should data analyses be different?

Congrats, you just completed your first Spark analysis with IPython! If you need any help with your custom job, feel free to drop me a line in #telemetry.


Categorieën: Mozilla-nl planet

Mark Finkle: Firefox for Android: What’s New in v35

vr, 16/01/2015 - 16:28

The latest release of Firefox for Android is filled with new features, designed to work with the way you use your mobile device.

Search

Search is the most common reason people use a browser on mobile devices. To make searching with Firefox easier, we created the standalone Search application. We have put the features of Firefox's search system into an activity that can be accessed more easily. You no longer need to launch the full browser to start a search.

When you want to start a search, use the new Firefox Widget from the Android home screen, or use the “swipe up” gesture on the Android home button, which is available on devices with software home buttons.


Once started, just start typing your search. You’ll see your search history, and get search suggestions as you type.


The search results are displayed in the same activity, but tapping on any of the results will load the page in Firefox.


Your search history is shared between the Search and Firefox applications. You have access to the same search engines as in Firefox itself. Switching search engines is easy.

Sharing

Another cool feature is the Sharing overlay. This feature grew out of the desire to make Firefox work with the way you use mobile devices. Instead of forcing you to switch away from applications when sharing, Firefox gives you a simple overlay with some sharing actions, without leaving the current application.


You can add the link to your bookmarks or reading list. You can also send the link to a different device, via Firefox Sync. Once the action is complete, you’re back in the application. If you want to open the link, you can tap the Firefox logo to open the link in Firefox itself.

Synced Tabs

Firefox Sync makes it easy to access your Firefox data across your different devices, including getting to the browser tabs you have open elsewhere. We have a new Synced Tabs panel available in the Home page that lets you easily access open tabs on other devices, making it simple to pick up where you left off.

Long-tap an item to easily add a bookmark or share to another application. You can expand/collapse the device lists to manage the view. You can even long-tap a device and hide it so you won’t see it again!


Improved Error Pages

No one is happy when an error page appears, but in the latest version of Firefox the error pages try to be a bit more helpful. The page will look for WiFi problems and also allow you to quickly search for a problematic address.


Categorieën: Mozilla-nl planet

Mozilla Release Management Team: Firefox 36 in beta

vr, 16/01/2015 - 16:09

Firefox 36 (Desktop and Mobile) is now available on the beta channel.

The release notes are published on the Mozilla website:

This version introduces many new HTML5/CSS features, in particular the Media Source Extensions (MSE) API, which allows native HTML5 playback on YouTube. The new preferences implementation is also enabled for the first half of the beta cycle; please help us test this new feature!

On the mobile version of Firefox, we are also shipping the new Tablet user interface!

Download this new version:

And as usual, please report any issues.

Categorieën: Mozilla-nl planet

Jess Klein: EYE Witness News: Promotional Content on Webmaker

vr, 16/01/2015 - 15:32
On January 28th, Mozilla will be celebrating Data Privacy Day. This is an international effort centered on "Respecting Privacy, Safeguarding Data and Enabling Trust." There will be content on Mozilla, Webmaker and Mozilla Advocacy. The Webmaker team had previously developed privacy content with the Private Eye activity (featuring the Lightbeam add-on), so the primary challenge here was how to promote that content via the Webmaker splash page. This is actually a two-fold design opportunity:

1. micro: how might we promote the unique Privacy Day content on the splash page for the 28th?

2. macro: how might we incorporate promotional interest-based content into the real estate on the Webmaker splash page on an ongoing basis?

Constraints: needs to be conceived, designed and implemented within 2 weeks.
Start from the beginning 

I took a look at the current splash page. The content that we are promoting is directly connected to the Mozilla mission, so I identified a sliver of space directly above the section where we state the project's values. My thinking here is that we are creating a three-tier hierarchy of values on the page: 1) we are webmaker - we are all about making - and this is what you can do right this second to get started, 2) we are deeply concerned about [privacy] - and this is what you can do right now to dive into that topic and 3) we are more than just making + [privacy] - here are all the things that we value.

I SEE what you did there

That sliver was great, but it was below the non-existent but deeply considered fold of the page. If this were a painting, I would create a repoussoir element to bring the user's attention to the core content by framing the edge. In the painting below you can see the tree branch that directs your attention directly into the heart of the composition.


Jacob Isaaksz. van Ruisdael, The Jewish Cemetery (1655-60)
Building off of my thinking from designing the Mozilla snippet and the onboarding UX, I wanted to make this repoussoir element something that a user might find quirky, whimsical or relatable. All of the other elements on the page were expected, fairly standard elements for a webpage. I needed to create something that would be subtle yet attention-grabbing. Looking at the subject of privacy, I immediately had associations with corporations and individuals big-brothering me as I visited web pages. I realized that the activity we were directing users to was called Private Eye, and this led me to create a small asset that features an eyeball that follows your cursor around as you explore the splash page. On hover it will flip and direct you to the activity. This worked for desktop, but for mobile we would have to simulate the action by having a simple CSS eyeball animation center-aligned on the sliver. Major props here go out to Aki, who had to invoke the Pythagorean theorem to get the eye to follow the cursor without leaving the sclera.



I did a study of eyeballs on redpen and immediately got a ton of community and staff feedback, which told me two things: 1. it was a conversation topic and 2. people liked the very first eyeball that I drew.

My javascript now has a variable called ‘pythagoras’. HAHAHA TAKE THAT, MATH.
— Social Justice Mage (@gesa) January 14, 2015
Let me give you a walk through


    From Mozilla's perspective, we are testing:
    • whimsy vs. traditional promotional placement 
    • mission driven content 
    • how many people we can get to engage with Webmaker and sign up for new accounts

    What's Next Up:
    • This will be deployed on staging on Monday and then our goal is to go live on January 28th, which is Privacy Day!
    • Now that we have a promotional framework, figuring out how to incorporate a richer learning experience around mission-based content.
    • Users can opt into enrolling in a sustained challenge-based Webmaker activity. Almost as if it's a virtual Webmaker club.


      Shout outs to the team that made this possible: Aki, Andrew, Erika, Paul Johnson, Dave


    Categorieën: Mozilla-nl planet

    Matjaž Horvat: Pontoon report 2014: Get involved

    vr, 16/01/2015 - 15:12

    This is the last in a series of blog posts outlining Pontoon development in 2014. I’ll mostly focus on new features targeting translators. If you’re more interested in developer oriented updates, please have a look at the release notes.

    Part 1. User interface
    Part 2. Backend
    Part 3. Meet our top contributors
    Part 4. Make your translations better
    Part 5. Get involved (you are here)

    In the past years, Pontoon has come a long way: from an idea, to a prototype, to a working product. As of today, there are a dozen Mozilla projects available for localization in Pontoon. If you want to move it even further, there are plenty of ways to do so.

    For localizers
    Start learning how things work by looking at the new Pontoon homepage, which is also used as a demo project to be translated using Pontoon. Perhaps you can translate it into your mother tongue. You can also learn more advanced features.

    For developers
    Making your website or web application localizable with Pontoon is quick and easy. A simple script needs to be added and you are halfway there. Follow the implementation instructions for more details.

    Take action
    Do you have ideas for improvement? Are you a developer? Learn how to get your hands dirty. It has never been easier to set up a development environment and start contributing. We're on GitHub.

    Categorieën: Mozilla-nl planet

    Mozilla Reps Community: Reps Weekly Call – January 15th 2015

    vr, 16/01/2015 - 13:30

    Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.

    andrefox

    Summary
    • Data Privacy Day.
    • Hello Campaign.
    • Womoz Update.
    • Event metrics challenges update.
    • Mozlandia videos.
    • How can we make reports easier?
    • Reps and schools.

    Detailed notes

    AirMozilla video

    Don’t forget to comment about this call on Discourse and we hope to see you next week!

    Categorieën: Mozilla-nl planet

    Christian Heilmann: You’re a spokesperson, why do you talk about things breaking?

    vr, 16/01/2015 - 12:13

    Every once in a while you will find someone saying something “bad” about a product of the company they work for. This could be employees or – god forbid – even official spokespeople.

    silence, please

    It happens to me, too, for example when my browser crashes on me. The inevitable direct response is, most of the time, a tweet in the style of:

    Should a spokesperson of $company talk badly about it? Think about the followers you have and what that means for the people who worked on the product!

    It is a knee-jerk reaction making a lot of assumptions:

    • that the person is not rooting for the team,
    • that the person is abusing his or her reach,
    • that the intentions are to harm with this,
    • that criticising a product means criticising the company and
    • that the person has no respect for his or her colleagues.

    Or, that they are bad at their job and cause a lot of damage without meaning to and should be chastised by some other person on Twitter.

    All these could be valid points, had the person mentioned something in a terrible way or without context. It is – of course – bad style and unprofessional for any employee to speak ill of their employer or its products publicly.

    However, things go wrong and things break, and whether you are a professional spokesperson or not, it is simply honest to mention that. It also raises the question of what is better: helping the team that built a product fix an obvious issue by owning the fixing process, or waiting until someone else finds it? The latter means you'll have a much shorter time to fix it.

    It is ironic that an audience who hates sales pitches and advertising complains when an official advocate of something points out a flaw.

    It all comes down to how you mention an issue. You can do a lot of good by mentioning an issue, or you can cause a lot of problems.

    How to report a failure and make it useful

    Things you do by mentioning a fault:

    • You admit that things go wrong for you, too. That makes you a user of your products, not a salesperson (or shill, really)
    • You mention the fault before somebody else does. This puts you in the driver’s seat. Instead of reacting to criticism, you advertise that you are aware of the issue and that you are looking into it. It is better when you find a flaw than when the competition does.
    • You show that you are a user of the product. There is nothing worse than a spokesperson who only listens to what the marketing team talks about or who starts believing exclusively in their own “feel good” messages about a product. You need to use the product to be able to talk about it. And this means that you inevitably will find problems.
    • You stay approachable and honest. Things go wrong for all of us – you are no exception.

    Of course, just complaining is bad form. To make your criticism something useful, you should do more:

    • Be detailed about your environment. Did you use a developer edition of your product? What’s your setup? When did the thing go wrong?
    • Stick to one thing that goes wrong. “Browser $x is unstable” is a bad message, “$x just crashed on me when trying to play this video/game” is an OK one.
    • You should report the problem internally. In the best case, this should happen before you mention it. You can then follow up your public criticism with a report on how the issue is being dealt with. This step is crucial, and in many cases you will already find a reason why something is broken. You can then mention the issue and the solution at the same time. This is powerful – people like solutions.
    • Investigate what happened. Other people might run into the same issue and there is nothing more powerful than a post somewhere on how to fix an issue. Don’t let the thing just lie and be broken. And don’t let people come up with quick fixes or workarounds that might prove to be harmful in the long run.
    • Deal with the feedback. People fixing the issue shouldn’t have this as an extra burden. This is where your job as a spokesperson comes in: deal with feedback in a grown-up fashion and keep people updated when things get fixed or more information is unearthed why something happens.

    It is very tempting to just vent when something goes wrong. This is not good. Count to ten and consider the steps above first. I am not saying that you shouldn’t report things that annoy you. On the contrary, it is part of your job to do that as it shows that you care about the product. It makes a lot of sense though to turn your gripes into actions.

    When not to mention an issue

    There are times though when you should not mention an issue. Not many, but there are. It mostly boils down to who will suffer by you mentioning the problem.

    • Don’t punish your users. It is a bad idea to publicly talk about a security flaw that would endanger your users. That needs immediate fixing and any public disclosure just makes it harder to fix the problem. It also is a feast for the tech press. People love a security drama and you and your press people will have to deal with a lot of half-truths and hyperbole by the press. You don’t want a bug tarnish the trust in your company as a whole, and this is what happens with premature security issue reports and the inevitable spin the press is wont to give it.
    • Don’t report without knowing who can fix the issue. Investigate who is responsible and give them a heads up. Failing this will cause massive bad blood in the company and you don’t want to have to deal with public feedback and internal grumblings and mistrust at the same time. A scorned developer is not one that will do things for you or help fixing the issue. They are much more likely to join the public conversation and strongly disagree with you and other critics. Be the person who helps fixing an issue by showing your colleagues in a light that they deal with problems swiftly and professionally. Don’t throw blame into the unknown.
    • Don’t report your own faults as problems. You might have a setup that is very unique and causes issues. Make sure you can reproduce the issue in several environments and not just one setting in a certain environment. Make sure you used the product correctly. If you didn’t, write about how you used it wrongly to avoid other false reports of bugs.

    Be aware about the effects you have

    Reporting bad things happening without causing internal and external issues requires good communication skills. The most important part is keeping everyone involved in the loop and being very open about the fixing process. If you can't be sure that things will get fixed, it might not be worth your while to report them publicly; it would become a kind of blackmail or blame game you cannot turn into something useful. Instead, be prepared to respond when others find the problem – as inevitably they will.

    Stay honest and open and there is no problem with reporting flaws.

    Photo Credit: martins.nunomiguel via Compfight cc

    Categorieën: Mozilla-nl planet

    Gregory Szorc: Bugzilla and the Future of Firefox Development

    vr, 16/01/2015 - 11:50

    Bugzilla has played a major role in the Firefox development process for over 15 years. With upcoming changes to how code changes to Firefox are submitted and reviewed, I think it is time to revisit the central role of Bugzilla and bugs in the Firefox development process. I know this is a contentious thing to say. Please, gather your breath, and calmly read on as I explain why I believe this.

    The current Firefox change process defaults to requiring a Bugzilla bug for everything. It is rare (and from my experience frowned upon) when a commit to Firefox doesn't reference a bug number. We've essentially made Bugzilla and a bug prerequisites for changing anything in the Firefox version control repository. For the remainder of this post, I'm going to say that we require a bug for any change, even though that statement isn't technically accurate. Also, when I say Bugzilla, I mean bugzilla.mozilla.org, not the generic project.

    Before I go on, let's boil the Firefox change process down to basics.

    At the heart of any change to the Firefox source repository is a diff. The diff (a representation of the differences between a set of files) is the smallest piece of data necessary to represent a change to the Firefox code. I argue that anything more than the vanilla diff is overhead and could contribute to process debt. Now, there is some essential overhead. Version control tools supplement diffs with metadata, such as the author, commit message, and date. Mozilla has also instituted a near-mandatory code review policy, where changes need to be signed off by a set of trusted individuals. I view both of these additions to the vanilla diff as essential for Firefox development and non-negotiable. Therefore, the bare minimum requirements for changing Firefox code are a diff plus metadata (a commit/patch) and (almost always) a review/sign-off. That's it. Notably absent from this list is a Bugzilla bug. I argue that a bug is not strictly required to change Firefox. Instead, we've instituted a near-universal policy that we should have bugs. We've chosen to add overhead and process debt - interaction with Bugzilla - to our Firefox change process.

    Now, this choice to require all changes be associated with bugs has its merits. Bugs provide excellent anchor points for historical context and for additional information after the change has been committed and is forever set in stone in the repository (commits are immutable in Mercurial and Git and you can't easily attach metadata to the commit after the fact). Bugs are great to track relationships between different problems or units of work. Bugs can even be used to track progress towards a large feature. Bugzilla components also provide a decent mechanism to follow related activity. There's also a lot of tooling and familiar process standing on top of the Bugzilla platform. There's a lot to love here and I don't want to diminish the importance of all these things.

    When I look to the future, I see a world where the current, central role of Bugzilla and bugs as part of the Firefox change process begin to wane. I see a world where the benefits to maintaining our current Bugzilla-centric workflow start to erode and the cost of maintaining it becomes higher and harder to justify. You actually don't have to look too far into the future: that world is already here and I've already started to feel the pains of it.

    A few days ago, I blogged about GitHub and its code first approach to change. That post was spun off from an early draft of this post (as were the posts about Firefox contribution debt and utilizing GitHub for Firefox development). I wanted to introduce the concept of code first because it is central to my justification for changing how we do things. In summary, code first capitalizes on the fact that any change to software involves code and therefore puts code front and center in the change process. (In hindsight, I probably should have used the term code centric, because that's how I want people to think about things.) So how does code first relate to Bugzilla and Firefox development?

    Historically, code review has occurred in Bugzilla: upload a patch to Bugzilla, ask for review, and someone will look at it. And, since practically every change to Firefox requires review, you need a bug in Bugzilla to contain that review. Thus, one way to view a bug is as a vehicle for code review. Not every bug is just a code review, of course. But a good number of them are.

    The only constant is change. And the way Mozilla conducts code review for changes to Firefox (and other projects) is changing. We now have MozReview, a code review tool that is not Bugzilla. If we start accepting GitHub pull requests, we may perform reviews exclusively on GitHub, another tool that is not Bugzilla.

    (Before I go on, I want to quickly point out that MozReview is nowhere close to its final form. Parts of MozReview are pretty bad right now. The maintainers all know this and we have plans to fix it. We'll be in Toronto all of next week working on it. If you don't think you'll ever use it because parts are bad today, I ask you to withhold judgement for a few more months.)

    In case you were wondering, the question of whether Bugzilla should always be used for code review for Firefox has been answered, and that answer is no. People, including maintainers of Bugzilla, realized that better-than-Splinter/Bugzilla code review tools exist and that the time spent developing Bugzilla/Splinter into a best-in-class code review tool would be better spent integrating Bugzilla with an existing tool. This is why we now have a Review Board based code review tool - MozReview - integrated with Bugzilla. If you care about code quality and more powerful workflows, you should be rejoicing at this because the implementation of code review in Bugzilla does not maximize optimal outcomes.

    The world we're moving to is one where code review occurs outside of Bugzilla. This raises an important question: if Bugzilla was being used primarily as a vehicle for code review, what benefit and/or role should Bugzilla play when code review is conducted outside of Bugzilla?

    I posit that there are a class of bugs that won't need to exist going forward because bugs will provide little to no value. Put another way, I believe that a growing number of commits to the Firefox repository won't reference bugs.

    Come with me on a journey to the future.

    MozReview is purposefully being designed in a code and repository centric way. To initiate the formal process for considering a change to code, you push to a Mercurial (or Git!) repository. This could be directly to Mozilla's review repository. If I have my way, this could even be kicked off by submitting a pull request on GitHub or Bitbucket. No Bugzilla attachment uploading here: our systems talk in terms of repositories and commits. Again, this is by design: we don't want submitting code to Mozilla to be any harder than hg push or git push so as to not introduce process debt. If you have code, you'll be able to send it to us.

    In the near future, MozReview will stop cross-posting detailed review updates to Bugzilla. Instead, we'll use Review Board's e-mail feature to send its flavor of emails. These will have rich HTML content (or plain text if you insist) and will provide a better experience than Bugzilla ever will. We'll adopt the model of tools like Phabricator and GitHub and only post summaries or links of activity, not full content, to bugs. You may be familiar with the concept as applied to the web: it's called hyperlinking.

    Work is being invested into Autoland. Autoland is an automated landing queue that pushes/lands commits semi-automatically once they are ready (have review, pass automation, etc). Think of Autoland as a bot that does all the labor intensive and menial actions around pushing that you do now. I believe Autoland will eventually handle near 100% of pushes to the Firefox repository. And, if I have my way, Autoland will result in the abolishment of integration branches and merge commits in the Firefox repository. Good riddance.

    MozReview and Autoland will be highly integrated. MozReview will be the primary user interface for interacting with Autoland. (Some of this should be in place by the end of the quarter.)

    In this world, MozReview and its underlying version control repositories essentially become a database of all submitted, pending, and discarded commits to Firefox. The metaphorical primary keys of this database are not bug numbers: they are code/commits. (Code first!) Some of the flags stored in this database tell Autoland what it should do. And the MozReview user interface (and API) provide a mechanism into controlling those flags.

    Landing a change in Firefox will be initiated by a simple action such as clicking a checkbox in MozReview. (That could even be the Grant Review checkbox.) Commits cleared for landing will be picked up by Autoland and eventually automatically pushed to the Firefox repository (assuming the build and test automation is happy, of course). Once Autoland takes control, humans are just passengers. We won't be bothered with menial tasks like updating the commit message to reflect a review was performed: this will happen automatically inside MozReview or Autoland. (Although, there's a chance we may adopt some PGP-based signing to more strongly convey review for some code changes in order to facilitate stronger auditing and trust guarantees. Stay tuned.) Likewise, if a commit becomes associated with a bug, we can add that metadata to the commit before it is landed, no human involvement necessary beyond specifying the link in the MozReview web UI (or API). Autoland/MozReview will close review requests and/or bugs automatically. (Are you excited about performing less work yet?)

    When commits are added to MozReview, MozReview will read metadata from the repository they came from to automatically determine an appropriate reviewer. (We plan to leverage moz.build files for this in the case of Firefox.) This should eliminate a lot of process debt around choosing a reviewer. Similar metadata will also be used to determine what Bugzilla component a change is related to, static analysis rules to use to critique the physical structure of the change, and even automation jobs that should be executed given the set of files that changed. The use of this metadata will erode significant process debt around the change contribution workflow.

    As commits are pushed into MozReview/Autoland, the systems will be intelligent about automatically tracking dependencies and facilitating complex development workflows that people run into on a daily basis.

    If I create a commit on top of someone else's commit that hasn't been checked in yet, MozReview will detect the dependency between my changes and the parent ones. This is an advantage of being code first: by interfacing with repositories rather than patch files, you have an explicit dependency graph embedded in the repository commit DAG that can be used to aid machines in their activities.

    It will also be possible to partially land a series of commits. If I get review on the first 5 of 10 commits but things stall on commit 6, I can ask Autoland to land the already-reviewed commits so they don't get bit rotted and so you have partial progress (psychological studies show that a partial reward for work results in greater happiness through a sense of accomplishment).

    Since initiating actions in MozReview is lightweight (just hg push), itch scratching is encouraged. I don't know about you, but in the course of working on the Firefox code base, I frequently find myself wanting to make small, 15-30s changes to fix something really minor. In today's world, the overhead for these small changes is often high. I need to upload a separate patch to Bugzilla. Sometimes I even need to create a new bug to hold that patch. If that patch depends on other work I did, I need to set up bug dependencies and then worry about landing everything in the right order. All of a sudden, the overhead isn't worth it and my positive intentions go unacted on. Multiply that by hundreds of developers over many years and you can imagine the effect on software quality. With MozReview, the overhead for itch scratching like this is minor. Just make a small commit, push, and the system will sort everything out. (These small commits are where I think a bugless process really shines.)

    This future world revolves around code and commits and operations on them. While MozReview has review in its name, it's more than a review tool: it's a database and interface to code and its state.

    In this code first world, Bugzilla performs an ancillary role. Bugzilla is still there. Bugs are still there. MozReview review requests and commits link to bugs. But it is the code, not bugs, that are king. If you want to do anything with code, you interact with the code tools. And Bugzilla is not one of them.

    Another way of looking at this is that nearly everything involving code or commits becomes excised from Bugzilla. This would relegate Bugzilla to, well, an issue/bug tracker. And - ta da - that's something it excels at, since that's what it was originally designed to do! MozReview will provide an adequate platform to discuss code (a platform that Bugzilla provides today since it hosts code review). So if non-Bugzilla tools are handling everything related to code, do you really need a bug any more?

    This is the future we're trying to build with MozReview and Autoland. And this is why I think bugs and Bugzilla will play a less central role in the development process of Firefox in the future.

    Yes, there are many consequences and concerns about making this shift. You would be rational to be skeptical and doubt that this is the right thing to do. I have another post in the works that attempts to outline some common concerns and propose solutions to many of them. Before writing a long comment pointing out every way in which this will fail to work, I encourage you to wait for that post to be published. Stay tuned.

    Categorieën: Mozilla-nl planet
