Mozilla Nederland
The Dutch Mozilla community

Mozilla Reps Community: Reps Weekly Call – November 27th 2014

Mozilla planet - Fri, 28/11/2014 - 14:05

Last Thursday we had our regular weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.


  • End of the year – Metrics and Receipts.
  • Reminder: Vouch and vouched on Mozillians.
  • Community PR survey.
  • AdaCamp.

Detailed notes

AirMozilla video

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Categories: Mozilla-nl planet

Gervase Markham: Bugzilla for Humans, II

Mozilla planet - Fri, 28/11/2014 - 12:52

In 2010, johnath did a very popular video introducing people to Bugzilla, called “Bugzilla for Humans”. While age has been kind to johnath, it has been less kind to his video, which now contains several screenshots and bits of dialogue which are out of date. And, being a video featuring a single presenter, it is somewhat difficult to “patch” it.

Enter Popcorn Maker, the Mozilla Foundation’s multimedia presentation creation tool. I have written a script for a replacement presentation, voiced it up, and used Popcorn Maker to put it together. It’s branded as being in the “Understanding Mozilla” series, as a sequel to “Understanding Mozilla: Communications” which I made last year.

So, I present “Understanding Mozilla: Bugzilla”, an 8.5-minute introduction to Bugzilla as we use it here in the Mozilla project:

Because it’s a Popcorn presentation, it can be remixed. So if the instructions ever change, or Bugzilla looks different, new screenshots can be patched in or erroneous sections removed. It’s not trivial to seamlessly patch my voiceover unless you get me to do it, but it’s still much more possible than patching a video. (In fact, the current version contains a voice patch.) It can also be localized – the script is available, and someone could translate it into another language, voice it up, and then remix the presentation and adjust the transitions accordingly.

Props go to the Popcorn team for making such a great tool, and to the Developer Tools team for Responsive Design View and the Screenshot button, which make it trivial to reel off a series of screenshots of a website in a particular custom size/shape format without any need for editing.


Doug Belshaw: On the denotative nature of programming

Mozilla planet - Fri, 28/11/2014 - 11:41

This is just a quick post almost as a placeholder for further thinking. I was listening to the latest episode of Spark! on CBC Radio about Cracking the code of beauty to find the beauty of code. Vikram Chandra is a fiction author as well as a programmer and was talking about the difference between the two mediums.

It’s definitely worth a listen [MP3].

The thing that struck me was the (perhaps obvious) insight that when writing code you have to be as denotative as possible. That is to say, ambiguity is a bad thing, leading to imprecision, bugs, and hard-to-read code. That’s not the case with fiction, which relies on connotation.

This reminded me of a paper I wrote a couple of years ago with my thesis supervisor about a ‘continuum of ambiguity’. In it, we talk about the overlap between the denotative and connotative aspects of a word, term, or phrase being the space in which ambiguity occurs. For everything other than code, it would appear, this is the interesting and creative space.

I’ve recently updated the paper to merge comments from the ‘peer review’ I did with people in my network. I also tidied it up a bit and made it look a bit nicer.

Read it here: Digital literacy, digital natives, and the continuum of ambiguity

Comments? Questions? Email me:


Soledad Penades: Publishing a Firefox add-on without using

Mozilla planet - Fri, 28/11/2014 - 10:46

A couple of days ago Tom Dale published a post detailing the issues the Ember team are having with getting the Ember Inspector add-on reviewed and approved.

It left me wondering whether there was some other way to publish add-ons, on a different site. Knowing Mozilla, it would be very weird if add-ons were “hardcoded” and tied only and exclusively to a single property.

So I asked. And I got answers. The answer is: yes, you can publish your add-on anywhere, and yes your add-on can get the benefit of automatic updates too. There are a couple of things you need to do, but it is entirely feasible.

First, you need to host your add-on using HTTPS or “all sorts of problems will happen”.

Second: the manifest inside the add-on must have a field pointing to an update file. This field is called updateURL, and here’s an example from the Firefox OS Simulator’s own source code. Snippet for posterity:


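To illustrate, here is a minimal install.rdf sketch showing where such a field lives; the add-on ID, version and URL below are placeholders, not the Simulator’s real values:

```xml
<?xml version="1.0" encoding="utf-8"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="urn:mozilla:install-manifest">
    <em:id>my-addon@example.com</em:id>
    <em:version>1.0.0</em:version>
    <!-- the browser fetches this file to check for new versions -->
    <em:updateURL>https://example.com/my-addon/update.rdf</em:updateURL>
  </Description>
</RDF>
```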
You could have some sort of “template” file to generate the actual manifest at build time – you already have a build step that creates the xpi file for the add-on anyway, so it’s just a matter of creating this little file.

You also have to create the update.rdf file, which is what the browser will check periodically to see if there’s an update. Think of it as an RSS feed that the browser subscribes to ;-)

Here’s, again, an example of what an update.rdf file looks like, taken from one of the Firefox OS Simulators:

<?xml version="1.0" encoding="utf-8"?>
<RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
     xmlns:em="http://www.mozilla.org/2004/em-rdf#">
  <Description about="">
    ...
  </Description>
</RDF>

And again, this file could be generated at build time and perhaps checked into the repo along with the xpi file containing the add-on itself, and served using GitHub Pages, which does support serving over HTTPS.

The Firefox OS simulators are a fine example of add-ons that you can install, get automatic updates for, and are not hosted in

Hope this helps.

Thanks to Ryan Stinnett and Alex Poirot for their information-rich answers to my questions–they made this post possible!



Benoit Girard: Improving Layer Dump Visualization

Mozilla planet - Fri, 28/11/2014 - 08:16

I’ve blogged before about adding a feature to visualize platform log dumps, including the layer tree. This week, while working on bug 1097941, I had no idea which module the bug was coming from. I used this opportunity to improve the layer visualization features, hoping that it would help me identify the bug. Here are the results (working for both desktop and mobile):

Layer Tree Visualization Demo

This tool works by parsing the output of layers.dump and layers.dump-texture (not yet landed). I reconstruct the data as DOM nodes, which can quite trivially support the features of a layer tree, because layer trees are designed to map onto CSS. From there, some JavaScript or the browser devtools can be used to inspect the tree. In my case, all I had to do was locate which layer my bad texture data was coming from: 0xAC5F2C00. If you want to give it a spin, just copy this pastebin, paste it here and hit ‘Parse’. Note: I don’t intend to keep backwards compatibility with this format, so this pastebin may break after I go through the review for the new layers.dump-texture format.


Yura Zenevich: Resources For Contributing to Firefox OS Accessibility.

Mozilla planet - Fri, 28/11/2014 - 01:00

28 Nov 2014 - Toronto, ON

I believe that when contributing to Firefox OS and Gaia, just like with most open source projects, a lot of attention should be given to reducing the barrier to entry for new contributors. This is even more vital for Gaia, since it is an extremely fast-moving project and the number of things to keep track of is overwhelming. In an attempt to make it easier to start contributing to Firefox OS Accessibility, I compiled the following list of resources, which I will try to keep up to date. It should be helpful for a successful entry into the project:

Firstly, links to high level documentation:


The Gaia project is hosted on GitHub and uses Git for version control. Here's a link to the project source code:

One of my coworkers (James Burke) proposed the following workflow that you might find useful (link to source):

  • Fork Gaia using the GitHub UI (I will use my GitHub user name – yzen – as an example).
  • Clone your fork of Gaia locally, and set up a "remote" called "upstream" that points to the mozilla-b2g/gaia repo:
git clone --recursive https://github.com/yzen/gaia.git gaia-yzen
cd gaia-yzen
git remote add upstream https://github.com/mozilla-b2g/gaia.git
  • For each bug you are working on, create a branch to work on it. This branch will be used for the pull request when you are ready. So, taking bug number 123 as an example, and assuming you are starting in your clone's master branch in the project directory:
# this updates your local master to match mozilla-b2g's latest master
# you should always do this to give better odds your change will work
# with the latest master state for when the pull request is merged
git pull upstream master
# this updates your fork on github's master to match
git push origin master
# Create bug-specific branch based on current state of master
git checkout -b bug-123
  • Now you will be in the bug-123 branch locally, and its contents will look the same as the master branch. The idea with bug-specific branches is that you keep your master branch pristine and only matching what is in the official mozilla-b2g branch. No other local changes. This can be useful for comparisons or rebasing.

  • Do the changes in relation to the bug you are working on.

  • Commit the change to the branch and then push the branch to your fork. For the commit message, you can just copy in the title of the bug:

git commit -am "Bug 123 - this is the summary of changes."
git push origin bug-123
  • Now you can go to your fork on GitHub and open the pull request.

  • If, in the course of the review, you need to make other commits to the branch for review feedback, then once it is all reviewed, you can flatten all the commits into one commit and force-push the change to your branch. I normally use rebase -i for this. So, in the gaia-yzen directory, while you are in the bug-123 branch, you can run:

git rebase -i upstream/master

At this point, git gives you a way to edit all the commits. I normally 'pick' the first one, then choose 's' for squash for the rest, so the rest of the commits are squashed to the first picked commit.
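Concretely, the todo list that git opens at this point looks something like the following (the hashes and messages here are made up); marking every line after the first with 's' folds those commits into the first one:

```text
pick 1a2b3c4 Bug 123 - this is the summary of changes.
s 5d6e7f8 address review comments
s 9a0b1c2 fix failing unit test
```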

Once that is done and git is happy, you can then force-push the new state of the branch back to GitHub:

git push -f origin bug-123

More resources at:

Source Code
  • All apps are located in the apps/ directory. Each app is located within its own directory. So, for example, if you are working on the Calendar app you would be making your changes in the apps/calendar directory.

  • To make sure that the improvements we work on actually help Firefox OS accessibility and do not cause regressions, we have a policy of adding gaia-ui Python Marionette tests for all new accessible functionality. You can find the tests in the tests/python/gaia-ui-tests/gaiatest/tests/accessibility/ directory.

More resources at:

Building and Running Gaia

Testing

Localization

Localization is very relevant to accessibility especially because one of the tasks that we perform when making something accessible is ensuring that all elements in the applications are labeled for the user of assistive technologies. Please see Localization best practices for guidelines on how to add new text to applications.

Debugging

Using a screen reader

Using a device or navigating a web application is different with a screen reader. The screen reader introduces the concept of a virtual cursor (or focus) that represents the screen reader's current position inside the app or web page. For more information and example videos, please see: Screen Reader


Here are some of the basic resources to help you get to know what mobile accessibility (and accessibility in general) is:



Kevin Ngo: Adelheid, an Interactive Photocentric Storybook

Mozilla planet - Fri, 28/11/2014 - 01:00
Photos scroll along the bottom, pages slide left and right.

Half a year ago, I built an interactive photocentric storybook as a gift to my girlfriend for our anniversary. It binds photos, writing, music, and animation together into an experiential walk down memory lane. I named it Adelheid, a long-form version of my girlfriend's name. It took me about a month of my after-work free time, whenever she wasn't around. Adelheid is an amalgamation of my thoughts, as it molds my joy of photography, writing, and web development into an elegantly-bound package.


A preview of the personal storybook I put together.

Design Process

As before, I wanted it to be a representation of myself: photography, writing, web development. I spent time sketching it out in a notebook and came up with this. The storybook is divided into chapters. Chapters consist of a song, summary text, a key photo, other photos, and moments. Moments are like subchapters; they consist of text and a key photo. Chapters slide left and right like pages in a book, photos roll through the bottom like an image reel, moments lie behind the chapters like the back of a notecard, all while music plays in the background. Then I put in a title page at the beginning that lifts like a stage curtain.

It took a month of work to bring it to fruition, and it was at last unveiled as a surprise on a quiet night at Picnic Island Park in Tampa, Florida.

Technical Bits

With all of the large image and audio files, it becomes quite a large app. My private storybook weighs in at about 110MB, as a single-page app! That's quite ludicrous. However, I made it easy on myself by intending it to be used only as a packaged app. This means I don't have to worry about load times over a web server, since all assets are downloaded and installed as a desktop app.

Unfortunately, it currently only works well in Firefox. Chrome was targeted initially but was soon dropped to decrease maintenance time and hit my deadline. There's a lot of fancy animation going on, and it was difficult to get it working properly in both browsers. It's not only CSS compatibility: it currently only works as a packaged app for Firefox. Packaged apps have not been standardized, and I only configured packaged app manifests to Firefox's specifications.

After the whole project, I became a bit more adept at CSS3 animations. This included the chapter turns, image reels, and moment flips. One nice touch was parallaxed images: the key images transitioned a bit more slowly to give off a three-dimensional effect. The audio also faded in and out between chapter turns, using a web audio library.

You can install the demo app at


Christian Heilmann: What if everything is awesome?

Mozilla planet - Thu, 27/11/2014 - 23:54


These are the notes for my talk at Codemotion Madrid this year.
You can watch the screencast on YouTube and you can check out the slides at Slideshare.

An incredibly silly movie

The other day I watched Pacific Rim and was baffled by the awesomeness and the awesome inanity of this movie.

Let’s recap a bit:

  • There is a rift to another dimension under water that lets alien monsters escape into our world.
  • These monsters attack our cities and kill people. That is bad.
  • The most effective course of action is that we build massive, erect walking robots on land to battle them.
  • These robots are controlled by pilots who walk inside them and throw punches to box these monsters.
  • These pilots all are super fit, ripped and beautiful and in general could probably take on these monsters in a fight with bare hands. The scientists helping them are helpless nerds.
  • We need to drop the robots with helicopters to where they are needed, because that looks awesome, too.

All in all, the movie is borderline insane: if we had a rift like that under water, all we'd need to do is mine it. Or have some massive ships and submarines waiting where the rift is, ready to shoot and bomb anything that comes through. Which, of course, beats trying to communicate with it.

The issue is that this solution would not make for a good blockbuster 3D movie aimed at 13-year-olds. Nothing fights or breaks in a fantastic manner and you can't show crumbling buildings. We'd be stuck with mundane tasks, like writing a coherent script, proper acting, or even manual camera work and real settings instead of green screen. We can't have that.

Tech press hype

What does all that have to do with web development? Well, I get the feeling we got to a world where we try to be awesome for the sake of being awesome. And at the same time we seem to be missing out on the fact that what we have is pretty incredible as it is.

One thing I blame is the tech press. We still get weekly Cinderella stories of the lonely humble developer making it big with his first app (yes, HIS first app). We hear about companies buying each other for billions and everything being incredibly exciting.

Daily frustrations

In stark contrast to that, our daily lives as developers are full of frustrations. People don't know what we do, for example, and thus can't really give us feedback on it. We only appear on the scene when things break.

Our users can also be a bit of an annoyance as they do not upgrade the way we want them to and keep using things differently than we anticipated.

And even when we mess up, not much happens. We put our hearts and lots of efforts into our work. When we see something obviously broken or in dire need of improvement we want to fix it. The people above us in the hierarchy, however, are happy to see it as a glitch to fix later.

Flight into professionalism

Instead of working on the obviously broken communication between us and those who use our products, and between us and those who sell them (or even us and those who maintain them), I hear a louder and louder call for “professional development”. This involves many abstractions, intelligent package managers and build scripts that automate a lot of the annoying cruft of our craft. Cruft in the form of extraneous code: code that got there because of mistakes that our awesome new solutions make go away. But isn't the real question why we still make so many mistakes in the first place?

Apps good, web bad!

One of the things we seem to be craving is to match the state of affairs of native platforms, especially the form factor of apps. Apps seem to be the new, modern form factor of software delivery. Fact is that they are a questionable success (and may even be a step back in software evolution, as I put it in a TEDx talk). If you look at who earns something with them and how long they are in use on average, it is hard to shout “hooray for apps”. On the web, there is the problem that there are so far no standards defining an app that work across platforms. If you wonder how that is coming along, the current state of mobile apps on the web is described in meticulous detail at the W3C.

Generic code is great code?

A lot of what we crave as developers is generic. We don't want to write code that does one job well; we want to write code that can take any input and do something intelligent with it. This is feel-good code. We are not only clever enough to be programmers, we also write solutions that prevent people from making mistakes by predicting them.

Fredrik Noren wrote a brilliant piece on this called “On Generalisation”. In it he argues that writing generic code means trying to predict the future and that we are bad at that. He calls out for simpler, more modular and documented code that people can extend instead of catch-all solutions to simple problems.

I found myself nodding along while reading this. There seems to be a general tendency to re-invent instead of improving existing solutions. This comes naturally to developers – we want to create instead of read and understand. I also blame sites like Hacker News, which are a beauty pageant of small, quick and super intelligent technical solutions for every conceivable problem out there.

Want some proof? How about Static Site Generators listing 295 different ways to create static HTML pages? Let’s think about that: static HTML pages!

The web is obese!

We try to fix our world by stacking abstractions and creating generic solutions for small issues. The common development process and especially the maintenance process looks different, though.

People using Content Management Systems to upload lots of un-optimised photos are a problem. People using too many of our sleek and clever solutions also add to the fact that web performance is still a big issue. According to the HTTP Archive, the average web site is 2 MB of data delivered in 100(!) HTTP requests. And that is years after we told people that each request is a massive cause of a slow and sluggish web experience. How can anyone explain things like the new LG G Watch site clocking in at 54 MB on the first load whilst being a responsive design?

Tools of awesome

There are no excuses. We have incredible tools that give us massive insight into our work. What we do is not a black art any longer; we don't have to hope that browsers do good things with our code. We can peek under the hood and see the parts moving. WebPagetest is incredible. It gives us detailed insight into what is going right and wrong in our web sites, right in the browser. You can test the performance of a site simulating different speeds and load it from servers all over the world. You get a page optimisation checklist, graphs about what got loaded when, and when things started rendering. You even get a video of your page loading and getting ready for users to play with.

There are many resources on how to use this tool and others that help us fix performance issues. Addy Osmani gave a great talk at CSS Conf in Berlin, re-writing the JSConf web site live on stage using many of these tools.

Browsers are incredible tools

Browsers have evolved from simple web consumption tools to full-on development environments. Almost all browsers have some developer tools built in that not only allow you to see the code in the current page but also to debug. You have step-by-step debugging of JavaScript, CSS debugging and live previews of colours, animations, element dimensions, transforms and fonts. You have insight into what was loaded in what sequence, you can see what is in localStorage and you can do performance analysis and see the memory consumption.
The innovation in development tools of browsers is incredible and moves at an amazing speed. You can now even debug on devices connected via USB or wireless and Chrome allows you to simulate various devices and network/connectivity conditions.
Sooner or later this might mean that we won't need any other editors any more. Any user downloading a browser could also become a developer. And that is incredible. But what about older browsers?

Polyfills as a service

A lot of bloat on the web happens because of us trying to give new, cool effects to old, tired browsers. We do this because of a wrong understanding of the web. It is not about giving the same functionality to everybody, but instead to give a working experience to everybody.

The idea of a polyfill is genius: write a solution that lets an older environment play with new functionality and gets our UX ready for the time browsers support it. It fails to be genius when we never, ever remove the polyfills from our solutions. The Financial Times developer team had a great idea: offering polyfills as a service. This means you include one JavaScript file:

<script src="//" async defer> </script>

You can define which functionality you want to polyfill and it’ll be done that way. When the browser supports what you want, the stop-gap solution never gets included at all. How good is that?
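As a sketch of what such a scoped include could look like, the snippet below requests only two features; the host, path and features query parameter are assumptions about the service's interface, so check the service's documentation for the real syntax:

```html
<!-- only ship stop-gaps for Promise and fetch, and only to browsers
     that actually lack them -->
<script src="https://cdn.polyfill.io/v1/polyfill.min.js?features=Promise,fetch"
        async defer></script>
```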

Flexbox growing up

Another thing of awesome turned up the other day at CSS-Tricks: Chris Coyier uses Flexbox to create a toolbar that has fixed elements and others using up the rest of the space. It extends semantic HTML and does a great job of being responsive.


All the CSS code needed for it is this:

*, *:before, *:after {
  -moz-box-sizing: inherit;
  box-sizing: inherit;
}
html {
  -moz-box-sizing: border-box;
  box-sizing: border-box;
}
body {
  padding: 20px;
  font: 100% sans-serif;
}
.bar {
  display: -webkit-flex;
  display: -ms-flexbox;
  display: flex;
  -webkit-align-items: center;
  -ms-flex-align: center;
  align-items: center;
  width: 100%;
  background: #eee;
  padding: 20px;
  margin: 0 0 20px 0;
}
.bar > * {
  margin: 0 10px;
}
.icon {
  width: 30px;
  height: 30px;
  background: #ccc;
  border-radius: 50%;
}
.search {
  -webkit-flex: 1;
  -ms-flex: 1;
  flex: 1;
}
.search input {
  width: 100%;
}
.bar-2 .username {
  -webkit-order: 2;
  -ms-flex-order: 2;
  order: 2;
}
.bar-2 .icon-3 {
  -webkit-order: 3;
  -ms-flex-order: 3;
  order: 3;
}
.bar-3 .search {
  -webkit-order: -1;
  -ms-flex-order: -1;
  order: -1;
}
.bar-3 .username {
  -webkit-order: 1;
  -ms-flex-order: 1;
  order: 1;
}
.no-flexbox .bar {
  display: table;
  border-spacing: 15px;
  padding: 0;
}
.no-flexbox .bar > * {
  display: table-cell;
  vertical-align: middle;
  white-space: nowrap;
}
.no-flexbox .username {
  width: 1px;
}
@media (max-width: 650px) {
  .bar {
    -webkit-flex-wrap: wrap;
    -ms-flex-wrap: wrap;
    flex-wrap: wrap;
  }
  .icon {
    -webkit-order: 0 !important;
    -ms-flex-order: 0 !important;
    order: 0 !important;
  }
  .username {
    -webkit-order: 1 !important;
    -ms-flex-order: 1 !important;
    order: 1 !important;
    width: 100%;
    margin: 15px;
  }
  .search {
    -webkit-order: 2 !important;
    -ms-flex-order: 2 !important;
    order: 2 !important;
    width: 100%;
  }
}

That is pretty incredible, isn’t it?
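For context, here is a minimal markup sketch that these styles could attach to; the class names come from the CSS above, but the exact structure of the original demo is an assumption:

```html
<!-- one toolbar; .bar-2 triggers the reordering rules above -->
<div class="bar bar-2">
  <div class="icon icon-1"></div>
  <div class="search"><input type="search" placeholder="Search"></div>
  <div class="username">username</div>
  <div class="icon icon-2"></div>
  <div class="icon icon-3"></div>
</div>
```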

More near-future tech of awesome

Other things that are brewing get me equally excited. WebRTC, WebGL, Web Audio and many more things are pointing to a high-fidelity web. A web that allows for rich gaming experiences and productivity tools built right into the browser. We can video and audio chat with each other and send data in a peer-to-peer fashion without relying on or burning up a server between us.

Service Workers will allow us to build a real offline experience. Unlike AppCache, the hope is that users will get something that works and does not aggressively cache outdated information. If you want to know more about that, watch these two amazing videos by Jake Archibald: The Service Worker: The Network layer that is yours to own and The Service worker is coming, look busy!

Web Components have been the near future for quite a while now and seem to be in a bit of a “let's build a framework instead” rut. Phil Legetter has done an incredible job collecting what that looks like. It is true: support for Shadow DOM across the board is still not quite there. But a lot of these frameworks offer incredible client-side functionality that could go into the standard.

What can you do?

I think it is time to stop chasing the awesome of “soon we will be able to use that” and instead be more fearless about using what we have now. We love to write about just how broken things are when they are in their infancy. We tend to forget to re-visit them when they've matured. Many things that were a fever dream a year ago are now ready for you to roll out – if you work with progressive enhancement. In general, this is a safe bet, as the web will never be in a finished state. Even native platforms are only in a fixed state between major releases. Mattias Petter Johansson of Spotify put it quite succinctly in a thread on why JavaScript is the only client-side language:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

The web is chaotic, so much for sure, but it also aims to be longer lasting than other platforms. The in-built backwards compatibility of its technologies makes it a beautiful investment. As Paul Bakaus of Google put it:

If you build a web app today, it will run in browsers 10 years from now. Good luck trying the same with your favorite mobile OS (excluding Firefox OS).

The other issue we have to overcome is the dogma associated with some of our decisions. Yes, it would be excellent if we could use open web standards to build everything. It would be great if all solutions followed the web's principles of distribution, readability and easy sharing. But we live in a world that has changed. In many ways, in the mobile space, we have to count our blessings. We can and should allow some closed technology to take its course before we go back to these principles. We've done it with Flash, we can do it with others, too. My mantra these days is the following:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Go share and teach

One thing is for sure: we never had a more amazing environment to learn and share. Services like GitHub, JSFiddle, JSBin and Codepen make it easy to distribute and explain code. You can show instead of describe, and you can fix instead of telling people they are doing it wrong. There is no better way to learn than to show, and if you set out to teach, you end up learning.

A great demo of this is together.js. Using this WebRTC-based tool (or its implementation in JSFiddle, by hitting the collaborate button) you can code together, with several cursors, audio chat or a text chat client, directly in the browser. You explain in context and collaborate live. And you help each other learn something and get better. And this is what is really awesome.


Mozilla changes Firefox's search function - Tweakers

News collected via Google - Thu, 27/11/2014 - 09:14

Mozilla changes Firefox's search function
The keyword search feature, which lets users enter for example 'pricewatch tablets' to search the Pricewatch for tablets directly from the URL bar, will no longer work in the upcoming version, but according to Mozilla will ...


Allison Naaktgeboren: Applying Privacy Series: The 2nd meeting

Mozilla planet - Thu, 27/11/2014 - 06:04

The day after the first meeting…

Engineering Manager: Welcome DBA, Operations Engineer, and Privacy Officer. Did you all get a chance to look over the project wiki? What do you think?

Operations Engineer: I did.

DBA: Yup, and I have some questions.

Privacy Officer: Sounds really cool, as long as we’re careful.

Engineer: We’re always careful!

DBA: There are a lot of pages on the web. Keeping that much data is going to be expensive. I didn’t see anything on the wiki about evicting entries, and for a table that big, we’ll need to do that regularly.

Privacy Officer: Also, when will we delete the device ids? Those are like a fingerprint for someone’s phone, so keeping them around longer than absolutely necessary increases risk for both the user and the company.

Operations Engineer: The less we keep around, the less it costs to maintain.

Engineer: We know that most mobile users have only 1-3 pages open at any given time and we estimate no more than 50,000 users will be eligible for the service.

DBA: Well that does suggest a manageable load, but that doesn’t answer my question.

Engineer: Shall we say that if a page hasn’t been accessed in 48 hours, we evict it from the server? And we can tune that knob as necessary?

Operations Engineer: As long as I can tune it in prod if something goes haywire.

Privacy Officer: And device ids?

Engineer: Apply the same rule to them?

Engineering Manager: 48 hours would be too short. Not everyone uses their mobile browser every day. I’d be more comfortable with 90 days to start.

DBA: I imagine you’d want secure destruction for the ids.

Privacy Officer: You got it!

DBA: What about the backup tapes? We back up the dbs regularly.

Privacy Officer: Are the backups online?

DBA: No, like I said, they’re on tape. Someone has to physically run ‘em through a machine. You’d need physical access to the backup storage facility.

Privacy Officer: Then it’s probably fine if we don’t delete from the tapes.

Operations Engineer: What is the current timeline?

Engineer: End of the quarter, 8 weeks or so.

Operations Engineer: We’re under water right now, so it might be tight getting the hardware in & set up. New hardware orders usually take 6 weeks to arrive. I can’t promise the hardware will be ready in time.

Engineering Manager: We understand, please do your best and if we have to, Product Manager won’t be happy, but we’ll delay the feature if we need to.

Privacy Officer: Who’s going to be responsible for the data on the stage & production servers?

Engineering Manager: Product Manager has final say.

DBA: Thanks, good to know!

Engineer: I’ll draw up a plan and send it around for feedback tomorrow.


Who brought up user data safety & privacy concerns in this conversation?

Privacy Officer is obvious. The DBA & Operations Engineer also raised privacy concerns.

Categorieën: Mozilla-nl planet

Robert Helmer: Better Source Code Browsing With FreeBSD and Mozilla DXR

Mozilla planet - do, 27/11/2014 - 05:30

Lately I've been reading about the design and implementation of the FreeBSD Operating System (great book, you should read it).

However I find browsing the source code quite painful. Using vim or emacs is fine for editing individual files, but when you are trying to understand and browse around a large codebase, dropping to a shell and grepping/finding around gets old fast. I know about ctags and similar, but I also find editors uncomfortable for browsing large codebases for an extended amount of time - web pages tend to be easier on the eyes.

There's an LXR fork called FXR available, which is way better and I am very grateful for it - however it has all the same shortcomings of LXR that we've become very familiar with on the Mozilla LXR fork (MXR):

  • based on regex, not static analysis of the code - sometimes it gets things wrong, and it doesn't really understand the difference between a variable with the same name in different files
  • not particularly easy on the eyes (shallow and easily fixable, I know)

I've been an admirer of Mozilla's next gen code browsing tool, DXR, for a long time now. DXR uses a clang plugin to do static analysis of the code, so it produces the real call graph - this means it doesn't need to guess at the definition of types or where a variable is used, it knows.

A good example is to contrast a file on MXR with the same file on DXR. Let's say you wanted to know where this macro was first defined, that's easy in DXR - just click on the word "NS_WARNING" and select "Jump to definition".

Now try that on MXR - clicking on "NS_WARNING" instead yields a search which is not particularly helpful, since it shows every place in the codebase that the word "NS_WARNING" appears (note that DXR has the ability to do this same type of search, in case that's really what you're after).

So that's what DXR is and why it's useful. I got frustrated enough with the status quo while trying to grok the FreeBSD sources that I took a few days and, with the help of folks in the #static channel (particularly Erik Rose), got DXR running on FreeBSD and indexed a tiny part of the source tree as a proof of concept (the source for "/bin/cat"):

This is running on a FreeBSD instance in AWS.

DXR is currently undergoing major changes, SQLite to ElasticSearch transition being the central one. I am tracking how to get the "es" branch of DXR going in this gist.

Currently I am able to get a LINT kernel build indexed on DXR master branch, but still working through issues on the "es" branch.

Overall, I feel like I've learned way more about static analysis, how DXR works, and the FreeBSD source code; produced some useful patches for Mozilla and the DXR project; and hopefully will provide a useful resource for the FreeBSD project, all along the way. Totally worth it, and I highly recommend working with all of the aforementioned :)

Categorieën: Mozilla-nl planet

Brian R. Bondy: Developing and releasing the Khan Academy Firefox OS app

Mozilla planet - do, 27/11/2014 - 04:51

I'm happy to announce that the Khan Academy Firefox OS app is now available in the Firefox Marketplace!

Khan Academy’s mission is to provide a free world-class education for anyone anywhere. The goal of the Firefox OS app is to help with the “anyone anywhere” part of the KA mission.


There's something exciting about being able to hold a world class education in your pocket for the cheap price of a Firefox OS phone. Firefox OS devices are mostly deployed in countries where the cost of an iPhone or Android based smart phone is out of reach for most people.

The app enables developing countries, lower income families, and anyone else to take advantage of the Khan Academy content. A persistent internet connection is not required.

What's that.... you say you want another use case? Well OK, here goes: A parent wanting each of their kids to have access to Khan Academy at the same time could be very expensive in device costs. Not anymore.


App features
  • Access to the full library of Khan Academy videos and articles.
  • Search for videos and articles.
  • Ability to sign into your account for:
      • Profile access.
      • Earning points for watching videos.
      • Continuing where you left off from previous partial video watches, even if that was on the live site.
      • Partial and full completion status of videos and articles.
  • Downloading videos, articles, or entire topics for later use.
  • Sharing functionality.
  • Significant effort was put into minifying topic tree sizes for minimal memory use and faster loading.
  • Scrolling transcripts for videos as you watch.
  • The UI is highly influenced by the first-generation Khan Academy iPhone app.
Development statistics
  • 340 commits
  • 4 months of consecutive commits with at least 1 commit per day
  • 30 minutes - 2 hours per day max
Technologies used

Technologies used to develop the app include:


The app is fully localized for English, Portuguese, French, and Spanish, and will use those locales automatically depending on the system locale. The content (videos, articles, subtitles) that the app hosts will also automatically change.

I was lucky enough to have several amazing and kind translators for the app volunteer their time.

The translations are hosted and managed on Transifex.

Want to contribute?

The Khan Academy Firefox OS app source is hosted in one of my github repositories and periodically mirrored on the Khan Academy github page.

If you'd like to contribute there's a lot of future tasks posted as issues on github.

Current minimum system requirements
  • Around 8MB of space.
  • 512 MB of RAM
Low memory devices

By default, apps on the Firefox marketplace are only served to devices with at least 500MB of RAM. To get them on 256MB devices, you need to do a low memory review.

One of the major enhancements I'd like to add next, is to add an option to use the YouTube player instead of HTML5 video. This may use less memory and may be a way onto 256MB devices.

How about exercises?

They're coming in a future release.

Getting preinstalled on devices

It's possible to request to get pre-installed on devices and I'll be looking into that in the near future after getting some more initial feedback.

Projects like Matchstick also seem like a great opportunity for this app.

Categorieën: Mozilla-nl planet

Hannah Kane: We are very engaging

Mozilla planet - wo, 26/11/2014 - 23:12

Yesterday someone asked me what the engagement team is up to, and it made me sad because I realized I need to do a waaaaay better job of broadcasting my team’s work. This team is dope and you need to know about it.

As a refresher, our work encompasses these areas:

  • Grantwriting
  • Institutional partnerships
  • Marketing and communications
  • Small dollar fundraising
  • Production work (i.e. Studio Mofo)

In short, we aim to support the Webmaker product and programs, and our leadership pipelines, any time we need to engage individuals or institutions.

What’s currently on our plate:

Pro-tip: You can always see what we’re up to by checking out the Engagement Team Workbench.

These days we’re spending our time on the following:

  • End of Year Fundraising: With the help of a slew of kick-ass engineers, Andrea and Kelli are getting to $2M. (view the Workbench).
  • Mozilla Gear launch: Andrea and Geoffrey are obsessed with branded hoodies. To complement our fundraising efforts, they just opened a brand new site for people to purchase Mozilla Gear (view the project management spreadsheet).
  • Fall Campaign: Remember the 10K contributor goal? We do! An-Me and Paul have been working with Claw, Amira, Michelle, and Lainie, among others, to close the gap through a partner-based strategy (view the Workbench).
  • Mobile Opportunity: Ben is helping to envision and build partnerships around this work, and Paul and Studio Mofo are providing marketing, comms, and production support (the Mobile Opportunity Workbench is here, the engagement-specific work will be detailed soon).
  • Building a Webmaker Marketing Plan for 2015: The site and programs aren’t going to market themselves! Paul is drafting a comprehensive marketing calendar for 2015 that complements the product and program strategies. (plan coming soon)
  • 2015 Grants Pipeline: Ben and An-Me are always on the lookout for opportunities, and Lynn is responsible for writing grants and reports to fund our various programs and initiatives.
  • Additional Studio Mofo projects: Erika, Mavis, and Sabrina are always working on something. In addition to their work supporting most of the above, you can see a full list of projects here.
  • Salesforce for grants and partnerships: We’ve completed a custom Salesforce installation and Ben has begun the process of training staff to use it. Much more to come to make it a meaningful part of our workflow (Workbench coming soon).
  • Open Web Fellows recruitment: We’re supporting our newest fellowship with marketing support (view the Hype Plan)

Categorieën: Mozilla-nl planet

Niko Matsakis: Purging proc

Mozilla planet - wo, 26/11/2014 - 22:58

The so-called “unboxed closure” implementation in Rust has reached the point where it is time to start using it in the standard library. As a starting point, I have a pull request that removes proc from the language. I started on this because I thought it’d be easier than replacing closures, but it turns out that there are a few subtle points to this transition.

I am writing this blog post to explain what changes are in store and give guidance on how people can port existing code to stop using proc. This post is basically targeted at Rust devs who want to adapt existing code, though it also covers the closure design in general.

To some extent, the advice in this post is a snapshot of the current Rust master. Some of it is specifically targeting temporary limitations in the compiler that we aim to lift by 1.0 or shortly thereafter. I have tried to mention when that is the case.

The new closure design in a nutshell

For those who haven’t been following, Rust is moving to a powerful new closure design (sometimes called unboxed closures). This part of the post covers the highlight of the new design. If you’re already familiar, you may wish to skip ahead to the “Transitioning away from proc” section.

The basic idea of the new design is to unify closures and traits. The first part of the design is that function calls become an overloadable operator. There are three possible traits that one can use to overload ():

    trait Fn<A,R> { fn call(&self, args: A) -> R; }
    trait FnMut<A,R> { fn call_mut(&mut self, args: A) -> R; }
    trait FnOnce<A,R> { fn call_once(self, args: A) -> R; }

As you can see, these traits differ only in their “self” parameter. In fact, they correspond directly to the three “modes” of Rust operation:

  • The Fn trait is analogous to a “shared reference” – it means that the closure can be aliased and called freely, but in turn the closure cannot mutate its environment.
  • The FnMut trait is analogous to a “mutable reference” – it means that the closure cannot be aliased, but in turn the closure is permitted to mutate its environment. This is how || closures work in the language today.
  • The FnOnce trait is analogous to “ownership” – it means that the closure can only be called once. This allows the closure to move out of its environment. This is how proc closures work today.
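The same three traits shipped with Rust 1.0 in essentially this form. As a hedged sketch in modern (post-1.0) syntax, the three helpers below (invented names, not APIs from the post) each accept a closure through one of the traits:

```rust
// Takes any closure callable through a shared reference; a `Fn`
// closure cannot mutate its captured state.
fn call_fn<F: Fn() -> u32>(f: F) -> u32 {
    f() + f()
}

// Takes any closure that may mutate its captured environment; note
// the binding must be `mut` to call it.
fn call_fn_mut<F: FnMut() -> u32>(mut f: F) -> u32 {
    f() + f()
}

// Takes any closure callable at most once; it may move captured
// values out of its environment.
fn call_fn_once<F: FnOnce() -> String>(f: F) -> String {
    f()
}
```

The `mut f` parameter in `call_fn_mut` mirrors the "mutable reference" analogy above, just as the by-value `self` in `call_once` mirrors ownership.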
Enabling static dispatch

One downside of the older Rust closure design is that closures and procs always implied virtual dispatch. In the case of procs, there was also an implied allocation. By using traits, the newer design allows the user to choose between static and virtual dispatch. Generic types use static dispatch but require monomorphization, and object types use dynamic dispatch and hence avoid monomorphization and grant somewhat more flexibility.

As an example, whereas before I might write a function that takes a closure argument as follows:

    fn foo(hashfn: |&String| -> uint) {
        let x = format!("Foo");
        let hash = hashfn(&x);
        ...
    }

I can now choose to write that function in one of two ways. I can use a generic type parameter to avoid virtual dispatch:

    fn foo<F>(hashfn: F)
        where F : FnMut(&String) -> uint
    {
        let x = format!("Foo");
        let hash = hashfn(&x);
        ...
    }

Note that we write the type parameters to FnMut using parentheses syntax (FnMut(&String) -> uint). This is a convenient syntactic sugar that winds up mapping to a traditional trait reference (currently, for<'a> FnMut<(&'a String,), uint>). At the moment, though, you are required to use the parentheses form, because we wish to retain the liberty to change precisely how the Fn trait type parameters work.

A caller of foo() might write:

    let some_salt: String = ...;
    foo(|str| myhashfn(str.as_slice(), &some_salt))

You can see that the || expression still denotes a closure. In fact, the best way to think of it is that a || expression generates a fresh structure that has one field for each of the variables it touches. It is as if the user wrote:

    let some_salt: String = ...;
    let closure = ClosureEnvironment { some_salt: &some_salt };
    foo(closure);

where ClosureEnvironment is a struct like the following:

    struct ClosureEnvironment<'env> {
        some_salt: &'env String
    }

    impl<'env,'arg> FnMut(&'arg String) -> uint for ClosureEnvironment<'env> {
        fn call_mut(&mut self, (str,): (&'arg String,)) -> uint {
            myhashfn(str.as_slice(), &self.some_salt)
        }
    }

Obviously the || form is quite a bit shorter.
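The desugaring can still be mimicked on a modern toolchain. Hand-implementing the real Fn traits remains unstable, so this sketch substitutes an ordinary method named call_mut and an invented fake_hash stand-in for myhashfn:

```rust
// A plain struct standing in for the compiler-generated closure
// environment: one field per captured variable, captured by reference.
struct ClosureEnv<'env> {
    some_salt: &'env String,
}

impl<'env> ClosureEnv<'env> {
    // Ordinary method playing the role of `call_mut`.
    fn call_mut(&mut self, s: &str) -> usize {
        fake_hash(s, self.some_salt)
    }
}

// Invented stand-in for the post's `myhashfn`: "hashes" by summing lengths.
fn fake_hash(s: &str, salt: &str) -> usize {
    s.len() + salt.len()
}

fn env_demo() -> usize {
    let some_salt = String::from("salt");
    let mut env = ClosureEnv { some_salt: &some_salt };
    env.call_mut("abc")
}
```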

Using object types to get virtual dispatch

The downside of using generic type parameters for closures is that you will get a distinct copy of the fn being called for every callsite. This is a great boon to inlining (at least sometimes), but it can also lead to a lot of code bloat. It’s also often just not practical: many times we want to combine different kinds of closures together into a single vector. None of these concerns are specific to closures. The same things arise when using traits in general. The nice thing about the new closure design is that it lets us use the same tool – object types – in both cases.

If I wanted to write my foo() function to avoid monomorphization, I might change it from:

    fn foo<F>(hashfn: F)
        where F : FnMut(&String) -> uint
    {...}


to:

    fn foo(hashfn: &mut FnMut(&String) -> uint)
    {...}

Note that the argument is now a &mut FnMut(&String) -> uint, rather than being of some type F where F : FnMut(&String) -> uint.

One downside of changing the signature of foo() as I showed is that the caller has to change as well. Instead of writing:

    foo(|str| ...)

the caller must now write:

    foo(&mut |str| ...)

Therefore, what I expect to be a very common pattern is to have a generic “wrapper” that calls into a non-generic inner function:

    fn foo<F>(hashfn: F)
        where F : FnMut(&String) -> uint
    {
        foo_obj(&mut hashfn)
    }

    fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
    {...}

This way, the caller does not have to change, and only this outer wrapper is monomorphized; it will likely be inlined away, and the “guts” of the function continue to use virtual dispatch.
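On a modern toolchain the same wrapper pattern looks like this (a sketch with invented names; `dyn` is the post-1.0 spelling of an object type):

```rust
// Public, generic entry point: monomorphized once per closure type,
// but small enough to be inlined away.
fn hash_foo<F>(mut hashfn: F) -> usize
    where F: FnMut(&String) -> usize,
{
    hash_foo_obj(&mut hashfn)
}

// Private worker taking a trait object: compiled exactly once and
// called via dynamic dispatch.
fn hash_foo_obj(hashfn: &mut dyn FnMut(&String) -> usize) -> usize {
    let x = String::from("Foo");
    hashfn(&x)
}
```

Callers keep writing `hash_foo(|s| ...)`; only the thin outer function is duplicated per call site.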

In the future, I’d like to make it possible to pass object types (and other “unsized” types) by value, so that one could write a function that just takes a FnMut() and not a &mut FnMut():

    fn foo(hashfn: FnMut(&String) -> uint)
    {...}

Among other things, this makes it possible to transition simply between static and virtual dispatch without altering callers and without creating a wrapper fn. However, it would compile down to roughly the same thing as the wrapper fn in the end, though with guaranteed inlining. This change requires somewhat more design and will almost surely not occur by 1.0, however.

Specifying the closure type explicitly

We just said that every closure expression like || expr generates a fresh type that implements one of the three traits (Fn, FnMut, or FnOnce). But how does the compiler decide which of the three traits to use?

Currently, the compiler is able to do this inference based on the surrounding context – basically, the closure was an argument to a function, and that function requested a specific kind of closure, so the compiler assumes that’s the one you want. (In our example, the function foo() required an argument of type F where F implements FnMut.) In the future, I hope to improve the inference to a more general scheme.

Because the current inference scheme is limited, you will sometimes need to specify which of the three fn traits you want explicitly. (Some people also just prefer to do that.) The current syntax is to use a leading &:, &mut:, or :, kind of like an “anonymous parameter”:

    // Explicitly create a `Fn` closure which cannot mutate its
    // environment. Even though `foo()` requested `FnMut`, this closure
    // can still be used, because a `Fn` closure is more general
    // than `FnMut`.
    foo(|&:| { ... })

    // Explicitly create a `FnMut` closure. This is what the
    // inference would select anyway.
    foo(|&mut:| { ... })

    // Explicitly create a `FnOnce` closure. This would yield an
    // error, because `foo` requires a closure it can call multiple
    // times in a row, but it is being given a closure that can be
    // called exactly once.
    foo(|:| { ... }) // (ERROR)

The main time you need to use an explicit fn type annotation is when there is no context. For example, if you were just to create a closure and assign it to a local variable, then a fn type annotation is required:

    let c = |&mut:| { ... };

Caveat: It is still possible we’ll change the &:/&mut:/: syntax before 1.0; if we can improve inference enough, we might even get rid of it altogether.

Moving vs non-moving closures

There is one final aspect of closures that is worth covering. We gave the example of a closure |str| myhashfn(str.as_slice(), &some_salt) that expands to something like:

    struct ClosureEnvironment<'env> {
        some_salt: &'env String
    }

Note that the variable some_salt that is used from the surrounding environment is borrowed (that is, the struct stores a reference to the string, not the string itself). This is frequently what you want, because it means that the closure just references things from the enclosing stack frame. This also allows closures to modify local variables in place.

However, capturing upvars by reference has the downside that the closure is tied to the stack frame that created it. This is a problem if you would like to return the closure, or use it to spawn another thread, etc.

For this reason, closures can also take ownership of the things that they close over. This is indicated by using the move keyword before the closure itself (because the closure “moves” things out of the surrounding environment and into the closure). Hence if we change that same closure expression we saw before to use move:

    move |str| myhashfn(str.as_slice(), &some_salt)

then it would generate a closure type where the some_salt variable is owned, rather than being a reference:

    struct ClosureEnvironment {
        some_salt: String
    }

This is the same behavior that proc has. Hence, whenever we replace a proc expression, we generally want a moving closure.

Currently we never infer whether a closure should be move or not. In the future, we may be able to infer the move keyword in some cases, but it will never be 100% (specifically, it should be possible to infer that the closure passed to spawn should always take ownership of its environment, since it must meet the 'static bound, which is not possible any other way).
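A small sketch on a modern toolchain contrasting the two capture modes (names invented for illustration):

```rust
fn capture_demo() -> (usize, String) {
    let salt = String::from("pepper");

    // By-reference capture: the closure borrows `salt`, so the
    // original binding remains usable afterwards.
    let borrow_len = || salt.len();
    let n = borrow_len();

    // By-value capture: `move` transfers ownership of `salt` into
    // the closure, which can then return it outright. This is what
    // a returned or spawned closure needs.
    let take_salt = move || salt;
    (n, take_salt())
}
```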

Transitioning away from proc

This section covers what you need to do to modify code that was using proc so that it works once proc is removed.

Transitioning away from proc for library users

For users of the standard library, the transition away from proc is fairly straightforward. Mostly it means that code which used to write proc() { ... } to create a “procedure” should now use move|| { ... }, to create a “moving closure”. The idea of a moving closure is that it is a closure which takes ownership of the variables in its environment. (Eventually, we expect to be able to infer whether or not a closure must be moving in many, though not all, cases, but for now you must write it explicitly.)

Hence converting calls to libstd APIs is mostly a matter of search-and-replace:

    Thread::spawn(proc() { ... })
    // becomes:
    Thread::spawn(move|| { ... })

    task::try(proc() { ... })
    // becomes:
    task::try(move|| { ... })
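The spawn half of this still works essentially unchanged today; a minimal sketch using the stable std::thread API (spawn_and_join is an invented wrapper, not part of the post):

```rust
use std::thread;

// `move` transfers ownership of `msg` into the spawned closure,
// satisfying the `'static` bound that `thread::spawn` requires.
fn spawn_and_join(msg: String) -> String {
    let handle = thread::spawn(move || format!("got: {}", msg));
    handle.join().unwrap()
}
```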

One non-obvious case is when you are creating a “free-standing” proc:

    let x = proc() { ... };

In that case, if you simply write move||, you will get some strange errors:

    let x = move|| { ... };

The problem is that, as discussed before, the compiler needs context to determine what sort of closure you want (that is, Fn vs FnMut vs FnOnce). Therefore it is necessary to explicitly declare the sort of closure using the : syntax:

    let x = proc() { ... };
    // becomes:
    let x = move|:| { ... };

Note also that it is precisely when there is no context that you must also specify the types of any parameters. Hence something like:

    let x = proc(x:int) foo(x * 2, y);
    //      ~~~~        ~~~~~
    //       |            |
    //       |            |
    //       |            |
    //       |            No context, specify type of parameters.
    //       |
    //      proc always owns variables it touches (e.g., `y`)

might become:

    let x = move|: x:int| foo(x * 2, y);
    //      ~~~~ ^ ~~~~~
    //       |   |    |
    //       |   |    No context, specify type of parameters.
    //       |   |
    //       |   No context, also specify FnOnce.
    //       |
    //      `move` keyword means that closure owns `y`

Transitioning away from proc for library authors

The transition story for a library author is somewhat more complicated. The complication is that the equivalent of a type like proc():Send ought to be Box<FnOnce() + Send> – that is, a boxed FnOnce object that is also sendable. However, we don’t currently have support for invoking fn(self) methods through an object, which means that if you have a Box<FnOnce()> object, you can’t call its call_once method (put another way, the FnOnce trait is not object safe). We plan to fix this – possibly by 1.0, but possibly shortly thereafter – but in the interim, there are workarounds you can use.

In the standard library, we use a trait called Invoke (and, for convenience, a type called Thunk). You’ll note that although these two types are publicly available (under std::thunk), they do not appear in the public interface of any other stable APIs. That is, Thunk and Invoke are essentially implementation details that end users do not have to know about. We recommend you follow the same practice. This is for two reasons:

  1. It generally makes for a better API. People would rather write Thread::spawn(move|| ...) and not Thread::spawn(Thunk::new(move|| ...)) (etc).
  2. Eventually, once Box<FnOnce()> works properly, Thunk and Invoke may become deprecated. If this were to happen, your public API would be unaffected.

Basically, the idea is to follow the “thin wrapper” pattern that I showed earlier for hiding virtual dispatch. If you recall, I gave the example of a function foo that wished to use virtual dispatch internally but to hide that fact from its clients. It did so by creating a thin wrapper API that just called into another API, performing the object coercion:

    fn foo<F>(hashfn: F)
        where F : FnMut(&String) -> uint
    {
        foo_obj(&mut hashfn)
    }

    fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
    {...}

The idea with Invoke is similar. The public APIs are generic APIs that accept any FnOnce value. These just turn around and wrap that value up into an object. Here the problem is that while we would probably prefer to use a Box<FnOnce()> object, we can’t because FnOnce is not (currently) object-safe. Therefore, we use the trait Invoke (I’ll show you how Invoke is defined shortly, just let me finish this example):

    pub fn spawn<F>(taskbody: F)
        where F : FnOnce(), F : Send
    {
        spawn_inner(box taskbody)
    }

    fn spawn_inner(taskbody: Box<Invoke+Send>) {
        ...
    }

The Invoke trait in the standard library is defined as:

    trait Invoke<A=(),R=()> {
        fn invoke(self: Box<Self>, arg: A) -> R;
    }

This is basically the same as FnOnce, except that the self type is Box<Self>, and not Self. This means that Invoke requires allocation to use; it is really tailored to object types, unlike FnOnce.

Finally, we can provide a bridge impl for the Invoke trait as follows:

    impl<A,R,F> Invoke<A,R> for F
        where F : FnOnce(A) -> R
    {
        fn invoke(self: Box<F>, arg: A) -> R {
            let f = *self;
            f(arg)
        }
    }

This impl allows any type that implements FnOnce to use the Invoke trait.
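The whole pattern still compiles on a modern toolchain, where Box&lt;Self&gt; receivers are object-safe. This is an illustrative transcription (run_boxed is an invented helper), not the historical libstd code:

```rust
// A modern transcription of the `Invoke` pattern described above.
trait Invoke<A = (), R = ()> {
    fn invoke(self: Box<Self>, arg: A) -> R;
}

// Bridge impl: any FnOnce closure can be driven through Invoke.
impl<A, R, F> Invoke<A, R> for F
    where F: FnOnce(A) -> R,
{
    fn invoke(self: Box<F>, arg: A) -> R {
        let f = *self;
        f(arg)
    }
}

// Call a boxed one-shot closure through the object type; the
// `Box<Self>` receiver makes this object-safe.
fn run_boxed(f: Box<dyn Invoke<i32, i32>>, arg: i32) -> i32 {
    f.invoke(arg)
}
```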

High-level summary

Here are the points I want you to take away from this post:

  1. As a library consumer, the latest changes mostly just mean replacing proc() with move|| (sometimes move|:| if there is no surrounding context).
  2. As a library author, your public interface should be generic with respect to one of the Fn traits. You can then convert to an object internally to use virtual dispatch.
  3. Because Box<FnOnce()> doesn’t currently work, library authors may want to use another trait internally, such as std::thunk::Invoke.

I also want to emphasize that a lot of the nitty gritty details in this post are transitionary. Eventually, I believe we can reach a point where:

  1. It is never (or virtually never) necessary to declare Fn vs FnMut vs FnOnce explicitly.
  2. We can frequently (though not always) infer the keyword move.
  3. Box<FnOnce()> works, so Invoke and friends are not needed.
  4. The choice between static and virtual dispatch can be changed without affecting users and without requiring wrapper functions.

I expect the improvements in inference before 1.0. Fixing the final two points is harder and so we will have to see where it falls on the schedule, but if it cannot be done for 1.0 then I would expect to see those changes shortly thereafter.

Categorieën: Mozilla-nl planet

Jared Wein: The Bugs Blocking In-Content Prefs, part 2

Mozilla planet - wo, 26/11/2014 - 20:59

At the beginning of November I published a blog post with the list of bugs that are blocking in-content prefs from shipping. Since that post, quite a few bugs have been fixed and we figured out an approach for fixing most of the high-contrast bugs.

As in the last post, bugs that should be easy to fix for a newcomer are highlighted in yellow.

Here is the new list of bugs that are blocking the release:

The list is now down to 16 bugs (from 20). In the meantime, the following bugs have been fixed:

  • Bug 1022578: Can’t tell what category is selected in about:preferences when using High Contrast mode
  • Bug 1022579: Help buttons in about:preferences have no icon when using High Contrast mode
  • Bug 1012410: Can’t close in-content cookie exceptions dialog
  • Bug 1089812: Implement updated In-content pref secondary dialogs

Big thanks goes out to Richard Marti and Tim Nguyen for fixing the above mentioned bugs as well as their continued focus on helping to bring the In-Content Preferences to the Beta and Release channels.

Tagged: firefox, planet-mozilla, ux
Categorieën: Mozilla-nl planet

Lucas Rocha: Leaving Mozilla

Mozilla planet - wo, 26/11/2014 - 17:57

I joined Mozilla 3 years, 4 months, and 6 days ago. Time flies!

I was very lucky to join the company a few months before the Firefox Mobile team decided to rewrite the product from scratch to make it more competitive on Android. And we made it: Firefox for Android is now one of the highest rated mobile browsers on Android!

This has been the best team I’ve ever worked with. The talent, energy, and trust within the Firefox for Android group is simply amazing.

I’ve thoroughly enjoyed my time here, but an exciting opportunity outside Mozilla came up and I decided to take it.

What’s next? That’s a topic for another post ;-)

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Input: New feedback form

Mozilla planet - wo, 26/11/2014 - 17:20

Since the beginning of 2014, I've been laying the groundwork to rewrite the feedback form that we use on Input.

Today, after a lot of work, we pushed out the new form! Just in time for Firefox 34 release.

This blog post covers the circumstances of the rewrite.


In 2011, James, Mike and I rewrote Input from the ground up. In order to reduce the amount of time it took to do that rewrite, we copied a lot of the existing forms and styles including the feedback forms. At that time, there were two: one for desktop and one for mobile. In order to avoid a translation round, we kept all the original strings of the two forms. The word "Firefox" was hardcoded in the strings, but that was fine since at the time Input only collected feedback for Firefox.

In 2013, in order to reduce complexity on the site because there's only one developer (me), I merged the desktop and mobile forms into one form. In order to avoid a translation round, I continued to keep the original strings. The wording became awkward and the flow through the form wasn't very smooth. Further, the form wasn't responsive at all, so it worked ok on desktop machines but poorly at other viewport sizes.

2014 rolled around and it was clear Input was going to need to branch out into capturing feedback for multiple products---some of which were not Firefox. The form made this difficult.

Related, the smoketest framework I wrote in 2014 struggled with testing the form accurately. I spent some time tweaking it, but a simpler form would make smoketesting a lot easier and less flakey.

Thus over the course of 3 years, we had accumulated the following problems:

  1. The flow through the form felt awkward, instructions weren't clear and information about what data would be public and what data would be private wasn't clear.
  2. Strings had "Firefox" hardcoded and wouldn't support multiple products.
  3. The form wasn't responsive and looked/behaved poorly in a variety of situations.
  4. The form never worked in right-to-left languages and possibly had other accessibility issues.
  5. The architecture didn't let us experiment with the form---tweaking the wording, switching to a more granular gradient of sentiment, capturing other data, etc.

Further, we were seeing many instances of people putting contact information in the description field and there was a significant amount of dropoff.

I had accrued the following theories:

  1. Since the email address is on the third card, users put their email address in the description field because they didn't know they could leave contact information later.
  2. Having two cards instead of three would reduce drop-off and the number of unfinished forms.
  3. Having simpler instruction text would reduce the amount of drop-off.

Anyhow, it was due for an overhaul.

So what's changed?

I've been working on the overhaul for most of 2014, but did the bulk of the work in October and November. It has the following changes:

  1. The new form is shorter and clearer text-wise and design-wise.
  2. It consists of two cards: one for capturing sentiment and one for capturing details about that sentiment.
  3. It clearly delineates data that will be public from data that will be kept private.
  4. It works in both LTR and RTL languages. (If it doesn't, please open a bug.)
  5. It fixes some accessibility issues. (If you find any, please open a bug.)
  6. It uses responsive design, mobile first. Thus it was designed for mobile devices and then scaled to desktop-sized viewports.
  7. It's smaller in kb size and requires fewer HTTP requests.
  8. It's got a better architecture for future development.
  9. It doesn't have "Firefox" hardcoded anymore.
  10. It's simpler so the smoketests work reliably now.
The old Input feedback form.

The new Input feedback form.

Note: Showing before and after isn't particularly exciting since this is only the first card of the form in both cases.

Going forward

The old and new forms were instrumented in various ways, so we'll be able to analyze differences between the two. Particularly, we'll be able to see if the new form performs worse.

Further, I'll be checking the data to see if my theories hold true, especially the one about why people put contact data in the description field.

There are a few changes in the queue that we want to make over the course of the next 6 months. Now that the new form has landed, we can start working on those.

Even if there are problems with the new form, we're in a much better position to fix them than we were before. Progress has been made!

Take a moment---try out the form and tell us YOUR feedback

Have you ever submitted feedback? Have you ever told Mozilla what you like and don't like about Firefox?

Take a moment and fill out the feedback form and tell us how you feel about Firefox.

Thanks, etc

I've been doing web development since 1997 or so. I did a lot of frontend work back then, but I haven't done anything serious frontend-wise in the last 5 years. Thus this was a big project for me.

I had a lot of help: Ricky, Mike and Rehan from the SUMO Engineering team were invaluable reviewing code, helping me fix issues and giving me a huge corpus of examples to learn from; Matt, Gregg, Tyler, Ilana, Robert and Cheng from the User Advocacy team who spent a lot of time smoothing out the rough edges of the new form so it captures the data we need; Schalk who wrote the product picker which I later tweaked; Matej who spent time proof-reading the strings to make sure they were consistent and felt good; the QA team which wrote the code that I copied and absorbed into the current Input smoketests; and the people who translated the user interface strings (and found a bunch of issues) making it possible for people to see this form in their language.


Brian R. Bondy: SQL on Khan Academy enabled by SQLite, sqljs, asm.js and Emscripten

Mozilla planet - wo, 26/11/2014 - 16:17

Originally the computer programming section at Khan Academy focused only on learning JavaScript using ProcessingJS. That remains our biggest environment and has lots of plans for further growth, but we recently generalized and abstracted the whole framework to allow for new environments.

The first environment we added was HTML/CSS which was announced here. You can try it out here. We also have a lot of content for learning how to make webpages already created.

SQL on Khan Academy

We recently also experimented with the ability to teach SQL on Khan Academy. This wasn't a near-term priority for us, so we used our hack week as an opportunity to bring an SQL environment to Khan Academy.

You can try out the SQL environment here.


To implement the environment, one would first think of WebSQL, but two major browser vendors (Mozilla and Microsoft) have no plans to implement it, and the W3C stopped work on the specification at the end of 2010.

Our implementation of SQL is based on SQLite, which is compiled down to asm.js by Emscripten and packaged as sql.js.

All of the technologies I just mentioned, other than SQLite (which is sponsored by Mozilla), are Mozilla-based projects, thanks in large part to Alon Zakai.

The environment

The environment looks like this: the entire code for creating, inserting, updating, and querying a database lives in a single editor. Behind the scenes, we re-create the entire state of the database and the result sets on each code edit. Things run smoothly enough in the browser that you don't notice.

Unlike many online SQL tutorials, this environment is entirely client side. It has no limitations on what you can do, and if we wanted, we could even let you export the SQL databases you create.

One of the other main highlights is that you can modify the inserts in the editor and see the results in real time, without having to run the code. This can lead to some cool insights into how changing data affects aggregate queries.
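Khan Academy's environment implements this with sql.js in the browser, but the run-everything-on-each-edit model itself is easy to sketch. The following is a rough, hypothetical illustration (not Khan Academy's actual code) using Python's built-in sqlite3 module; the function name, the naive statement splitting, and the sample table are all made up for the sketch.

```python
import sqlite3

def run_editor_contents(script):
    """Rebuild the database from scratch and collect every result set.

    Mirrors the environment's model: on each code edit, a fresh
    in-memory database is created and the whole script is replayed,
    so the displayed results always reflect the current editor text.
    """
    db = sqlite3.connect(":memory:")  # fresh state on every run
    cursor = db.cursor()
    results = []
    # Naive split on ";" is enough for a sketch; a real editor
    # would use a proper SQL tokenizer.
    for statement in script.split(";"):
        statement = statement.strip()
        if not statement:
            continue
        cursor.execute(statement)
        if cursor.description:  # a SELECT: capture its result set
            columns = [col[0] for col in cursor.description]
            results.append((columns, cursor.fetchall()))
    db.close()
    return results

# Editing an INSERT and re-running immediately changes the aggregate:
script = """
CREATE TABLE books (title TEXT, pages INTEGER);
INSERT INTO books VALUES ('Dune', 412);
INSERT INTO books VALUES ('Emma', 474);
SELECT COUNT(*), AVG(pages) FROM books;
"""
print(run_editor_contents(script))  # the SELECT's rows: [(2, 443.0)]
```

Because the state is rebuilt from the full script every time, tweaking one INSERT transparently updates every dependent result set, which is exactly the live-editing behavior described above. (Export would also be straightforward in this model; sql.js, for instance, can serialize the whole database to bytes.)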

Hour of Code

Unlike the HTML/CSS work, we don't have a huge number of tutorials created yet, but we do have some videos, coding talk-throughs, challenges, and a project, set up in a single tutorial which we'll be using for one of our Hour of Code offerings: Hour of Databases.


Robert Nyman: Leaving Mozilla

Mozilla planet - wo, 26/11/2014 - 14:42

This is a really hard blog post to write, but I need to share this with you: I’m leaving Mozilla.

It started in 2009

At the end of 2008 I had started learning to code extensions for Firefox, and in March 2009 I went to Berlin to give my first international presentation at an add-ons workshop.

It was amazing! The rush of being on stage, teaching people, learning from them; helping, discussing and having a great time! I really loved it and at that time I felt like I had found home, what I was supposed to work with.

The following years I was part of the Mozilla community, speaking at more workshops and attending MozCamps. In 2011, a position came up as a Technical Evangelist and I joined Mozilla full time.

What has happened since

Since I started I’ve gotten to meet numerous fantastic and inspiring people, both employees and people in the great Mozilla community. I’ve traveled extensively and became the most well-travelled speaker on Lanyrd – the count is now up to 32 countries.

I’ve also written more in detail about Why I travel and about working with developer relations and Why I Do What I Do. There’s also lots more in the Travel category.


I’ve worked on a lot of things at Mozilla over the years, and a couple of the things I’m really proud of are having run the Mozilla Hacks blog over the last two years and having published 350 quality posts in that time! I also took the initiative to launch feedback channels around the Firefox Developer Tools and Open Web Apps, and we’ve gotten great feedback from developers there.

Moving on

Alas, it’s time to move on. I’ve always preached to developers that they should strive for more, whether that’s a new position in their current company or a new job elsewhere, to make sure they keep evolving and don’t stagnate. And I feel I really have to follow my own advice in this regard.

I’ve gotten to learn and experience a lot of things at Mozilla and for that I’m eternally grateful.

Mozilla is going through a number of challenges at the moment, and to be honest, it’s my belief that the upper management need to acknowledge and address these.

I believe Mozilla represents a great cause, and I hope they can fix and tend to what they’re facing and come out stronger. I believe the Open Web and people need Mozilla, and I wish it, and all the great people I know there, all the best.

What happens next?

I will be starting a new job, and I’ll tell you about it tomorrow, Thursday. For now, I’ll just let this sink in and then I’ll talk more about it.

If you have any questions or thoughts, please let me know here in the comments or e-mail me at robert [at] robertnyman [dot] com.

I’m always here for you. Thanks for reading.
