Mozilla Nederland LogoDe Nederlandse
Mozilla gemeenschap

Mozilla Reps Community: Rep of the month: November 2014

Mozilla planet - zo, 07/12/2014 - 13:31

Again, it’s that time of the month where we show gratitude to the best of the Reps program.

Flore Allemandou is one of the oldest Mozillians around, and has been a Mozilla Rep for a long time too. In recent months she took the lead of the WoMoz project, coordinating our activities in this area. She also organized our presence at the AdaCamp editions in Berlin and Bangalore.

At MozFest she was one of the most active Reps around, helping out by leading sessions on community building and diversity, and even flashing around 80 Flame devices. As part of the Mobilizers in France she organized several events in Lyon and Paris, helping with our Firefox OS launch there.

As if this wasn’t enough, she organized Mozilla’s presence at the Open World Forum and Code of War. A Locasprint was another of the last months’ events, together with some great photos for the Firefox 10 celebration.

Thank you Flore for your amazing work!

Don’t forget to congratulate her on Discourse!

Categorieën: Mozilla-nl planet

Adrian Gaudebert: Socorro: the Super Search Fields guide

Mozilla planet - zo, 07/12/2014 - 00:58

Socorro has a master list of fields, called the Super Search Fields, that controls several parts of the application: Super Search and its derivatives (Signature report, Your crash reports...), available columns in report/list/, and exposed fields in the public API. Fields contained in that list are known to the application, and have a set of attributes that define the behavior of the app regarding each of those fields. An explanation of those attributes can be found in our documentation.

In this guide, I will show you how to use the administration tool we built to manage that list.

You need to be a superuser to be able to use this administration tool.

Understanding the effects of this list

It is important to fully understand the effects of adding, removing or editing a field in this Super Search Fields tool.

A field needs to have a unique Name, and a unique combination of Namespace and Name in database. Those are the only mandatory values for a field. Thus, if a field does not define any other attribute and keeps the default values, it won't have any impact on the application -- it will merely be "known", that's all.

Now, here are the important attributes and their effects:

  • Is exposed - if this value is checked, the field will be accessible in Super Search as a filter.
  • Is returned - if this value is checked, the field will be accessible in Super Search as a facet / aggregation. It will also be available as a column in Super Search and report/list/, and it will be returned in the public API.
  • Permissions needed - permissions listed in this attribute will be required for a user to be able to use or see this field.
  • Storage mapping - this value will be used when creating the mapping to use in Elasticsearch. It changes the way the field is stored. You can use this value to define some special rules for a field, for example if it needs a specific analyzer. This is a sensitive attribute: if you don't know what to do with it, leave it empty and Elasticsearch will guess the best mapping for that field.
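To make the attributes above concrete, here is a hypothetical sketch of what one field entry could look like. The attribute names mirror the bullet points above; the actual schema Socorro stores may differ, and the field and namespace names are just examples:

```python
# Hypothetical Super Search field entry -- a sketch, not Socorro's actual schema.
dom_ipc_enabled_field = {
    "name": "dom_ipc_enabled",            # unique name, displayed in Super Search
    "namespace": "processed_crash",       # assumed namespace for this example
    "in_database_name": "DOMIPCEnabled",  # unique together with the namespace
    "is_exposed": True,        # available as a Super Search filter
    "is_returned": True,       # available as facet, column, and in the public API
    "permissions_needed": [],  # empty list: visible to everyone
    "storage_mapping": None,   # leave empty; Elasticsearch guesses the mapping
}
```

With both `is_exposed` and `is_returned` unchecked and no permissions or mapping, such a field would be "known" but would have no visible effect anywhere in the app.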

As always, it is a good rule of thumb to apply changes to the dev/staging environments before doing so in production. And to my Mozilla colleagues: this is mandatory! Please always apply any change to stage first, verify it works as you want (using Super Search, for example), then apply it to production and verify there.

Getting there

To get to the Super Search Fields admin tool, you first need to be logged in as a superuser. Once that's done, you will see a link to the administration in the bottom-right corner of the page.

Fig 1 - Admin link

Clicking that link will get you to the admin home page, where you will find a link to the Super Search Fields page.

Fig 2 - Admin home page

The Super Search Fields page lists all the currently known fields with their attributes.

Fig 3 - Super Search Fields page

Adding a new field

On the Super Search Fields page, click the Create a new field link in the top-right corner. That leads you to a form.

Fig 4 - New field button

Fill all the inputs with the values you need. Note that Name is a unique identifier for this field, but also the name that will be displayed in Super Search. It doesn't have to be the same as Name in database. The current convention is to use the database name, but in lower case and with underscores. So, for example, if your field is named DOMIPCEnabled in the database, we would make the Name something like dom_ipc_enabled.
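As a rough illustration of that convention (this helper is not part of Socorro; it is just a sketch), the mechanical part of the conversion could look like this. Note that runs of consecutive acronyms like DOMIPC cannot be split automatically and still need a manual pass:

```python
import re

def to_field_name(db_name: str) -> str:
    """Sketch: convert a database-style CamelCase name to the lower-case,
    underscore-separated convention used for Super Search field names."""
    # Insert an underscore before each capitalized word, then lower-case.
    s = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', db_name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s).lower()

to_field_name("ProcessType")    # "process_type"
to_field_name("DOMIPCEnabled")  # "domipc_enabled" -- adjust by hand to dom_ipc_enabled
```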

Use the documentation about our attributes to understand how to fill that form.

Fig 5 - Example data for the new field form

Clicking the Create button might take some time, especially if you filled the Storage mapping attribute. If you did, the back-end will perform a few checks to verify that this change does not break Elasticsearch indexing. If you get redirected to the Super Search Fields page, the operation was successful. Otherwise, an error will be displayed and you will need to press the Back button of your browser and fix the form data.

Note that the cache is refreshed whenever you make a change to the list, so you can verify your changes right away by looking at the list.

Editing a field

Find the field you want to edit in the Super Search Fields list, and click the edit icon to the right of that field's row. That will lead you to a form much like the New field one, but prefilled with the field's current attribute values. Make the changes you need, and press the Update button. Everything that applies to the New field form applies here as well (mapping checks, cache refreshing, etc.).

Fig 6 - The edit icon

Deleting a field

Find the field you want to delete in the Super Search Fields list, and click the delete icon to the right of that field's row. You will be prompted to confirm your intention. If you are sure about what you're doing, confirm and you will be done.

Fig 7 - The delete icon

The missing fields tool

We have a tool that compares all the fields known to Elasticsearch (meaning that Elasticsearch has received at least one document containing that field) with all the fields in the Super Search Fields list, and shows a diff of the two. It is a good way to check that you did not forget some key fields that you could use in the app.

To access that list, click the See the list of missing fields link just above the Super Search Fields list.

Fig 8 - The missing fields link

The list of missing fields provides a direct link to create the field for each row. It will take you to the New field form with some prefilled values.

Fig 9 - Missing fields page


I think I have covered it all. If not, let me know and I'll adjust this guide. Same goes if you think some things are unclear or poorly explained.

If you find bugs in this Super Search Fields tool, please use Bugzilla to report them. And remember, Socorro is free / "libre" software, so you can also go ahead and fix the bugs yourself! :-)

Categorieën: Mozilla-nl planet

Tarek Ziadé: 5 work week tips

Mozilla planet - za, 06/12/2014 - 08:05

Our Mozilla work week just ended with an amazing evening. We had a private Macklemore concert. Just check out Twitter with the #mozlandia tag and feel the vibe.

When they got on stage, I must admit I did not know who Macklemore was. Yeah, sorry. I live in a Spotify-curated music world and I have no TV.

At some point they played a song that got me thinking: ooohhh yeah that song, ok.


During some conversations, a lot of folks told me that they were overwhelmed by the work week and had a very hard time keeping up with all the events. Some of them were very frustrated and felt completely disconnected.

I went through this a lot in the past, but things improved throughout the years. This blog post collects a few tips.

1. List the folks you want to meet

This one is a given. Before you arrive, make a list of the folks you want to meet and the topics you want to talk about with them.

Make that list short. 10, no more.

2. Do not code

This is the worst thing to do: dive into your laptop and code. It's easy to do and time will fly by once you've started to code. People that don't know you well will be afraid of disturbing you.

Coding is not something to do during your work weeks. If you need a break from the crowd that's the next tip.

3. Listen to your body

A work week is intense for your body. By the end of the week you will look like a zombie and you will not be able to fully enjoy what's happening. If you are coming from far away, the jet lag is going to make the problem worse. If you're a partygoer, that's not going to help either. All the food and drinks are not really helping.

I've seen numerous folks getting really sick on day 3 or 4 because they had intensive days at the beginning of the event. It's hard not to burn out.

Some (young) folks do fine with this. I know I am not one of them. What I did for the Portland work week was skip everything on day 2 starting at 5pm: I ate a soup and went to sleep at 8pm. Skipping all the cookies and beers and goodies gives your body a bit of rest :)

That gave me the energy I needed for day 3.

4. Don't hang with your team all the time

You talk to those folks all the time. Meet other folks, check out other sessions, etc.

This is especially important if your native language is not English. I got trapped many times by this problem: just hanging out with a few French guys.

5. Walk away from meetings

Don't be shy about walking away from meetings that don't bring you any value. Walk out discreetly and politely. You are not in a meeting to read Hacker News on your laptop. You can do that at home.

People won't get offended in the context of a work week - unless this is a vital team meeting or something.

What are your tips?

Categorieën: Mozilla-nl planet

Tarek Ziadé: DNS-Based soft releases

Mozilla planet - za, 06/12/2014 - 06:30

Firefox Hello is this cool WebRTC app we've landed in Firefox to let you video chat with friends. You should try it, it's amazing.

My team was in charge of the server side of this project - which consists of a few APIs that keep track of some session information like the list of the rooms and such things.

The project was not hard to scale since the real work is done in the background by TokBox - who provide all the firewall traversal infrastructure. If you are curious about the reasons we need all those server-side bits for a peer-to-peer technology, this article is great for getting the whole picture.

One thing we wanted to avoid was a huge peak of load on our servers on Firefox release day. While we've done a lot of load testing, there are so many interacting services that it's quite hard to be 100% confident. Potentially going from 0 to millions of users in a single day is... scary? :)

So right now only 10% of our user base sees the Hello button. You can bypass this by tweaking a few prefs, as explained in many places on the web.

This percentage will be gradually increased until our whole user base can use Hello.

How does it work ?

When you start Firefox, a random number is generated. Then Firefox asks our service for another number. If the generated number is less than the number sent by the server, the Hello button is displayed. If it is greater, the button is hidden.
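The check described above can be sketched like this. The function name and the 0-99 range are assumptions for illustration, not the actual Firefox implementation:

```python
import random

def should_show_hello(server_number: int) -> bool:
    """Sketch of the client-side soft-release check: draw a random number
    and show the feature only if it falls below the server-provided value."""
    generated = random.randint(0, 99)  # random number generated at startup
    return generated < server_number

# With this range, a server value of 10 would show the button
# to roughly 10% of clients.
```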

Adam Roach proposed to set up an HTTP endpoint on our server to send back the number, and after a team meeting I suggested using a DNS lookup instead.

The reason I wanted to use a DNS server was to rely on a system that's highly available and freaking fast. On the server side, all we had to do was add a new DNS entry and let Firefox do a DNS lookup - yeah, you can do DNS lookups in JavaScript as long as you are within Gecko.

Due to a DNS limitation we had to move from a TXT record to an A record - which returns an IP address. But converting an IP address to an integer value is not a problem, so that worked out.
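The conversion itself is straightforward, as this sketch shows. The exact address-to-value encoding the Hello service uses is not described in the post; this is just the obvious way to read a dotted-quad IPv4 address as a 32-bit integer:

```python
import socket
import struct

def ip_to_int(ip: str) -> int:
    """Turn a dotted-quad IPv4 address (as returned by an A record)
    into its 32-bit big-endian integer value."""
    return struct.unpack("!I", socket.inet_aton(ip))[0]

ip_to_int("0.0.0.42")  # 42
```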

See for all the details.

Generalizing the idea

I think using DNS as a distributed database for simple values like this is an awesome idea. I am happy I thought of this one :)

Based on the same technique, you can also set up some A/B testing based on the DNS server's ability to send back a different value depending on things like the user's location, for example.

For example, we could activate a feature in Firefox only for people in Connecticut, or France or Europe.

We had a work week in Portland, and we started to brainstorm on what such a service could look like, and whether it would be practical from a client-side point of view.

The general feedback I had so far on this is: Hell yeah we want this!

To be continued...

Categorieën: Mozilla-nl planet

David Burns: Bugsy 0.4.0 - More search!

Mozilla planet - za, 06/12/2014 - 01:05

I have just released the latest version of Bugsy. It allows you to search bugs via change history fields and within a certain timeframe, so you can find things like bugs created within the last week, as shown below.

I have updated the documentation to get you started.

>>> bugs = bugzilla.search_for \
...                .keywords('intermittent-failure') \
...                .change_history_fields(["[Bug creation]"]) \
...                .timeframe("2014-12-01", "2014-12-05") \
...                .search()

You can see the Changelog for more details.

Please raise issues on GitHub.

Categorieën: Mozilla-nl planet

Mike Hommey: Using git to interact with mercurial repositories

Mozilla planet - vr, 05/12/2014 - 20:45

I was planning to publish this later, but after talking about this project to a few people yesterday and seeing the amount of excitement in response, I took some time this morning to tie a few loose ends and publish this now. Mozillians, here comes the git revolution.

Let me start with a bit of history. I am an early git user. I’ve been using git almost since its first release. I like it. A lot. I’ve contributed dozens of patches to git.

I started using mercurial when I got commit access to Mozilla repositories, much later. I don’t enjoy using mercurial much.

There are many tools to make git talk to mercurial. Most are called git-remote-hg because they use the git remote helpers infrastructure. All of them rely on having a local mercurial clone. When dealing with repositories like mozilla-central, it means storing more than 1.5GB of data just to talk to mercurial, on top of the git database.

So a few years ago, I started to toy with the idea of making git talk to mercurial directly. I got as far as being able to do a full clone of mozilla-central back then, in a reasonable amount of time. But I left it at that because I needed to figure out how to efficiently store all the metadata required to handle incremental updates/pulling, and didn't have enough incentive to go forward: working with mercurial was not painful enough.

Fast forward to the beginning of this year. The mozilla-central repository is now much bigger than it used to be, and mercurial handles it much less smoothly than it used to when Mozilla switched to using it. That was enough to get me started again, but not enough to dedicate enough time to it.

Fast forward to a few weeks ago. Gregory Szorc asked on dev-platform what kind of workflows people were using with git to work on Mozilla code. And I was really not satisfied with the answers. First, I was wondering why no one was mentioning the existing tools. So I picked one and tried it.

Cloning mozilla-central took 12 hours and left me with a ~10GB .git directory. Running git gc --aggressive for another 10 hours (my settings may have made gc take more time than it would have with the default configuration) brought it down to about 2.6GB, only 700MB of which is actual git data, the remainder being the associated mercurial repository. And as far as I understand it, the tool doesn't really support our use of mercurial repositories, especially try (but I could be wrong; I didn't really look too much).

That was the straw that broke the camel's back. So after a couple of weeks of hacking, I now have something that can clone mozilla-central within 30 minutes on my machine (network transfer excluded). The resulting .git directory is around 1.5GB with the default git config, without running git gc. If you tweak the compression level in your git config, cloning takes a bit longer, and the repo takes about 1.1GB. You can subsequently pull from mozilla-central, as well as pull from other branches without having to clone them from scratch. Push support is not there yet because it's an early prototype, but I should be able to get that to work in the next couple of weeks.

At this point, you may be wondering how you can use that thing. Here it comes:

$ git clone
$ export PATH=$PATH:$(pwd)/git-remote-hg

Note it requires having the mercurial code available to python, because git-remote-hg uses the mercurial code to talk the mercurial wire protocol. Usually, having mercurial installed is enough.

You can now clone a mercurial repository:

$ git clone hg::

If, like me, you had a local mercurial clone, you can do the following instead:

$ git clone hg::/path/to/mozilla-central-clone
$ git remote set-url origin hg::

You can then use git fetch/pull like with git repositories:

$ git pull

Now, you can add other repositories:

$ git remote add inbound hg::
$ git remote update inbound

There are a few caveats, like the fact that it currently creates new remote branches essentially any time you pull something. But it shouldn’t disrupt anything.

It should be noted that while the contents are identical to the gecko-dev git repositories (the git tree object sha1s are identical, I checked), the commit SHA1s are different. For two reasons: gecko-dev also contains the CVS history, and hg-git, which is used to fill it, adds some mercurial metadata to commit messages that git-remote-hg doesn't add.

It is, however, possible to graft the CVS history from gecko-dev to a clone created with git-remote-hg. Assuming you have a remote for gecko-dev and fetched from it, you can do the following:

$ echo eabda6aae98d14c71d7e7b95a66896868ff9500b 3ec464b55782fb94dbbb9b5784aac141f3e3ac01 >> .git/info/grafts

Last note: please read the README file when you update your git clone of the git-remote-hg repository. As the prototype evolves, there might be things that you need to do to your existing clones, and it will be written there.

Categorieën: Mozilla-nl planet

Mike Hommey: Using C++ templates to prevent some classes of buffer overflows

Mozilla planet - vr, 05/12/2014 - 18:45

I recently found a small buffer overflow in Firefox’s SOCKS support, and came up with a nice-ish way to make it a C++ compilation error when it may happen with some template magic.

A simplified form of the problem looks like this:

class nsSOCKSSocketInfo {
public:
    nsSOCKSSocketInfo()
        : mData(new uint8_t[BUFFER_SIZE])
        , mDataLength(0)
    {}

    ~nsSOCKSSocketInfo()
    {
        delete[] mData;
    }

    void WriteUint8(uint8_t aValue)
    {
        mData[mDataLength++] = aValue;
    }

    void WriteV5AuthRequest()
    {
        mDataLength = 0;
        WriteUint8(0x05);
        WriteUint8(0x01);
        WriteUint8(0x00);
    }

private:
    uint8_t* mData;
    uint32_t mDataLength;

    static const size_t BUFFER_SIZE = 2;
};

Here, the problem is more or less obvious: the third WriteUint8() call in WriteV5AuthRequest() will write beyond the allocated buffer size. (The real buffer size was much larger than that, and it was a different method overflowing, but you get the point)

While growing the buffer size fixes the overflow, that doesn’t do much to prevent a similar overflow from happening again if the code changes. That got me thinking that there has to be a way to do some compile-time checking of this. The resulting solution, at its core, looks like this:

template <size_t Size>
class Buffer {
public:
    Buffer()
        : mBuf(nullptr)
        , mLength(0)
    {}

    Buffer(uint8_t* aBuf, size_t aLength = 0)
        : mBuf(aBuf)
        , mLength(aLength)
    {}

    Buffer<Size - 1> WriteUint8(uint8_t aValue)
    {
        static_assert(Size >= 1, "Cannot write that much");
        *mBuf = aValue;
        Buffer<Size - 1> result(mBuf + 1, mLength + 1);
        mBuf = nullptr;
        mLength = 0;
        return result;
    }

    size_t Written() { return mLength; }

private:
    uint8_t* mBuf;
    size_t mLength;
};

Then replacing WriteV5AuthRequest() with the following:

void WriteV5AuthRequest()
{
    mDataLength = Buffer<BUFFER_SIZE>(mData)
        .WriteUint8(0x05)
        .WriteUint8(0x01)
        .WriteUint8(0x00)
        .Written();
}

So, how does this work? The Buffer class is templated by size. The first thing we do is to create an instance for the complete size of the buffer:

Buffer<BUFFER_SIZE>(mData)
Then call the WriteUint8 method on that instance, to write the first byte:

Buffer<BUFFER_SIZE>(mData).WriteUint8(0x05)
The result of that call is a Buffer<BUFFER_SIZE - 1> (in our case, Buffer<1>) instance pointing to &mData[1] and recording that 1 byte has been written.
Then we call the WriteUint8 method on that result, to write the second byte:

Buffer<BUFFER_SIZE>(mData).WriteUint8(0x05).WriteUint8(0x01)
The result of that call is a Buffer<BUFFER_SIZE - 2> (in our case, Buffer<0>) instance pointing to &mData[2] and recording that 2 bytes have been written so far.
Then we call the WriteUint8 method on that new result, to write the third byte:

Buffer<BUFFER_SIZE>(mData).WriteUint8(0x05).WriteUint8(0x01).WriteUint8(0x00)
But this time, the Size template parameter being 0, it doesn’t match the Size >= 1 static assertion, and the build fails.
If we modify BUFFER_SIZE to 3, then the instance we run that last WriteUint8 call on is a Buffer<1> and we don’t hit the static assertion.

Interestingly, this also makes the compiler emit more efficient code than the original version.

Check the full patch for more about the complete solution.

Categorieën: Mozilla-nl planet

Adam Lofting: Fundraising testing update

Mozilla planet - vr, 05/12/2014 - 17:01

I wrote a post over on about our latest round of optimization work for our End of Year Fundraising campaign.

We’ve been sprinting on this during the Mozilla all-hands work week in Portland, and it has been a lot of fun working face-to-face with the awesome team making this happen.

You can follow along with the campaign, and see how we’re doing at

And of course, we’d be over the moon if you wanted to make a donation.


These amazing people are working hard to build the web the world needs.

Categorieën: Mozilla-nl planet