
Hacks.Mozilla.Org: @media, MathML, and Django 1.11: MDN Changelog for May 2018

Mozilla planet - vr, 08/06/2018 - 16:50

Editor’s note: A changelog is “a log or record of all notable changes made to a project. [It] usually includes records of changes such as bug fixes, new features, etc.” Publishing a changelog is kind of a tradition in open source, and a long-time practice on the web. We thought readers of Hacks and folks who use and contribute to MDN Web Docs would be interested in learning more about the work of the MDN engineering team, and the impact they have in a given month. We’ll also introduce code contribution opportunities, interesting projects, and new ways to participate.

Done in May

Here’s what happened in May to the code, data, and tools that support MDN Web Docs:

We’ll continue this work in June.

Migrated CSS @media and MathML compat data

The browser compatibility migration continued, jumping from 72% to 80% complete. Daniel D. Beck completed the CSS @media rule features, by converting data (PR 2087 and a half-dozen others) and by reviewing PRs like 1977 from Nathan Cook. Mark Boas finished up MathML, submitting 26 pull requests.

The first few entries of the CSS @media browser compatibility table

The @media table has 32 features. It’s a big one.

There are over 8500 features in the BCD dataset, and half of the remaining work is already submitted as pull requests. We’re making steady progress toward completing the migration effort.
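To give a feel for how those features are counted: BCD stores its data as a nested JSON tree, where each feature object carries a "__compat" key with per-browser support data under it. The toy walker below is an assumption about that layout made for illustration, not the project's own tooling.

```python
# Toy walker over a BCD-style tree. Assumption (not BCD's actual
# tooling): a "feature" is any object carrying a "__compat" key.
def count_features(node):
    if not isinstance(node, dict):
        return 0
    total = 1 if "__compat" in node else 0
    for key, value in node.items():
        if key != "__compat":  # don't descend into support data
            total += count_features(value)
    return total

# Hypothetical subset of the @media data, two features deep
media_subset = {
    "css": {
        "at-rules": {
            "media": {
                "__compat": {"support": {"firefox": {"version_added": "3.5"}}},
                "width": {
                    "__compat": {"support": {"firefox": {"version_added": "3.5"}}}
                },
            }
        }
    }
}
```

Running the walker over the full dataset is how a "percent migrated" figure like the one above could be derived.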

Prepared for Django 1.11

MDN runs on Django 1.8, the second long-term support (LTS) release. In May, we updated the code so that MDN’s test suite and core processes work on Django 1.8, 1.9, 1.10, and 1.11. We didn’t make it by the end of May, but by the time this report is published, we’ll be on Django 1.11 in production.

Many Mozilla web projects use Django, and most stick to the LTS releases. In 2015, the MDN team upgraded from Django 1.4 LTS to 1.7 in PR 3073, which weighed in at 223 commits, 12,000 files, 1.2 million lines of code, and at least six months of effort. Django 1.8 followed six months later in PR 3525, a lighter change of 30 commits, 2,600 files, and 250,000 lines of code. After this effort, the team was happy to stick with 1.8 for a while.

Most of the upgrade pain was specific to Kuma. The massive file counts were due to importing libraries as git submodules, a common practice at the time for security, reliability, and customization. It was also harder to update libraries, which meant updates were often delayed until necessary. The 1.7 upgrade included an update from Python 2.6 to 2.7, and it is challenging to support dual-Python installs on a single web server.

The MDN team worked within some of these constraints, and tackled others. They re-implemented the development VM setup scripts in Ansible, but kept Puppet for production. They updated submodules for the 1.7 effort, and switched to hashed requirements files for 1.8. A lot of codebase and process improvements were gained during the update effort. Other improvements, such as Dockerized environments, were added later to avoid issues in the next big update.

Django 1.4 wasn’t originally planned as an LTS release, but instead was scheduled for retirement after the 1.6 release. The end of support was repeatedly pushed further into the future, finally extending to six months after 1.8 was published.

On the other hand, the Django team knew that 1.8 was an LTS release when they shipped it, and would get security updates for three years. Many of the Django team’s decisions in 2015 made MDN’s update easier. Django 1.11 retained Python 2.7 support, avoiding the pain of a simultaneous Python upgrade. Django now has a predictable release process, which is great for website and library maintainers.

The Django supported versions table shows which releases are supported, and for how long. There's a new release every 8 months; regular releases are supported for 16 months, and LTS releases for 3 years.

Django’s release roadmap makes scheduling update efforts easier.

Django also maintains a guide on updating Django, and the suggested process worked well for us.

The first step was to upgrade third-party libraries, which we’ve been working on for the past two years. Our goal was to get a library update into most production pushes, and we updated dozens of libraries while shipping unrelated features. Some libraries, such as django-pipeline, didn’t support Django 1.11, so we had to update them ourselves. In other cases, like django-tidings, we also had to take over project maintenance for a while.

The next step was to turn on Django’s deprecation warnings, to find the known update issues. This highlighted a few more libraries to update, and also the big changes needed in our code. One problem area was our language-prefixed URLs. MDN uses Mozilla-standard language codes, such as /en-US/, where the region is in capital letters. Django ships a locale-prefixed URL framework, but uses lowercase language codes, such as /en-us/. Mozilla’s custom framework broke in 1.9, and we tried a few approaches before copying Django’s framework and making adjustments for Mozilla-style language codes (PR 4790).
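To see why the lowercase assumption bites, here is a hypothetical, much-simplified sketch of Mozilla-style prefix matching. The function and regex names are mine, the real fix lives in PR 4790, and real Mozilla locale codes (sr-Latn, for example) need a richer pattern than this.

```python
import re

# Mozilla-style locale prefix: lowercase language, uppercase region,
# e.g. /en-US/. Django's stock i18n patterns expect /en-us/ instead.
MOZILLA_LOCALE_RE = re.compile(r"^/(?P<locale>[a-z]{2,3}(?:-[A-Z]{2})?)(?:/|$)")

def split_locale_prefix(path):
    """Split a Mozilla-style prefixed path into (locale, rest)."""
    m = MOZILLA_LOCALE_RE.match(path)
    if m is None:
        # A lowercase-region path like /en-us/ lands here
        return None, path
    locale = m.group("locale")
    return locale, path[1 + len(locale):] or "/"
```

The point of the sketch is only the case-sensitivity clash: the same path with a lowercase region fails to match, which is roughly what broke under Django 1.9.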

Next we ran our automated tests, and started fixing the many failures. Our goal was to support multiple Django versions in a single codebase, and we added optional builds to TravisCI to run tests on 1.9, 1.10, and 1.11 (PR 4806). We split the fixes into over 50 pull requests, and tested small batches of changes by deploying to production. In some cases, the same code works across all four versions. In other cases, we switched between code paths based on the Django version. Updates for Django 1.9 were about 90% of the upgrade effort.
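A typical version switch looks like the sketch below. It is illustrative, not Kuma's actual code: the version tuple is a parameter here so the example stands alone, whereas real code would check django.VERSION. The MIDDLEWARE_CLASSES to MIDDLEWARE rename is a real Django 1.10 change.

```python
# Version-gated code path: one codebase, multiple Django versions.
def middleware_setting_name(django_version):
    """Pick the middleware setting name for a (major, minor) tuple."""
    if django_version >= (1, 10):
        return "MIDDLEWARE"  # new-style middleware, Django 1.10+
    return "MIDDLEWARE_CLASSES"  # old-style, Django 1.9 and earlier
```

Branches like this can be deleted once the oldest supported version moves past the change.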

This incremental approach is safer than a massive PR, and avoids issues with keeping a long-running branch up to date. It does make it harder to estimate the scope of a change. Looking at the commits that mention the update bugs, we changed 2500 to 3000 lines, which represents 10% of the project code.

Started tracking work in ZenHub

For the past two years, the MDN team had been using Taiga to plan and track our work in 3-week sprints. The team was unhappy with performance issues, and had been experimenting with GitHub’s project management features. Janet Swisher and Eric Shepherd led an effort to explore alternatives. In May, we started using ZenHub, which provides a sprint planning layer on top of GitHub issues and milestones.

The ZenHub board collects task cards into columns that move to the right as they approach completion.

See how the ‘sausage’ gets made with our ZenHub board.

If you have the ZenHub plugin installed, you can view the sprint board in the new mdn/sprints repository, which collects tasks across our 10 active repositories. It adds additional data to GitHub issues and pull requests, linking them into project planning. If you don’t have the plugin, you can view the sprint board on the ZenHub website. If you don’t want to sign in to ZenHub with your GitHub account, you can view the milestones in the individual projects, like the Sprint 4 milestone in the Interactive Examples project.

Continued HTML Interactive Examples

We’re still working through the technical challenges of shipping HTML interactive examples. One of the blocking issues was restricting style changes to the demo output on the right, but not the example code on the left. This is a use case for Shadow DOM, which can restrict styles to a subset of elements. Schalk Neethling shipped a solution based on Shadow DOM in PR 873 and PR 927, and it works natively in Chrome. Not all current browsers support Shadow DOM, so Schalk added a shim in PR 894. There are some other issues that we’ll continue to fix before including tabbed HTML examples on MDN.

Meanwhile, contributors from the global MDN community have written some interesting demos for <track> (Leonard Lee, PR 940), <map> (Adilson Sandoval, PR 931), and <blockquote> (Florian Scholz, PR 906). We’re excited to get these and the other examples on MDN.

On the left, HTML defines a <map> of polygon targets, and on the right the resulting image with link targets is displayed.

A demo of the polygon targets for <map>.

Shipped Tweaks and Fixes

There were 397 PRs merged in May:

60 of these were from first-time contributors:

Other significant PRs:

Planned for June

We plan to ship Django 1.11 to production in early June, even before this post is published. We’ll spend some additional time in June fixing any issues that MDN’s millions of visitors discover, and upgrading the code to take advantage of some of the new features.

We plan to keep working on the compatibility data migration, HTML interactive examples, and other in-progress projects. June is the end of the second quarter, and is a deadline for many quarterly goals.

We are gathering in San Francisco for Mozilla’s All-Hands. It will be a chance for the remote team to be together in the same room, to celebrate the year’s accomplishments to date, and to make plans for the rest of the year.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Standup report: June 8th, 2018

Mozilla planet - vr, 08/06/2018 - 14:00
What is Standup?

Standup is a system for capturing standup-style posts from individuals, making it easier to see what's going on for teams and projects. It has an associated IRC bot, standups, for posting messages from IRC.

Project report

Over the last six months, we've done:

  • monthly library updates
  • a revamp of the static assets management infrastructure
  • service maintenance
  • a fix to make the textarea resizable (Thanks, Arai!)

The monthly library updates have helped with reducing technical debt. That takes a few hours each month to work through.

Paul redid how Standup does static assets. We no longer use django-pipeline, but instead use gulp. It works muuuuuuch better and makes it possible to upgrade to Django 2.0 soon. That was a ton of work over the course of a few days for both of us.

We've been keeping the Standup service running. That includes stage and production websites as well as stage and production IRC bots. That also includes helping users who are stuck--usually with accounts management. That's been a handful of hours.

Arai fixed the textareas so they're resizable. That helps a ton! I'd love to get more help with UI/UX fixes.

Some GitHub stats:

GitHub
======

mozilla/standup: 15 prs

Committers:
   pyup-bot :  6  (  +588,  -541,  20 files)
   willkg   :  5  (  +383,  -169,  27 files)
   pmac     :  2  ( +4179,  -223,  58 files)
   arai-a   :  1  (    +2,    -1,   1 files)
   g-k      :  1  (    +3,    -3,   1 files)

   Total    :     ( +5155,  -937,  89 files)

Most changed files:
   requirements.txt (11)
   requirements-dev.txt (7)
   standup/ (5)
   docker-compose.yml (4)
   standup/status/jinja2/base.html (3)
   standup/status/ (3)
   standup/status/tests/ (3)
   standup/status/ (3)
   standup/status/ (3)
   standup/ (3)

Age stats:
   Youngest PR    :  0.0d: 466: Add site-wide messaging
   Average PR age :  2.3d
   Median PR age  :  0.0d
   Oldest PR      : 10.0d: 459: Scheduled monthly dependency update for May

All repositories:
   Total merged PRs: 15

Contributors
============
   arai-a
   g-k
   pmac
   pyup-bot
   willkg

That's it for the last six months!

Switching to swag-driven development

Do you use Standup?

Did you use Standup, but the glacial pace of fixing issues was too much so you switched to something else?

Do you want to use Standup?

We think there's still some value in having Standup around and there are still people using it. There's still some technical debt to fix that makes working on it harder than it should be. We've been working through that glacially.

As a project, we have the following problems:

  1. The bulk of the work is being done by Paul and Will.
  2. We don't have time to work on Standup.
  3. There isn't anyone else contributing.

Why aren't users contributing? Probably a lot of reasons. Maybe everyone has their own reason! Have I spent a lot of time to look into this? No, because I don't have a lot of time to work on Standup.

Instead, we're just going to make some changes and see whether that helps. So we're doing the following:

  1. Will promises to send out Standup project reports every 6 months before the All Hands and in doing this raise some awareness of what's going on and thank people who contributed.
  2. We're fixing the Standup site to be clearer on who's doing work and how things get fixed so it's more likely your ideas come to fruition rather than get stale.
  3. We're switching Standup to swag-driven development!

What's that you say? What's swag-driven development?

I mulled over the idea in my post on swag-driven development.

It's a couple of things, but mainly an explicit statement that people work on Standup in our spare time at the cost of not spending that time on other things. While we don't feel entitled to feeling appreciated, it would be nice to feel appreciated sometimes. Not feeling appreciated makes me wonder whether I should spend the time elsewhere. (And maybe that's the case--I have no idea.) Maybe other people would be more interested in spending their spare time on Standup if they knew there were swag incentives?

So what does this mean?

It means that we're encouraging swag donations!

  • If your team has stickers at the All Hands and you use Standup, find Paul and Will and other Standup contributors and give them one!
  • If there are features/bugs you want fixed and they've been sitting in the queue forever, maybe bribing is an option.
For the next quarter

Paul and I were going to try to get together at the All Hands and discuss what's next.

We don't really have an agenda. I know I look at the issue tracker and go, "ugh" and that's about where my energy level is these days.

Possible things to tackle in the next 6 months off the top of my head:

If you're interested in meeting up with us, toss me an email at willkg at mozilla dot com.

Categorieën: Mozilla-nl planet

Daniel Stenberg: quic wg interim Kista

Mozilla planet - vr, 08/06/2018 - 08:53

The IETF QUIC working group had its fifth interim meeting the other day, this time in Kista, Sweden hosted by Ericsson. For me as a Stockholm resident, this was ridiculously convenient. Not entirely coincidentally, this was also the first quic interim I attended in person.

We were thirty-something people gathered in a room without windows, with another dozen or so participants joining remotely. This being a meeting in a series, most people already knew each other from before, so the atmosphere was relaxed and friendly. Lots of the participants have also been involved in other protocol development and standards work before. Many familiar faces.


As QUIC is supposed to be done "soon", the emphasis is now on closing issues, postponing some work to "QUICv2", and making sure decisions get made on the outstanding question marks.

Kazuho did a quick run-through with some info from the interop days prior to the meeting.

After MT's initial explanation of where we're at for the upcoming draft-13, Ian took us on a deep dive into the Stream 0 Design Team report. This is a pretty radical change to the wire format of the quic protocol and to how TLS is handled.

The existing draft-12 approach...

Is suggested to instead become...

What's perhaps the most interesting takeaway here is that the new format doesn't use TLS records anymore, but simplifies a lot of other things. Not using TLS records while still doing TLS means that a QUIC implementation needs to get data from the TLS layer using APIs that existing TLS libraries don't typically provide. PicoTLS, Minq, BoringSSL and NSS already have, or will soon provide, the necessary APIs. Slightly behind, OpenSSL should offer them in a nightly build soon, but the impression is that an actual OpenSSL release is still a bit away.

EKR continued the theme. He talked about the quic handshake flow and among other things explained how 0-RTT and early data work. Taken out of that context, I consider this slide (shown below) fairly funny, because it makes it look far from simple to me. But it shows communication in the different layers, how the acks go, etc.


Mike then presented the state of HTTP over quic. The frames are no longer that similar to the HTTP/2 versions. Work is done to ensure that the HTTP layer doesn't need to refer or "grab" stream IDs from the transport layer.

There was a rather lengthy discussion around how to handle "placeholder streams" like the ones Firefox uses over HTTP/2 to create "anchors" on which to make dependencies but are never actually used over the wire. The nature of the quic transport makes those impractical and we talked about what alternatives there are that could still offer similar functionality.

The subject of priorities and dependencies and if the relative complexity of the h2 model should be replaced by something simpler came up (again) but was ultimately pushed aside.


Alan presented the state of QPACK, the HTTP header compression algorithm for hq (HTTP over QUIC). It is not wire compatible with HPACK anymore and there have been some recent improvements and clarifications done.

Alan also did a great step-by-step walk-through how QPACK works with adding headers to the dynamic table and how it works with its indices etc. It was very clarifying I thought.

The discussion about the static table for the compression basically ended with us agreeing that we should just agree on a fairly small fixed table without a way to negotiate the table. Mark said he'd try to get some updated header data from some server deployments to get another data set than just the one from WPT (which is from a single browser).

Interop-testing of QPACK implementations can be done by encoding, shuffling, and decoding a HAR file and comparing the results with the source data. Just do it, and talk to Alan!

And the first day was over. A fully packed day.


Magnus started off with some heavy stuff, talking about Explicit Congestion Notification in QUIC, how it is intended to work, and some remaining issues.

He also got into the subject of ACK frequency and how the current model isn't ideal in every situation, causing it to work like the image below (from Magnus' slide set):

Interestingly, it turned out that several of the implementers already basically had implemented Magnus' proposal of changing the max delay to min(RTT/4, 25 ms) independently of each other!
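As a formula the proposal is tiny; here is a sketch of just the cap (the function name is mine, and times are in seconds):

```python
# Proposed cap on ack delay: min(RTT/4, 25 ms). Ack promptly on
# short paths, but never wait longer than 25 ms on long ones.
def max_ack_delay(rtt):
    """Upper bound on how long a receiver delays its ACK."""
    return min(rtt / 4.0, 0.025)
```

On a 200 ms path the 25 ms ceiling wins; on a 40 ms path the RTT/4 term does.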

mvfst deployment

Subodh took us on a journey with some great insights from Facebook's internal deployment of mvfst, their QUIC implementation. Getting some real-life feedback is useful, and with over 100 billion requests/day, it seems they did give this a good run.

Since their usage and stack for this are a bit use-case specific, I'm not sure how relevant or universal their performance numbers are. They showed roughly the same CPU and memory use, with a 70% RPS rate compared to h2 over TLS 1.2.

He also entertained us with some "fun issues" from bugs and debugging sessions they've done and learned from. Awesome.

The story highlights the need for more tooling around QUIC to help developers and deployers.

Load balancers

Martin talked about load balancers and servers, and how they could or should communicate to work correctly with routing and connection IDs.

The room didn't seem overly thrilled about this work and mostly offered other ways to achieve the same results.

Implicit Open

The last session of the day, and of the entire meeting, was MT going through a few things that still needed discussion or closure: stateless reset, and the rather big bike-shed issue of implicit open. The latter is the question of whether opening a stream with ID N + 1 implicitly also opens the stream with ID N. I believe we ended with a slight preference for the implicit approach, and this will be taken to the list for a consensus call.

Frame type extensibility

How should the QUIC protocol allow extensibility? The oldest still-open issue in the project can be solved or satisfied in numerous different ways, and the discussion waved back and forth for a while, debating the merits and downsides of various approaches, until the group more or less agreed on a fairly simple and straightforward approach: extensions will announce support for a feature, which then may or may not involve one or more new frame types (to be kept in a registry).

We proceeded to discuss other issues until "closing time", which was set to be 16:00 today. This was just two days of pushing forward, but it still felt quite intense, and my personal impression is that a lot of good progress was made here that took the protocol a good step forward.

The facilities were lovely and Ericsson was a great host for us. The Thursday afternoon cakes were great! Thank you!

Coming up

There's an IETF meeting in Montreal in July and there's a planned next QUIC interim probably in New York in September.

Categorieën: Mozilla-nl planet

Mozilla Open Design Blog: Paris, Munich, & Dresden: Help Us Give the Web a Voice!

Mozilla planet - vr, 08/06/2018 - 02:19

Text available in: English | Français | Deutsch

In July, our Voice Assistant Team will be in France and Germany to explore trust and technology adoption. We’re particularly interested in how people use voice assistants and how people listen to content like Pocket and podcasts. We would like to learn more about how you use technology and how a voice assistant or voice user interface (VUI) could improve your Internet and open web experiences. We will be conducting a series of in-home interviews and participatory design sessions. No prior voice assistant experience needed!

We would love to meet folks in person in:

Paris: July 3 – 6, 2018
Munich: July 9 – 13, 2018
Dresden: July 16 – 20, 2018

If you are interested in participating in our in home interviews (2 hours) or participatory design sessions (1.5 hours), please let us know! We’d love to meet you in-person!

If you are interested in meeting us in Paris, please fill out this form.
If you are interested in meeting us in Germany, please fill out this form.

All information will be held under Mozilla’s Privacy Policy.


Paris, Munich & Dresde : aidez-nous à donner une voix au Web !

En juillet, notre équipe Assistants vocaux sera en France et en Allemagne pour explorer la confiance et l’adoption de ces technologies. Nous sommes particulièrement intéressés par la façon dont les gens utilisent les assistants vocaux et par comment ils écoutent du contenu comme Pocket ou des podcasts. Nous aimerions en savoir plus sur la façon dont vous utilisez cette technologie et sur la façon dont un assistant vocal ou une interface utilisateur vocale (VUI) pourrait améliorer vos expériences sur le Web. Nous mènerons une série d’entrevues à domicile et de séances de conception participatives. Aucune utilisation d’assistant vocal n’est requise au préalable !

Nous aimerions rencontrer des gens en personne à :

Paris : du 3 au 6 juillet 2018,
Munich : du 9 au 13 juillet 2018,
Dresde : du 16 au 20 juillet 2018.

Si vous êtes intéressé(e)s à participer à nos interviews à domicile (2 heures) ou à des sessions de conception participative (1,5 heure), faites le nous savoir ! Nous aimerions vous rencontrer !

Si vous souhaitez nous rencontrer à Paris, veuillez remplir ce formulaire.
Si vous souhaitez nous rencontrer en Allemagne, veuillez remplir ce formulaire.

Toutes les informations seront conservées dans la politique de confidentialité de Mozilla.


Paris, München, & Dresden: Bitte helfen Sie uns, dem Internet eine Stimme zu geben!

Im Juli wird unser Voice Assistant Team in Frankreich und Deutschland sein, um Technologie-Akzeptanz und -Vertrauen zu erkunden. Uns interessiert besonders, wie Menschen Sprachassistenten benutzen und wie Menschen Inhalte wie Pocket und Podcasts hören. Wir würden gerne mehr darüber erfahren, wie Sie Technologie benutzen und wie ein Sprachassistent oder eine Sprachbenutzeroberfläche (VUIs) Ihr Internet- und freies Weberlebnis verbessern könnte. Wir werden eine Reihe von In-Home-Interviews und partizipativen Design-Sessions durchführen. Keine vorherige Erfahrung des Sprachassistenten erforderlich!

Wir würden uns freuen Sie persönlich zu treffen in:

Paris: 3. – 6. Juli 2018;
München: 9. – 13. Juli 2018;
Dresden: 16. – 20. Juli 2018.

Wenn Sie daran interessiert sind, an unseren In-Home-Interviews (2 Stunden) oder partizipativen Design-Sessions (1,5 Stunden) teilzunehmen, lassen Sie es uns bitte wissen! Wir würden uns freuen, Sie persönlich zu treffen!

Wenn Sie Interesse haben, uns in Paris zu treffen, füllen Sie bitte dieses Formular aus.
Wenn Sie Interesse haben, uns in Deutschland zu treffen, füllen Sie bitte dieses Formular aus.

Alle Informationen werden unter der Datenschutzerklärung von Mozilla gespeichert.

The post Paris, Munich, & Dresden: Help Us Give the Web a Voice! appeared first on Mozilla Open Design.

Categorieën: Mozilla-nl planet

Michael Comella: Fixing Content Scripts on GitHub

Mozilla planet - vr, 08/06/2018 - 02:00

When writing a WebExtension using content scripts on GitHub, you’ll quickly find they don’t work as expected: the content scripts won’t run when clicking links, e.g. when clicking into an issue from the issues list.

Content scripts ordinarily reload for each new page visited but, on GitHub, they don’t. This is because links on GitHub mutate the DOM and use the history.pushState API instead of loading pages the standard way, which would create an entirely new DOM per page.

I wrote a content script to fix this, which you can easily drop into your own WebExtensions. The script works by adding a MutationObserver that will check when the DOM has been updated, thus indicating that the new page has been loaded from a user’s perspective, and notify the WebExtension about this event.

If you want to try it out, the source is on GitHub. You can also check out a sample.


The seemingly “correct” approach would be to create a history.pushState observer using the webNavigation.onHistoryStateUpdated listener. However, this listener does not work as expected: it’s called twice – once before the DOM has been mutated and once after – and I haven’t found a good way to distinguish them other than to look at changes in the DOM, which is already the approach my solution takes.

Categorieën: Mozilla-nl planet

Dave Townsend: Searchfox in VS Code

Mozilla planet - do, 07/06/2018 - 21:21

I spend most of my time developing, flipping back and forth between VS Code and Searchfox. VS Code is a great editor, but it has nowhere near the speed needed to do searches over the entire tree, at least on my machine. Searchfox, on the other hand, is pretty fast. But there’s something missing. I usually want to search Searchfox for something I found in the code. Then I want to get the file I found in Searchfox open in my editor.

Luckily VS Code has a decent extension system that allows you to add new features, so I spent some time yesterday evening building an extension to integrate some of Searchfox’s functionality into VS Code. With the extension installed, you can search Searchfox for something from the code editor or pop open an input box to write your own query. The results show up right in VS Code.

A screenshot of Searchfox displayed in VS Code<figcaption class="wp-caption-text">Searchfox in VS Code</figcaption>

Click on a result in Searchfox and it will open the file in an editor in VS Code, right at the line you wanted to see.

It’s pretty early code so the usual disclaimers apply, expect some bugs and don’t be too surprised if it changes quite a bit in the near-term. You can check out the fairly simple code (rendering the Searchfox page is the hardest part of it) on GitHub.

If you want to give it a try, install the extension from the VS Code Marketplace or find it by searching for “Searchfox” in VS Code itself. Feel free to file issues for bugs or improvements that would be useful or of course submit pull requests of your own! I’d love to hear if you find it useful.

Categorieën: Mozilla-nl planet

Zibi Braniecki: Pseudolocalization in Firefox

Mozilla planet - do, 07/06/2018 - 19:47

One of the core projects we did over 2017 was a major overhaul of the Localization and Internationalization layers in Gecko, and all throughout the first half of 2018 we were introducing Fluent into Firefox.

All of that work was “behind the scenes” and laid the foundation to enable us to bring higher level improvements in the future.

Today, I’m happy to announce that the first of those high-level features has just landed in Firefox Nightly!


Pseudolocalization is a technology for testing the localizability of software UI. It allows developers to check how the UI they are working on will look when translated, without having to wait for translations to become available.

It shortens the Test-Driven Development cycle and lowers the burden of creating localizable UI.

Here’s a demo of how it works:

How to turn it on?

At the moment, we don’t have any UI for this feature. You need to create a new preference called intl.l10n.pseudo and set its value to accented for a left-to-right, ~30% longer strategy, or bidi for a right-to-left strategy. (more documentation).
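To give a feel for what an accented strategy does to a string, here is a toy transform. This is not Fluent’s actual implementation; the vowel mapping and the tilde padding are made up for this sketch.

```python
# Toy "accented" pseudolocalization: swap vowels for accented
# look-alikes so hard-coded (untranslatable) strings stand out,
# and pad the string so truncation bugs show up.
ACCENTED = str.maketrans("aeiouyAEIOUY", "áéíóúýÁÉÍÓÚÝ")

def pseudolocalize(text, expand=0.3):
    """Return an accented, roughly 30%-longer version of *text*."""
    if not text:
        return text
    padding = "~" * max(1, round(len(text) * expand))
    return text.translate(ACCENTED) + padding
```

The transformed UI stays readable, which is the point: you can still click around while spotting strings that didn’t go through the localization system.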

If you test the bidi strategy you also will likely want to switch another preference – intl.uidirection – to 1. This is because right now the directionality of text and layout are not connected. We will improve that in the future.

We’ll be looking into ways to expose this functionality in the UI, and if you have any ideas or suggestions for what you’d like to see, let’s talk!

Nitty-gritty details

Although the feature may seem simple to add, and the actual patch that adds it was less than 100 lines long, it took many years of prototyping and years of development to build the foundation layers to allow for it.

Many of the design principles of Project Fluent combined with the vision shaped by the L10n Drivers Team at Mozilla allowed for dynamic runtime locale switching and declarative UI localization bindings.

Thanks to all of that work, we don’t have to require special builds or increase the bundle size for this feature to work. It comes practically for free, and we can extend and fine-tune pseudolocalization strategies on the fly.


If that feature looks cool, in the esoteric way localization and internationalization can, please, make sure to high-five the people who put a lot of work to get this done: Staś Małolepszy, Axel Hecht, Francesco Lodolo, Jeff Beatty and Dave Townsend.

More features are coming! Stay tuned.

Categorieën: Mozilla-nl planet

Daniel Glazman: Browser detection inside a WebExtension

Mozilla planet - do, 07/06/2018 - 15:14

Just for the record, if you really need to know about the browser container of your WebExtension, do NOT rely on StackOverflow answers... Most of them are based, directly or not, on the User Agent string. So spoofable, so unreliable. Some will recommend to rely on a given API, implemented by Firefox and not Edge, or Chrome and not the others. In general valid for a limited time only... You can't even rely on chrome, browser or msBrowser since there are polyfills for that to make WebExtensions cross-browser.

So the best and cleanest way is probably to rely on chrome.extension.getURL("/"). The returned URL can start with "moz", "chrome" or "ms-browser", and that is unlikely to change in the near future. It's simple to code, and works in both content and background scripts.
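Sketched in Python for clarity (in the extension itself this check runs in JavaScript on the string returned by chrome.extension.getURL("/"); the function name is mine):

```python
# Classify the hosting browser from the extension's base URL scheme.
def browser_from_extension_url(url):
    if url.startswith("moz-extension://"):
        return "firefox"
    if url.startswith("chrome-extension://"):
        return "chrome"  # also other Chromium-based browsers
    if url.startswith("ms-browser-extension://"):
        return "edge"
    return "unknown"
```

Unlike User Agent sniffing, the scheme isn't something a page (or a polyfill) can spoof from inside the extension.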

My pleasure :-)

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: More Common Voices

Mozilla planet - do, 07/06/2018 - 11:25

Today we are excited to announce that Common Voice, Mozilla’s initiative to crowdsource a large dataset of human voices for use in speech technology, is going multilingual! Thanks to the tremendous efforts from Mozilla’s communities and our deeply engaged language partners you can now donate your voice in German, French and Welsh, and we are working to launch 40+ more as we speak. But this is just the beginning. We want Common Voice to be a tool for any community to make speech technology available in their own language.

Since we launched Common Voice last July, we have collected hundreds of thousands of voice samples in English through our website and iOS app. Last November, we published the first version of the Common Voice dataset. This data has been downloaded thousands of times, and we have seen the data being used in commercial voice products as well as open-source software like Kaldi and our very own speech recognition engine, project Deep Speech.

Up until now, Common Voice has only been available for voice contributions in English. But the goal of Common Voice has always been to support many languages so that we may fulfill our vision of making speech technology more open, accessible, and inclusive for everyone. That is why our main effort these last few months has been around growing and empowering individual language communities to launch Common Voice in their parts of the world, in their local languages and dialects.

In addition to localizing the website, these communities are populating Common Voice with copyright-free sentences for people to read that have the required characteristics for a high quality dataset. They are also helping promote the site in their countries, building a community of contributors, with the goal of growing the total number of hours of data available in each language.

In addition to English, we are now collecting voice samples in French, German and Welsh. And there are already more than 40 other languages on the way — not only big languages like Spanish, Chinese or Russian, but also smaller ones like Frisian, Norwegian or Chuvash. For us, these smaller languages are important because they are often under-served by existing commercial speech recognition services. And so by making this data available, we can empower entrepreneurs and communities to address this gap on their own.

Going multilingual marks a big step for Common Voice and we hope that it’s also a big step for speech technology in general. Democratizing voice technology will not only lower the barrier for global innovation, but also the barrier for access to information. Especially so for people who traditionally have had less of this access — for example, the visually impaired, people who never learned to read, children, the elderly and many others.

We are thrilled to see the growing support we are getting to build the world’s largest public, multi-language voice dataset. You can help us grow it right now by donating your voice. You can also use the iOS app. If you would like to help bring Common Voice and speech technology to your language, visit our language page. And if you are part of an organization and have an idea for participating in this project, please get in touch.

Our Forum gives more details on how to help, as well as being a great place to ask questions and meet the communities.

Special Thanks

We would like to thank our Speech Advisory Group, people who have been expert advisors and contributors to the Common Voice project:

  • Francis Tyers — Assistant Professor of Computational Linguistics at Higher School of Economics in Moscow
  • Gilles Adda — Speech scientist
  • Thomas Griffiths — Digital Services Officer, Office of the Legislative Assembly, Australia
  • Joshua Meyer — PhD candidate in Speech Recognition
  • Delyth Prys — Language technologies at Bangor University research center
  • Dewi Bryn Jones — Language technologies at Bangor University research center
  • Wael Farhan — MS in Machine Learning from UCSD, currently doing research for Arabic NLP at
  • Eren Gölge — Machine learning scientist currently working on TTS for Mozilla
  • Alaa Saade — Senior Machine Learning Scientist @ Snips (Paris)
  • Laurent Besacier — Professor at Université Grenoble Alpes, NLP, speech processing, low resource languages
  • David van Leeuwen — Speech Technologist
  • Benjamin Milde — PhD candidate in NLP/speech processing
  • Shay Palachy — M.Sc. in Computer Science, Lead Data Scientist in a startup


Common Voice complements Mozilla’s work in the field of speech recognition, which runs under the project name Deep Speech: an open-source speech recognition engine that approaches human accuracy, released in November 2017. Together with the growing Common Voice dataset we believe this technology can and will enable a wave of innovative products and services, and that it should be available to everyone.

More Common Voices was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Parlez-vous Deutsch? Rhagor o Leisiau i Common Voice

Mozilla planet - do, 07/06/2018 - 10:00

We’re very proud to be announcing the next phase of the Common Voice project. It’s now available for contributors in three new languages, German, French and Welsh, with 40+ other languages on their way! But this is just the beginning. We want Common Voice to be a tool for any community to make speech technology available in their own language.

Speech interfaces are the next frontier for the Internet. Project Common Voice is our initiative to build a global corpus of open voice data to be used to train machine-learning algorithms to power the voice interfaces of the future. We believe these interfaces shouldn’t be controlled by a few companies as gatekeepers to voice-enabled services, and we want users to be understood consistently, in their own languages and accents.

As anyone who has studied the economics of the Internet knows, services chase money. And so it’s quite natural that developers and publishers seek to develop for the audience that will best reward their efforts. What we see as a consequence is an Internet that is heavily skewed towards English, in a world where English is only spoken by 20% of the global population, and only 5% natively. This is increasingly going to be an accessibility issue, as Wired noted last year, “Voice Is the Next Big Platform, Unless You Have an Accent”.

Inevitably, English is becoming a global language, spoken more and more widely, and this is a trend that was underway before the emergence of the Internet. However, the skew of Internet content to English is certainly accelerating this. And while global communications may be becoming easier, there is also a cultural wealth that we should preserve. Native languages provide a deeper shared cultural context, down to the level of influencing our thought patterns. This is a part of our humanity we surely wish to retain and support with technology. In doing so, we’re upholding a proud Mozilla tradition of enabling local ownership by a global community: Firefox is currently offered in 90 languages (and counting), powered by volunteers near you.

Common Voice contribution sprints in Berlin (credit: Michael Kohler), Mexico City (credit: Luis A. Sánchez), Jakarta (credit: Irayani Queencyputri) and Taipei (credit: Irvin Chen), from the top left to the bottom right

With Common Voice it’s the same volunteer passion that drives the project further, and we’re grateful to all the contributors who have already said, “We want to help bringing speech recognition technology to my part of the world – what can we do?”. It is the underlying stories that also make this project so rewarding for me personally:

In Indonesia 20 community members came to our community space in Jakarta for a meet-up to write up sentences for the text corpus that will become the basis for voice recordings. They went into overdrive and submitted around 4,000 sentences within two days.

In Kenya a group of volunteers interested in Mozilla projects found out about Common Voice and started both localizing the website and submitting sentences in Swahili, Jibana and Kikiyu, all highly underrepresented languages, which we’re extremely happy to support. This is in addition to working with language experts in these communities like Laurent Besacier, the initiator of ALFFA, an interdisciplinary project bundling resources and expertise in speech analysis and speech technologies for African languages.

If we look at the country where I’m from, there has been one particular contributor to the Common Voice github project since the very early days. He originally contributed to the English effort, but he is German and wanted to see Common Voice come to Germany. He set himself on a strict schedule, wrote a few sentences every day for the next 6 months (while commuting to school or work), and collected 11,000 (!) sentences, ranging from poetry to day-to-day conversations.

Speaking of which: Another German contributor joined the Global Sprint in our Berlin office, utterly frustrated about a lengthy but fruitless discussion at the post office (Sounds familiar, Germany?). He may not have gotten his package, but I’d like to believe he had his personal cathartic moment when he submitted his whole experience in written form. Now Germans everywhere will help him voice his frustrations.

These are only a few of many wonderful examples from around the world – Taiwan, Slovenia, Macedonia, Hungary, Brazil, Serbia, Thailand, Spain, Nepal, and many more. They show that anyone can help grow the Common Voice project. Any individual or organization that has an interest in its native language, or an interest in open voice interfaces, will find it worth their while. You can contribute your voice at, or if you have a larger corpus of transcribed speech data, we’d love to hear from you.


Common Voice complements Mozilla’s work in the field of speech recognition, which runs under the project name Deep Speech: an open-source speech recognition engine that approaches human accuracy, released in November 2017. Together with the growing Common Voice dataset we believe this technology can and will enable a wave of innovative products and services, and that it should be available to everyone.

The post Parlez-vous Deutsch? Rhagor o Leisiau i Common Voice appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

David Lawrence: Happy BMO Push Day!

Mozilla planet - wo, 06/06/2018 - 23:43

release tag

the following changes have been pushed to

  • [1430905] Remove legacy phabbugz code that is no longer needed
  • [1466159] crash graph is wrong
  • [1466122] Change “Reviews Requested of You” to show results are from Phabricator and not from BMO
  • [1465889] field should be red instead of black

discuss these changes on

Categorieën: Mozilla-nl planet

Armen Zambrano: AreWeFastYet UI refresh

Mozilla planet - wo, 06/06/2018 - 21:44

For a long time Mozilla’s JS team and others have been using to track the JS engine performance against various benchmarks.

Screenshot of landing page

In the last little while, there’s been work moving those benchmarks to another continuous integration system and we have the metrics in Mozilla’s Perfherder. This rewrite will focus on using the new generated data.

If you’re curious on the details about the UI refresh please visit this document. Feel free to add feedback. Stay tuned for an update next month.

Categorieën: Mozilla-nl planet

Mark Côté: Phabricator and Lando Launched

Mozilla planet - wo, 06/06/2018 - 17:11

The Engineering Workflow team at Mozilla is happy to announce that Phabricator and Lando are now ready for use with mozilla-central! This represents about a year of work integrating Phabricator with our systems and building out Lando.

There are more details in my post to the dev.platform list.

Categorieën: Mozilla-nl planet

The Firefox Frontier: A Socially Responsible Way to Internet

Mozilla planet - wo, 06/06/2018 - 16:32

Choices matter. That might sound flippant or obvious, but we’re not just talking about big, life-changing decisions. Little choices — daily choices — add up in a big way, online and off. This is … Read more

The post A Socially Responsible Way to Internet appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Gervase Markham: A Case for the Total Abolition of Software Patents

Mozilla planet - wo, 06/06/2018 - 10:27

A little while back, I wrote a piece outlining the case for the total abolition (or non-introduction) of software patents, as seen through the lens of “promoting innovation”. Few of the arguments are new, but the “Narrow Road to Patent Goodness” presentation of the information is quite novel as far as I know, and may form a good basis for anyone trying to explain all the possible problems with software (or other) patents.

You can find it on my website.

Categorieën: Mozilla-nl planet

William Lachance: Mission Control 1.0

Mozilla planet - di, 05/06/2018 - 23:50

Just a quick announcement that the first “production-ready” version of Mission Control just went live yesterday, at this easy-to-remember URL:

For those not yet familiar with the project, Mission Control aims to track release stability and quality across Firefox releases. It is similar in spirit to arewestableyet and other crash dashboards, with the following new and exciting properties:

  • Uses the full set of crash counts gathered via telemetry, rather than the arbitrary sample that users decide to submit to crash-stats
  • Results are available within minutes of ingestion by telemetry (although be warned initial results for a release always look bad)
  • The denominator in our crash rate is usage hours, rather than the probably-incorrect calculation of active-daily-installs used by arewestableyet (not a knock on the people who wrote that tool, there was nothing better available at the time)
  • We have a detailed breakdown of the results by platform (rather than letting Windows results dominate the overall rates due to its high volume of usage)
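As a rough illustration of what using usage hours as the denominator means, here is a hypothetical sketch of that rate calculation. The function and field names are illustrative only, not Mission Control's actual code or schema:

```javascript
// Crash rate normalized by usage hours (per 1000 hours), the kind of
// denominator Mission Control uses instead of active daily installs.
// Each measurement is assumed to carry crash and usage-hour counts.
function crashRatePerThousandHours(measurements) {
  const totals = measurements.reduce(
    (acc, m) => ({
      crashes: acc.crashes + m.crashes,
      usageHours: acc.usageHours + m.usageHours,
    }),
    { crashes: 0, usageHours: 0 }
  );
  if (totals.usageHours === 0) {
    return 0; // no usage reported yet, e.g. right after a release ships
  }
  return (totals.crashes / totals.usageHours) * 1000;
}

// Example: 10 crashes over 5000 usage hours → 2 crashes per 1000 hours.
```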

In general, my hope is that this tool will provide a more scientific and accurate idea of release stability and quality over time. There’s lots more to do, but I think this is a promising start. Much gratitude to kairo, calixte, chutten and others who helped build my understanding of this area.

The dashboard itself is an easier thing to show than to talk about, so I recorded a quick demonstration of some of the dashboard’s capabilities and published it on Air Mozilla:


Categorieën: Mozilla-nl planet

Daniel Pocock: Public Money Public Code: a good policy for FSFE and other non-profits?

Mozilla planet - di, 05/06/2018 - 22:40

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it also apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24; votes: 0 for, 21 against, 2 abstentions).

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Facebook Must Do Better

Mozilla planet - di, 05/06/2018 - 18:53

The recent New York Times report alleging expansive data sharing between Facebook and device makers shows that Facebook has a lot of work to do to come clean with its users and to provide transparency into who has their data. We raised these transparency issues with Facebook in March and those concerns drove our decision to pause our advertising on the platform. Despite congressional testimony and major PR campaigns to the contrary, Facebook apparently has yet to fundamentally address these issues.

In its response, Facebook has argued that device partnerships are somehow special and that the company has strong contracts in place to prevent abuse. While those contracts are important, they don’t remove the need to be transparent with users and to give them control. Suggesting otherwise, as Facebook has done here, indicates the company still has a lot to learn.

The post Facebook Must Do Better appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Henrik Skupin: My 15th Bugzilla account anniversary

Mozilla planet - di, 05/06/2018 - 18:51

Exactly 15 years ago, at “2003-06-05 09:51:47 PDT”, my journey in Bugzilla started. When I created my account back then, I would never have imagined where all these endless hours of community work would end up. And even now I cannot predict what it will look like in another 15 years…

Here are some stats from my activities on Bugzilla:

  • Bugs filed: 4690
  • Comments made: 63947
  • Assigned to: 1787
  • Commented on: 18579
  • QA-Contact: 2767
  • Patches submitted: 2629
  • Patches reviewed: 3652


Categorieën: Mozilla-nl planet