
Hacks.Mozilla.Org: Hack on MDN: Building useful tools with browser compatibility data

Mozilla planet - Thu, 29/03/2018 - 20:00

From Friday, March 16 to Sunday, March 18, 2018, thirty-four people met in Mozilla’s Paris office to work on improving MDN’s Browser Compat Data. The amazing results included 221 pull requests that improved the quality of our data and created, prototyped, and improved tools and dashboards.

<figcaption>People at work during Hack on MDN Paris 2018 in the gorgeous Garnier ballroom.</figcaption>

Hack on MDN events

Hack on MDN evolved from the documentation sprints the MDN team organized between 2010 and 2013, which brought together a core team of volunteers to write and localize MDN content over a weekend. In 2014, we expanded the scope of the sprints by inviting people with different backgrounds; not just technical writers and wordsmiths, but also people who like to code or have UX design skills.

The first two Hack on MDN events happened in Paris in 2014 and Berlin in 2015. We took a break for a few years but missed the events and the community spirit they embody. This Hack on MDN in Paris was the first of two planned for this year (the next will take place in the autumn). For March, we decided to bring together an emerging MDN community and maximize productivity by focusing on a single theme with a broad scope: browser compatibility data.

The Hack on MDN format is a combination of unconference and hackathon; participants pitch projects and commit to working on concrete tasks (rather than meetings or long discussions) that can be completed in three days or less. People choose projects in which a group can make significant progress over a weekend.

Browser Compatibility Data

The web platform is unique in that it aims to create a consistent experience on top of different tools, browsers or servers. You create your website once, and it works in every browser, regardless of device, OS, or tool choice.

In an ever-evolving connected world, this is incredibly hard to achieve, and browsers implement the platform at different paces and with different priorities. Even if they aim for the same goals, it’s unlikely that a new feature is implemented by all major actors at the same time. Knowing the level of support in each browser helps developers make informed decisions about which technologies are mature enough to use, and which to avoid (e.g. unstable, non-standard, or obsolete ones).

MDN has collected this kind of browser compatibility information for the last decade and we use it to improve our reference pages. Integrating this info directly into MDN pages has had its own drawbacks: it’s been difficult to maintain, and next to impossible to reuse elsewhere. A few years ago, we decided to move this information into a machine-readable format so that it can be reused.

<figcaption>A typical Browser Compatibility Data (BCD) table as found on MDN Web docs.</figcaption>

Under Florian Scholz‘s lead, we are now migrating browser compatibility data into a JSON database, and we are about 60% done (including all of HTTP, HTML, JS, and even WebExtensions). We are working hard on getting all the Web APIs in it, as well as SVG, WebDriver, and MathML information.

At the same time, we are experimenting with reusing compatibility information in new tools, some developed internally and some externally. We publish our data weekly in the form of an npm package that is guaranteed to stay up-to-date as MDN itself uses it. We are our own first customer!
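For readers curious what the machine-readable format looks like, here is an illustrative sketch in JavaScript. The field names (__compat, support, version_added, status) follow the BCD schema, but the feature name and the values are invented for the example, not real compat data.

```javascript
// Illustrative sketch of one BCD entry. Field names follow the BCD schema;
// the feature name and values below are made up, not real compat data.
const bcd = {
  css: {
    properties: {
      "example-property": {
        __compat: {
          support: {
            firefox: { version_added: "60" }, // supported since version 60
            chrome: { version_added: false }, // false = not supported
            edge: { version_added: null },    // null = support unknown
          },
          status: { experimental: false, standard_track: true, deprecated: false },
        },
      },
    },
  },
};

// Tools walk this tree to render tables or run quality checks:
const support = bcd.css.properties["example-property"].__compat.support;
console.log(Object.keys(support)); // [ 'firefox', 'chrome', 'edge' ]
```

Because the data is plain JSON, the same tree can feed MDN’s own tables, linters, or any external tool that consumes the npm package.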

Our goal this year is to have 100% of the MDN compatibility info in the JSON database, as well as to start reusing this data in tools beyond our inline compatibility tables.

The 2018 Paris event

The level of interest around browser compatibility data (BCD) and the sheer amount of work left to do on it made it a natural candidate for the theme of the March event. The BCD community on Github is active and the event provided a great opportunity for contributors to meet in person.

<figcaption>When demoing, we weren’t afraid to dig deep in the code.</figcaption>

Thirty-four people from different backgrounds and organisations gathered in the splendid Mozilla Paris office: Mozilla employees (developers, writers, and even managers) from several different teams (MDN, Open Innovation, Web Compat, and WebDriver/Marionette), volunteers, and representatives from Google, Samsung, Microsoft, and the W3C (both on-site and remote).

On the first morning, Florian Scholz presented BCD and everyone introduced themselves, so people were not afraid to talk to each other during the event and got an overview of the skills available in the room. After the project pitching, people clustered into groups and the work began. It was interesting to watch people interact with others who’d either pitched an idea or had specific skills. In a quarter of an hour, everybody was already deep into hacking.

At the end of each afternoon, we gathered to demo the work that had been done. Saturday and Sunday morning we also held a set of lightning talks, where anyone could present anything, with the goal of opening our minds to other ideas.

We finished on Sunday with a final set of demos, and the outcome was truly amazing…


Participants made 221 PRs to our repository during Hack on MDN. So many projects were worked on that it is impossible to be exhaustive, but here are a few highlights.

Visualisation tools

Mozilla Tech Speaker and JavaScript hacker Istvan ‘Flaki’ Szmozsanszky created a tool that displays a compatibility table without the help of the server: it reads the BCD file and constructs the table directly in the browser. This is a fundamental piece of code that will allow us to easily embed compatibility tables everywhere, starting with our own pull requests on Github. Flaki went further by coding a feature to edit the JSON in the page and generate a PR from it, as well as studying how to display the differences between the current data and the new one in a visual way.

<figcaption>An example output of Flaki’s tool, generating a local inline BCD table.</figcaption>

John Whitlock (from MDN’s Dev team) and Anthony Maton worked on creating a bot for GitHub requests: they focused on the back-end groundwork that will allow easy code maintenance. They created a new repository and moved the rendering code into plain JS.

Will Bamberg, Eduardo Bouças, and Daniel Beck worked on a new macro displaying aggregate data in one table, like all animation-* CSS properties.

Data migration

The more data we have in our JSON format, the more accurate the MDN pages and tools using it will be. We had just over 60% of our original data migrated before the event and made significant progress on the remaining 40% over the weekend.

Under the lead of Jérémie Patonnier from Mozilla, Maxime Lo Re and Sebastian Zartner migrated most of the SVG element data, and that of their attributes during the weekend. Chris Mills, David Ross and Bruno Bruet did the same with a lot of Web APIs. The amount of data migrated is more or less equivalent to a quarter’s worth of work and is a significant step in our migration work. Well done!

Andreas Tolfsen, one of Mozilla’s WebDriver specialists, worked, with the help of Chris Mills, to bring basic WebDriver browser compatibility info to our repository, as well as to start documenting WebDriver on MDN.

Data quality

Our data is not perfect: we have some data errors (usually features marked as not supported when they actually are supported), missing data (we have a way of marking unknown data differently), and of course some real code errors.

Several projects were conducted in order to improve the quality of our dataset.

Mark Dittmer from Google worked to bridge the Confluence tool with MDN. He created a tool, mdn-confluence, that allows cross-checking of the information between both repositories.

Ada Rose Cannon and Peter O’Shaughnessy from Samsung created a tool that produces an initial set of data for Samsung Internet, which brings this important mobile browser to our repository. What makes this dataset even more interesting is that it has been designed to be repurposed for any Chromium-based browser, so we may be able to include QQ or UC browser info in our repository one day.

Erika Doyle, Libby McCormick, and Matt Wojciakowski from Microsoft participated remotely from the Seattle area and worked on some Edge-related data: updated EdgeHTML release dates and added Edge compat data to WebExtensions.

Scraping tools

Several people worked on taking existing data, on MDN or elsewhere, and using it to generate BCD JSON, totally or partially. These tools are valuable time-savers and will allow us to migrate data at a quicker pace.

Dominique Hazaël-Massieux from the W3C worked on a tool that takes a WebIDL as input and generates the skeleton of our BCD. This is extremely useful for all new APIs that we will want to document, as we only have to modify the values afterwards. Several PRs that Dominique submitted have been generated using this tool.

Kayce Basques from Google created a tool, MDN Crawler, which takes an MDN page and reads the browser compatibility data from it. Even if not all the data can be read correctly (the manually crafted tables do not always follow the same structure), it is able to extract a lot of information that can be manually tweaked afterwards. This is a big time saver for the migration. Kayce also published this tool as a service (with instructions).

External tools reusing the data

Eduardo Bouças worked on improving his add-on, compat-report, that produces a visual compatibility report inside Firefox Dev Tools.

<figcaption>Screenshot of compat-report from Eduardo Bouças.</figcaption>

Julien Gattelier fixed several problems with his tool, compat-tester, adding support for global HTML attributes. He also added a contribute mode that lists features that are not in the browser compatibility dataset, allowing a user or potential contributor to detect missing features!

Dennis Schubert from Mozilla’s Web Compat team, along with Julien Gattelier and Kayce Basques, brainstormed about a new tool reusing Julien’s compat-tester tool to produce a report about the state of web compatibility, by crawling significant websites.

Other projects

Kadir Topal created a dashboard enabling us to visualize the quality of our data and measure improvements we are making.

<figcaption>Example of output of the data quality dashboard.</figcaption>

What’s next?

There is a lot of follow-up work to do: we need to review all the PRs and do some work to integrate new prototypes and tools into our codebase or workflow. It is a good problem to have!

Overall, we will continue to migrate our browser compat data and improve its quality: the better the data is, the better the tools using it – and MDN Web docs itself – will be.

The most important outcome of this event is human: by working together we created new bonds and the relationships between participants will hopefully continue and grow, bringing extra awesomeness into the future of MDN Web Docs and the Browser Compatibility Data project.

Want to get involved? Not sure where to begin? Visit the MDN community on Discourse to learn about what we do and how you can make MDN more awesome with your contribution.

Categories: Mozilla-nl planet

The Mozilla Blog: Mozilla marks 20th anniversary with commitment to better human experiences online

Mozilla planet - Thu, 29/03/2018 - 16:00

This coming Saturday — March 31 — is Mozilla’s 20th anniversary. We’ve accomplished a fair amount in the first 20 years. We aim to accomplish even more in the next 20 years. To do this, we’ve modernized nearly every aspect of Mozilla, from Firefox to the many ways we connect people and technology.

We’re making our first major addition to the key principles that form the foundation for Mozilla’s work. These principles are set out in the Mozilla Manifesto, which was launched in 2007. The Mozilla Manifesto identifies ten principles that we work to build into Firefox and online life generally. The internet should be a global public resource, open and accessible to all. Individuals should have control of their experience. Safety is critical. Private commercial profit and social benefit should coexist in a healthy fashion. We use these principles regularly to describe Mozilla’s identity and inform our decision-making. You can see the Manifesto here.

Today we add four topics to the Mozilla Manifesto. We do this to explicitly address the quality of people’s experiences online.

  • We are committed to an internet that includes all the peoples of the earth — where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.
  • We are committed to an internet that promotes civil discourse, human dignity, and individual expression.
  • We are committed to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.
  • We are committed to an internet that catalyzes collaboration among diverse communities working together for the common good.

The full addendum is available on our website, where you are invited to share your support for these new principles via Twitter.

The Manifesto and the addendum will continue to guide our work every day — how we design products, build technology, build communities and work with others. We hope to encourage, create, lead and support many experiments in bringing these goals to life, and we hope to join with many others pursuing similar ideas.

The post Mozilla marks 20th anniversary with commitment to better human experiences online appeared first on The Mozilla Blog.


Firefox Test Pilot: You can edit, highlight and crop your screenshots!

Mozilla planet - Thu, 29/03/2018 - 15:07

Two weeks ago, Screenshots started shipping with the ability to draw on and re-crop shots. Keep an eye out for the little edit icon on the top-right corner of your ‘My Shots’ page.

<figcaption>The My Shots UI with the link to annotations options in the top-right corner</figcaption>

Why Annotations?

For a while, we’ve been getting feature requests for drawing, marking up and particularly, censoring sensitive information on shots, primarily because most people take screenshots for the purpose of sharing them. While annotations won’t give you a full-blown suite of editing tools, there’s a lot you can accomplish when it comes to the simple tasks.

What can you do?

We’re keeping it simple. The new annotations feature ships with a freehand drawing tool and a highlighter (that can double up as a redaction tool). We’ve also added the ability to re-crop shots after they’ve been taken. Saving a shot after editing overwrites the original shot entirely.

<figcaption>Highlight important information</figcaption>
<figcaption>Create freehand drawings</figcaption>
<figcaption>Re-crop your shots</figcaption>

Post-launch numbers

<figcaption>The dashboard showing the number of people using the annotations feature</figcaption>

Since we launched, a little over 6% of Screenshots users have used annotations. How do the new tools compare to each other? Cropping seems to be most popular, followed by highlighting/redaction and freehand drawing.

<figcaption>The number of events per annotation option, with cropping the most popular</figcaption>

What’s next?

We want to add more annotation tools like undo and redo. We’re also open to new feature requests, which you can file on GitHub. As always, contributions are more than welcome.

You can edit, highlight and crop your screenshots! was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Rust Programming Language Blog: Announcing Rust 1.25

Mozilla planet - Thu, 29/03/2018 - 02:00

The Rust team is happy to announce a new version of Rust, 1.25.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.25.0 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.25.0 on GitHub.

What’s in 1.25.0 stable

The last few releases have been relatively minor, but Rust 1.25 contains a bunch of stuff! The first one is straightforward: we’ve upgraded to LLVM 6 from LLVM 4. This has a number of effects, a major one being a step closer to AVR support.

A new way to write use statements has landed: nested import groups. If you’ve ever written a set of imports like this:

use std::fs::File;
use std::io::Read;
use std::path::{Path, PathBuf};

You can now write this:

// on one line
use std::{fs::File, io::Read, path::{Path, PathBuf}};

// with some more breathing room
use std::{
    fs::File,
    io::Read,
    path::{
        Path,
        PathBuf,
    },
};

This can reduce some repetition, and make things a bit more clear.

There are two big documentation changes in this release: first, Rust By Example is now included in the official Rust documentation! We’ll be redirecting the old domain there shortly. We hope this will bring more attention to a great resource, and you’ll get a local copy with your local documentation.

Second, back in Rust 1.23, we talked about the change from Hoedown to pulldown-cmark. In Rust 1.25, pulldown-cmark is now the default. We have finally removed the last bit of C from rustdoc, and now properly follow the CommonMark spec.

Finally, in RFC 1358, #[repr(align(x))] was accepted. In Rust 1.25, it is now stable! This attribute lets you set the alignment of your structs:

struct Number(i32);
assert_eq!(std::mem::align_of::<Number>(), 4);
assert_eq!(std::mem::size_of::<Number>(), 4);

#[repr(align(16))]
struct Align16(i32);
assert_eq!(std::mem::align_of::<Align16>(), 16);
assert_eq!(std::mem::size_of::<Align16>(), 16);

If you’re working with low-level stuff, control of these kinds of things can be very important!

See the detailed release notes for more.

Library stabilizations

The biggest story in libraries this release is std::ptr::NonNull<T>. This type is similar to *mut T, but is non-null and covariant. This blog post isn’t the right place to explain variance, but in a nutshell, NonNull<T>, well, guarantees that it won’t be null, which means that Option<NonNull<T>> has the same size as *mut T. If you’re building a data structure with unsafe code, NonNull<T> is often the right type for you!

libcore has gained a time module, containing the Duration type previously only available in libstd.

Additionally, the from_secs and from_millis functions associated with Duration were made const fns, allowing them to be used to create a Duration as a constant expression.

See the detailed release notes for more.

Cargo features

Cargo’s CLI has one really important change this release: cargo new will now default to generating a binary, rather than a library. We try to keep Cargo’s CLI quite stable, but this change is important, and is unlikely to cause breakage.

For some background, cargo new accepts two flags: --lib, for creating libraries, and --bin, for creating binaries, or executables. If you don’t pass one of these flags, in previous versions of Cargo, it would default to --lib. We made this decision because each binary (often) depends on many libraries, and so the library case is more common. However, this is incorrect; each library is depended upon by many binaries. Furthermore, when getting started, what you often want is a program you can run and play around with. It’s not just new Rustaceans though; even very long-time community members have said that they find this default surprising. As such, we’re changing it.

Similarly, cargo new previously would be a bit opinionated around the names of packages it would create. Specifically, if your package began with rust- or ended with -rs, Cargo would rename it. The intention was “well, it’s a Rust package; this information is redundant.” However, people feel quite strongly about naming, and when they bump into this, they’re surprised and often upset. As such, we’re not going to do that any more.

Many users love cargo doc, a way to generate local documentation for their Cargo projects. It’s getting a huge speed boost in this release, as now, it uses cargo check, rather than a full cargo build, so some scenarios will get faster.

Additionally, checkouts of git dependencies should be a lot faster, thanks to the use of hard links when possible.

See the detailed release notes for more.

Contributors to 1.25.0

Many people came together to create Rust 1.25. We couldn’t have done it without all of you.



Chris H-C: Annoying Graphs: Did the Facebook Container Add-on Result in More New Firefox Profiles?

Mozilla planet - Wed, 28/03/2018 - 19:56

Yesterday, Mozilla was in the news again for releasing a Firefox add-on called Facebook Container. The work of (amongst others) :groovecoder, :pdol, :pdehaan, :rfeeley, :tanvi, and :jkt, Facebook Container puts Facebook in a little box and doesn’t let it see what else you do on the web.

You can try it out right now if you’d like. It’s really just as simple as going to the Facebook Container page and clicking on the “+ Add to Firefox” button. From then on Facebook will only be able to track you with their cookies while you are actually visiting Facebook.

It’s easy-to-use, open source, and incredibly timely. So it quickly hit the usual nerdy corners of the web… but then it spread. Even Forbes picked it up. We started seeing incredible numbers of hits on the blogpost (I don’t have plots for that, sorry).

With all this positive press did we see any additional new Firefox users because of it?

Normally this is where I trot out the usual gimmick “Well, it depends on how you word the question.” “Additional” compared to what, exactly? Compared to the day before? The same day a week ago? A month ago?

In this case it really doesn’t depend. I can’t tell, no matter how I word the question. And this annoys me.

I mean, look at these graphs:

Here’s one showing the new-profile pings we receive each minute of some interesting days:

Summer Time lining up with Daylight Saving Time means that different parts of the world were installing Firefox at different times of the day. The shapes of the curves don’t line up, making it impossible to compare between days.

So here’s one showing the number of new-profile pings we received each day this month:

Yesterday’s numbers are low compared to other Tuesdays these past four weeks, but look at how low Monday’s numbers are! Clearly this is some weird kinda week, making it impossible to compare between weeks.

So here’s one showing approximate Firefox client counts of last April:

This highlights a seasonal depression starting the week of April 10 similar to the one shown in the previous plot. This is expected since we’re in the weeks surrounding Easter… but why did I look at last April instead of last March? Easter changes its position relative to the civil calendar, making it impossible to compare between years.

So, did we see any additional new Firefox users thanks to all of the hard work put into Facebook Container?




The Firefox Frontier: Survey Says, Firefox Loves Oddballs

Mozilla planet - Wed, 28/03/2018 - 19:15

For the second year in a row, we did a bit of informal censusing last month to get to know our users in the best way possible: anonymously and collectively.

The post Survey Says, Firefox Loves Oddballs appeared first on The Firefox Frontier.


Air Mozilla: The Joy of Coding - Episode 135

Mozilla planet - Wed, 28/03/2018 - 19:15

The Joy of Coding - Episode 135 mconley livehacks on real Firefox bugs while thinking aloud.



Air Mozilla: The Joy of Coding - Episode 134

Mozilla planet - Wed, 28/03/2018 - 19:00

The Joy of Coding - Episode 134 mconley livehacks on real Firefox bugs while thinking aloud.



Air Mozilla: Weekly SUMO Community Meeting, 28 Mar 2018

Mozilla planet - Wed, 28/03/2018 - 18:00

Weekly SUMO Community Meeting This is the SUMO weekly call


Hacks.Mozilla.Org: ES modules: A cartoon deep-dive

Mozilla planet - Wed, 28/03/2018 - 17:00

ES modules bring an official, standardized module system to JavaScript. It took a while to get here, though — nearly 10 years of standardization work.

But the wait is almost over. With the release of Firefox 60 in May (currently in beta), all major browsers will support ES modules, and the Node modules working group is currently working on adding ES module support to Node.js. And ES module integration for WebAssembly is underway as well.

Many JavaScript developers know that ES modules have been controversial. But few actually understand how ES modules work.

Let’s take a look at what problem ES modules solve and how they are different from modules in other module systems.

What problem do modules solve?

When you think about it, coding in JavaScript is all about managing variables. It’s all about assigning values to variables, or adding numbers to variables, or combining two variables together and putting them into another variable.

Code showing variables being manipulated

Because so much of your code is just about changing variables, how you organize these variables is going to have a big impact on how well you can code… and how well you can maintain that code.

Having just a few variables to think about at one time makes things easier. JavaScript has a way of helping you do this, called scope. Because of how scopes work in JavaScript, functions can’t access variables that are defined in other functions.

Two function scopes with one trying to reach into another but failing

This is good. It means that when you’re working on one function, you can just think about that one function. You don’t have to worry about what other functions might be doing to your variables.
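A minimal sketch of that isolation (the function names are invented; any JavaScript engine behaves this way):

```javascript
function first() {
  // `secret` lives only in first()'s scope.
  const secret = 1;
  return secret;
}

function second() {
  // second() cannot see `secret`; typeof on an undeclared name is "undefined".
  return typeof secret;
}

console.log(first());  // 1
console.log(second()); // "undefined"
```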

It also has a downside, though. It does make it hard to share variables between different functions.

What if you do want to share your variable outside of a scope? A common way to handle this is to put it on a scope above you… for example, on the global scope.

You probably remember this from the jQuery days. Before you could load any jQuery plug-ins, you had to make sure that jQuery was in the global scope.

Two function scopes in a global, with one putting jQuery into the global

This works, but there are some annoying problems that result.

First, all of your script tags need to be in the right order. Then you have to be careful to make sure that no one messes up that order.

If you do mess up that order, then in the middle of running, your app will throw an error. When the function goes looking for jQuery where it expects it — on the global — and doesn’t find it, it will throw an error and stop executing.

The top function scope has been removed and now the second function scope can’t find jQuery on the global

This makes maintaining code tricky. It makes removing old code or script tags a game of roulette. You don’t know what might break. The dependencies between these different parts of your code are implicit. Any function can grab anything on the global, so you don’t know which functions depend on which scripts.

A second problem is that because these variables are on the global scope, every part of the code that’s inside of that global scope can change the variable. Malicious code can change that variable on purpose to make your code do something you didn’t mean for it to, or non-malicious code could just accidentally clobber your variable.
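A small sketch of that failure mode, with a plain object standing in for the real global scope and invented “script” contents:

```javascript
// Sketch: three hypothetical scripts sharing one global object.
// Any script can overwrite what another one put there.
const globalScope = {};

// "script1" publishes a helper on the global, jQuery-style.
globalScope.formatPrice = (n) => "$" + n.toFixed(2);

// "script2" accidentally clobbers it with an unrelated value.
globalScope.formatPrice = "oops";

// "script3" expected the function and only fails at call time,
// far from the line that caused the problem.
let error;
try {
  globalScope.formatPrice(9.99);
} catch (e) {
  error = e.name;
}
console.log(error); // "TypeError"
```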

How do modules help?

Modules give you a better way to organize these variables and functions. With modules, you group the variables and functions that make sense to go together.

This puts these functions and variables into a module scope. The module scope can be used to share variables between the functions in the module.

But unlike function scopes, module scopes have a way of making their variables available to other modules as well. They can say explicitly which of the variables, classes, or functions in the module should be available.

When something is made available to other modules, it’s called an export. Once you have an export, other modules can explicitly say that they depend on that variable, class or function.

Two module scopes, with one reaching into the other to grab an export

Because this is an explicit relationship, you can tell which modules will break if you remove another one.

Once you have the ability to export and import variables between modules, it makes it a lot easier to break up your code into small chunks that can work independently of each other. Then you can combine and recombine these chunks, kind of like Lego blocks, to create all different kinds of applications from the same set of modules.
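A sketch of that explicit relationship. The module source here is invented, and it is loaded from a data: URL purely so the example is self-contained (dynamic import of data: URLs works in Node 12.17+ and in modern browsers):

```javascript
// One module exports a value; another explicitly imports it.
const source = 'export const greet = (name) => "hello, " + name;';
const url = "data:text/javascript," + encodeURIComponent(source);

import(url).then((mod) => {
  // The importing side names exactly what it depends on: mod.greet.
  console.log(mod.greet("modules")); // "hello, modules"
});

// In a real app you would import from a file instead:
//   import { greet } from "./greet.js";
```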

Since modules are so useful, there have been multiple attempts to add module functionality to JavaScript. Today there are two module systems that are actively being used. CommonJS (CJS) is what Node.js has used historically. ESM (EcmaScript modules) is a newer system which has been added to the JavaScript specification. Browsers already support ES modules, and Node is adding support.

Let’s take an in-depth look at how this new module system works.

How ES modules work

When you’re developing with modules, you build up a graph of dependencies. The connections between different dependencies come from any import statements that you use.

These import statements are how the browser or Node knows exactly what code it needs to load. You give it a file to use as an entry point to the graph. From there it just follows any of the import statements to find the rest of the code.

A module with two dependencies. The top module is the entry. The other two are related using import statements

But files themselves aren’t something that the browser can use. It needs to parse all of these files to turn them into data structures called Module Records. That way, it actually knows what’s going on in the file.

A module record with various fields, including RequestedModules and ImportEntries

After that, the module record needs to be turned into a module instance. An instance combines two things: the code and state.

The code is basically a set of instructions. It’s like a recipe for how to make something. But by itself, you can’t use the code to do anything. You need raw materials to use with those instructions.

What is state? State gives you those raw materials. State is the actual values of the variables at any point in time. Of course, these variables are just nicknames for the boxes in memory that hold the values.

So the module instance combines the code (the list of instructions) with the state (all the variables’ values).

A module instance combining code and state

What we need is a module instance for each module. The process of module loading is going from this entry point file to having a full graph of module instances.

For ES modules, this happens in three steps.

  1. Construction — find, download, and parse all of the files into module records.
  2. Instantiation — find boxes in memory to place all of the exported values in (but don’t fill them in with values yet). Then make both exports and imports point to those boxes in memory. This is called linking.
  3. Evaluation — run the code to fill in the boxes with the variables’ actual values.

The three phases. Construction goes from a single JS file to multiple module records. Instantiation links those records. Evaluation executes the code.

People talk about ES modules being asynchronous. You can think about it as asynchronous because the work is split into these three different phases — loading, instantiating, and evaluating — and those phases can be done separately.

This means the spec does introduce a kind of asynchrony that wasn’t there in CommonJS. I’ll explain more later, but in CJS a module and the dependencies below it are loaded, instantiated, and evaluated all at once, without any breaks in between.

However, the steps themselves are not necessarily asynchronous. They can be done in a synchronous way. It depends on what’s doing the loading. That’s because not everything is controlled by the ES module spec. There are actually two halves of the work, which are covered by different specs.

The ES module spec says how you should parse files into module records, and how you should instantiate and evaluate that module. However, it doesn’t say how to get the files in the first place.

It’s the loader that fetches the files. And the loader is specified in a different specification. For browsers, that spec is the HTML spec. But you can have different loaders based on what platform you are using.

Two cartoon figures. One represents the spec that says how to load modules (i.e., the HTML spec). The other represents the ES module spec.

The loader also controls exactly how the modules are loaded. It calls the ES module methods — ParseModule, Module.Instantiate, and Module.Evaluate. It’s kind of like a puppeteer controlling the JS engine’s strings.

The loader figure acting as a puppeteer to the ES module spec figure.

Now let’s walk through each step in more detail.

Construction

Three things happen for each module during the Construction phase.

  1. Figure out where to download the file containing the module from (aka module resolution)
  2. Fetch the file (by downloading it from a URL or loading it from the file system)
  3. Parse the file into a module record

Finding the file and fetching it

The loader will take care of finding the file and downloading it. First it needs to find the entry point file. In HTML, you tell the loader where to find it by using a script tag.

A script tag with the type=module attribute and a src URL. The src URL has a file coming from it which is the entry

But how does it find the next bunch of modules — the modules that main.js directly depends on?

This is where import statements come in. One part of the import statement is called the module specifier. It tells the loader where it can find each next module.

An import statement with the URL at the end labeled as the module specifier

One thing to note about module specifiers: they sometimes need to be handled differently between browsers and Node. Each host has its own way of interpreting the module specifier strings. To do this, it uses something called a module resolution algorithm, which differs between platforms. Currently, some module specifiers that work in Node won’t work in the browser, but there is ongoing work to fix this.
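For illustration (hypothetical URLs and package names), the kinds of specifiers involved look like this:

```javascript
// Browsers currently accept only URL-shaped specifiers:
import { pad } from "https://example.com/lib/pad.js"; // absolute URL
import { trim } from "./strings.js";                  // relative URL

// Node also resolves "bare" specifiers through node_modules,
// which browsers don't yet know how to interpret:
import _ from "lodash";
```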

Until that’s fixed, browsers only accept URLs as module specifiers. They will load the module file from that URL. But that doesn’t happen for the whole graph at the same time. You don’t know what dependencies the module needs you to fetch until you’ve parsed the file… and you can’t parse the file until you’ve fetched it.

This means that we have to go through the tree layer-by-layer, parsing one file, then figuring out its dependencies, and then finding and loading those dependencies.

A diagram that shows one file being fetched and then parsed, and then two more files being fetched and then parsed

If the main thread were to wait for each of these files to download, a lot of other tasks would pile up in its queue.

That’s because when you’re working in a browser, the downloading part takes a long time.


A chart of latencies showing that if a CPU cycle took 1 second, then main memory access would take 6 minutes, and fetching a file from a server across the US would take 4 years. (Based on this chart.)

Blocking the main thread like this would make an app that uses modules too slow to use. This is one of the reasons that the ES module spec splits the algorithm into multiple phases. Splitting out construction into its own phase allows browsers to fetch files and build up their understanding of the module graph before getting down to the synchronous work of instantiating.

This approach—having the algorithm split up into phases—is one of the key differences between ES modules and CommonJS modules.

CommonJS can do things differently because loading files from the filesystem takes much less time than downloading across the Internet. This means Node can block the main thread while it loads the file. And since the file is already loaded, it makes sense to just instantiate and evaluate (which aren’t separate phases in CommonJS). This also means that you’re walking down the whole tree, loading, instantiating, and evaluating any dependencies before you return the module instance.

A diagram showing a Node module evaluating up to a require statement, and then Node going to synchronously load and evaluate the module and any of its dependencies

The CommonJS approach has a few implications, and I will explain more about those later. But one thing that it means is that in Node with CommonJS modules, you can use variables in your module specifier. You are executing all of the code in this module (up to the require statement) before you look for the next module. That means the variable will have a value when you go to do module resolution.

But with ES modules, you’re building up this whole module graph beforehand… before you do any evaluation. This means you can’t have variables in your module specifiers, because those variables don’t have values yet.

A require statement which uses a variable is fine. An import statement that uses a variable is not.

But sometimes it is really useful to use variables for module paths. For example, you might want to switch which module you load depending on what the code is doing or what environment it is running in.

To make this possible for ES modules, there’s a proposal called dynamic import. With it, you can use an import statement like import(`${path}/foo.js`).

The way this works is that any file loaded using import() is handled as the entry point to a separate graph. The dynamically imported module starts a new graph, which is processed separately.

Two module graphs with a dependency between them, labeled with a dynamic import statement
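A sketch of the dynamic form, using a Node builtin as the computed target so the snippet can run anywhere Node is available (in a browser the target would be a URL built from a variable):

```javascript
// import() is an expression, so its argument can be computed at run time.
const which = "node:os";
import(which).then((os) => {
  console.log(typeof os.platform); // "function"
});
```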

One thing to note, though — any module that is in both of these graphs is going to share a module instance. This is because the loader caches module instances. For each module in a particular global scope, there will only be one module instance.

This means less work for the engine. For example, it means that the module file will only be fetched once even if multiple modules depend on it. (That’s one reason to cache modules. We’ll see another in the evaluation section.)

The loader manages this cache using something called a module map. Each global keeps track of its modules in a separate module map.

When the loader goes to fetch a URL, it puts that URL in the module map and makes a note that it’s currently fetching the file. Then it will send out the request and move on to start fetching the next file.

The loader figure filling in a Module Map chart, with the URL of the main module on the left and the word fetching being filled in on the right

What happens if another module depends on the same file? The loader will look up each URL in the module map. If it sees fetching in there, it will just move on to the next URL.

But the module map doesn’t just keep track of what files are being fetched. The module map also serves as a cache for the modules, as we’ll see next.
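The caching behavior can be sketched with a toy module map (an illustration of the idea, not the HTML spec’s actual algorithm): one entry per URL, so each module is fetched and parsed at most once.

```javascript
const moduleMap = new Map();

function load(url) {
  if (moduleMap.has(url)) {
    return moduleMap.get(url); // already fetching (or fetched) — reuse it
  }
  const record = { url, status: "fetching" }; // placeholder while in flight
  moduleMap.set(url, record);
  // ...fetching and parsing would happen here; afterwards:
  record.status = "parsed"; // the entry now holds the module record
  return record;
}

const a = load("/counter.js");
const b = load("/counter.js"); // a second importer gets the same record
console.log(a === b);          // true
console.log(moduleMap.size);   // 1
```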

Parsing

Now that we have fetched this file, we need to parse it into a module record. This helps the browser understand what the different parts of the module are.

Diagram showing main.js file being parsed into a module record

Once the module record is created, it is placed in the module map. This means that whenever it’s requested from here on out, the loader can pull it from that map.

The “fetching” placeholders in the module map chart being filled in with module records

There is one detail in parsing that may seem trivial, but that actually has pretty big implications. All modules are parsed as if they had "use strict" at the top. There are also other slight differences. For example, the keyword await is reserved in a module’s top-level code, and the value of this is undefined.

This different way of parsing is called a “parse goal”. If you parse the same file but use different goals, you’ll end up with different results. So you want to know before you start parsing what kind of file you’re parsing — whether it’s a module or not.

In browsers this is pretty easy. You just put type="module" on the script tag. This tells the browser that this file should be parsed as a module. And since only modules can be imported, the browser knows that any imports are modules, too.

The loader determining that main.js is a module because the type attribute on the script tag says so, and counter.js must be a module because it’s imported

But in Node, you don’t use HTML tags, so you don’t have the option of using a type attribute. One way the community has tried to solve this is by using an .mjs extension. Using that extension tells Node, “this file is a module”. You’ll see people talking about this as the signal for the parse goal. The discussion is currently ongoing, so it’s unclear what signal the Node community will decide to use in the end.

Either way, the loader will determine whether to parse the file as a module or not. If it is a module and there are imports, it will then start the process over again until all of the files are fetched and parsed.

And we’re done! At the end of the loading process, you’ve gone from having just an entry point file to having a bunch of module records.

A JS file on the left, with 3 parsed module records on the right as a result of the construction phase

The next step is to instantiate this module and link all of the instances together.

Instantiation

Like I mentioned before, an instance combines code with state. That state lives in memory, so the instantiation step is all about wiring things up to memory.

First, the JS engine creates a module environment record. This manages the variables for the module record. Then it finds boxes in memory for all of the exports. The module environment record will keep track of which box in memory is associated with each export.

These boxes in memory won’t get their values yet. It’s only after evaluation that their actual values will be filled in. There is one caveat to this rule: any exported function declarations are initialized during this phase. This makes things easier for evaluation.

To instantiate the module graph, the engine will do what’s called a depth first post-order traversal. This means it will go down to the bottom of the graph — to the dependencies at the bottom that don’t depend on anything else — and set up their exports.

A column of empty memory in the middle. Module environment records for the count and display modules are wired up to boxes in memory.

The engine finishes wiring up all of the exports below a module — all of the exports that the module depends on. Then it comes back up a level to wire up the imports from that module.

Note that both the export and the import point to the same location in memory. Wiring up the exports first guarantees that all of the imports can be connected to matching exports.

Same diagram as above, but with the module environment record for main.js now having its imports linked up to the exports from the other two modules.

This is different from CommonJS modules. In CommonJS, the entire export object is copied on export. This means that any values (like numbers) that are exported are copies.

This means that if the exporting module changes that value later, the importing module doesn’t see that change.

Memory in the middle with an exporting common JS module pointing to one memory location, then the value being copied to another and the importing JS module pointing to the new location

In contrast, ES modules use something called live bindings. Both modules point to the same location in memory. This means that when the exporting module changes a value, that change will show up in the importing module.

Modules that export values can change those values at any time, but importing modules cannot change the values of their imports. That being said, if a module imports an object, it can change property values that are on that object.

The exporting module changing the value in memory. The importing module also tries but fails.
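A sketch of live bindings (hypothetical `counter.js`/`main.js`; the two fragments are separate files):

```javascript
// counter.js — exports a binding, not a snapshot of the current value
export let count = 0;
export function increment() {
  count += 1; // the exporting module may reassign its own export
}

// main.js
import { count, increment } from "./counter.js";
increment();
console.log(count); // 1 — the import sees the change (live binding)
// count = 2;       // TypeError: an importing module can't reassign an import
```

An imported object’s properties, by contrast, can still be mutated through the import; only the binding itself is read-only.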

The reason to have live bindings like this is then you can wire up all of the modules without running any code. This helps with evaluation when you have cyclic dependencies, as I’ll explain below.

So at the end of this step, we have all of the instances and the memory locations for the exported/imported variables wired up.

Now we can start evaluating the code and filling in those memory locations with their values.

Evaluation

The final step is filling in these boxes in memory. The JS engine does this by executing the top-level code — the code that is outside of functions.

Besides just filling in these boxes in memory, evaluating the code can also trigger side effects. For example, a module might make a call to a server.

A module will code outside of functions, labeled top level code

Because of the potential for side effects, you only want to evaluate the module once. As opposed to the linking that happens in instantiation, which can be done multiple times with exactly the same result, evaluation can have different results depending on how many times you do it.

This is one reason to have the module map. The module map caches the module by canonical URL so that there is only one module record for each module. That ensures each module is only executed once. Just as with instantiation, this is done as a depth first post-order traversal.

What about those cycles that we talked about before?

In a cyclic dependency, you end up having a loop in the graph. Usually, this is a long loop. But to explain the problem, I’m going to use a contrived example with a short loop.

A complex module graph with a 4 module cycle on the left. A simple 2 module cycle on the right.

Let’s look at how this would work with CommonJS modules. First, the main module would execute up to the require statement. Then it would go to load the counter module.

A commonJS module, with a variable being exported from main.js after a require statement to counter.js, which depends on that import

The counter module would then try to access message from the export object. But since this hasn’t been evaluated in the main module yet, this will return undefined. The JS engine will allocate space in memory for the local variable and set the value to undefined.

Memory in the middle with no connection between main.js and memory, but an importing link from counter.js to a memory location which has undefined

Evaluation continues down to the end of the counter module’s top level code. We want to see whether we’ll get the correct value for message eventually (after main.js is evaluated), so we set up a timeout. Then evaluation resumes on main.js.

counter.js returning control to main.js, which finishes evaluating

The message variable will be initialized and added to memory. But since there’s no connection between the two, it will stay undefined in the required module.

main.js getting its export connection to memory and filling in the correct value, but counter.js still pointing to the other memory location with undefined in it

If the export were handled using live bindings, the counter module would see the correct value eventually. By the time the timeout runs, main.js’s evaluation would have completed and filled in the value.
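The cycle described above, sketched as two CommonJS files (hypothetical names; not a single runnable file):

```javascript
// main.js — a two-module cycle: main -> counter -> main
const counter = require("./counter.js"); // evaluates counter.js right here
module.exports.message = "Hello";        // set only after counter.js finished

// counter.js
const { message } = require("./main.js"); // main.js hasn't run this far yet,
console.log(message);                     // so this logs undefined
setTimeout(() => {
  console.log(message); // still undefined — the value was copied, not linked
}, 10);
// With ES modules, `message` would be a live binding, and the
// timeout callback would see "Hello".
```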

Supporting these cycles is a big rationale behind the design of ES modules. It’s this three-phase design that makes them possible.

What’s the status of ES modules?

With the release of Firefox 60 in early May, all major browsers will support ES modules by default. Node is also adding support, with a working group dedicated to figuring out compatibility issues between CommonJS and ES modules.

This means that you’ll be able to use the script tag with type=module, and use imports and exports. However, more module features are yet to come. The dynamic import proposal is at Stage 3 in the specification process, as is import.meta which will help support Node.js use cases, and the module resolution proposal will also help smooth over differences between browsers and Node.js. So you can expect working with modules to get even better in the future.


Thank you to everyone who gave feedback on this post, or whose writing or discussions informed it, including Axel Rauschmayer, Bradley Farias, Dave Herman, Domenic Denicola, Havi Hoffman, Jason Weathersby, JF Bastien, Jon Coppeard, Luke Wagner, Myles Borins, Till Schneidereit, Tobias Koppers, and Yehuda Katz, as well as the members of the WebAssembly community group, the Node modules working group, and TC39.


Botond Ballo: Trip Report: C++ Standards Meeting in Jacksonville, March 2018

Mozilla planet - wo, 28/03/2018 - 16:00
Summary / TL;DR

| Project | What’s in it? | Status |
|---------|---------------|--------|
| C++17 | See list | Published! |
| C++20 | See below | On track |
| Library Fundamentals TS v2 | source code information capture and various utilities | Published! Parts of it merged into C++17 |
| Concepts TS | Constrained templates | Merged into C++20 with some modifications |
| Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Sent out for PDTS ballot |
| Transactional Memory TS | Transaction support | Published! Not headed towards C++20 |
| Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way |
| Executors | Abstraction for where/how code runs in a concurrent context | Reached design consensus. Ship vehicle not decided yet. |
| Concurrency TS v2 | See below | Under development. Depends on Executors. |
| Networking TS | Sockets library based on Boost.ASIO | Publication imminent |
| Ranges TS | Range-based algorithms and views | Published! |
| Coroutines TS | Resumable functions, based on Microsoft’s await design | Published! |
| Modules TS | A component system to supersede the textual header file inclusion model | Voted for publication! |
| Numerics TS | Various numerical facilities | Under active development |
| Graphics TS | 2D drawing API | Under design review; some controversy |
| Reflection TS | Code introspection and (later) reification mechanisms | Initial working draft containing introspection proposal passed wording review |
| Contracts | Preconditions, postconditions, and assertions | Proposal under wording review, targeting C++20 |

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of April 2, 2018). If you encounter such a link, please check back in a few days.


A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Jacksonville, Florida. This was the first committee meeting in 2018; you can find my reports on 2017’s meetings here (February 2017, Kona), here (July 2017, Toronto), and here (November 2017, Albuquerque). These reports, particularly the Albuquerque one, provide useful context for this post.

With the final C++17 International Standard (IS) having been officially published, this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight.

C++17

As mentioned, C++17 has been officially published, around the end of last year. The official published version can be purchased from ISO’s website; a draft whose technical content is identical is available free of charge here.

See here for a list of new language and library features in C++17.

The latest versions of GCC and Clang both have complete support for C++17, modulo bugs. MSVC has significant partial support, but full support is still a work in progress.

C++20

C++20 is under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Toronto and Albuquerque reports.

Technical Specifications

In addition to the C++ International Standard, the committee publishes Technical Specifications (TS) which can be thought of as experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

The committee recently published four TSes – Coroutines, Ranges, Networking, and most recently, Modules – and several more are in progress.

Modules TS

The last meeting ended with the Modules TS close to being ready for a publication vote, but not quite there yet, as the Core Working Group (CWG) was still in the process of reviewing resolutions to comments sent in by national standards bodies in response to the PDTS (“Proposed Draft TS”) ballot. Determined not to leave the resolution of the matter to this meeting, CWG met via teleconference on four different occasions in between meetings to finish the review process. Their efforts were successful; in particular, I believe that the issues that I described in my last report as causing serious implementer concerns (e.g. the “views of types” issue) have been resolved. The revised document was voted for publication a few weeks before this meeting (also by teleconference).

That allowed the time during this meeting to be spent discussing design issues that were explicitly deferred until after the TS’s publication. I summarize that technical discussion below.

Parallelism TS v2

The Parallelism TS v2 has picked up one last major feature: data-parallel vector types and operations, also referred to as “SIMD”. With that in place, Parallelism TS was sent out for its PDTS ballot.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet) is continuing to take shape. There’s a helpful paper that summarizes its proposed contents and organization.

A notable component of the Concurrency TS v2 that I didn’t mention in my last report is a revised version of future::then() (the original version appeared in the Concurrency TS v1, but there was consensus against moving forward with it in that form). This, however, depends on Executors, which will be published independently of the Concurrency TS v2, either in C++20 or a TS of its own.

Library Fundamentals TS v3

The Library Fundamentals TS is a sort of a grab-bag TS for library proposals that are not large enough to get their own TS (like Networking did), but experimental enough not to go directly into the IS. It’s now on its third iteration, with v1 and significant components of v2 having merged into the IS.

No new features have been voted into v3 yet, but an initial working draft has been prepared, basically by taking v2 and removing the parts of it that have merged into C++17 (including optional and string_view); the resulting draft will be open to accept new proposals at future meetings (I believe mdspan (a multi-dimensional array view) and expected<T> (similar to Rust’s Result<T>) are headed that way).

Reflection TS

After much anticipation, the Reflection TS is now an official project, with its initial working draft based on the latest version of the reflexpr static introspection proposal. I believe the extensions for static reflection of functions are targeting this TS as well.

It’s important to note that the Reflection TS is not the end of the road for reflection in C++; further improvements, including a value-based (as opposed to type-based) interface for reflection, and metaclasses, are being explored (I write more about these below).

Future Technical Specifications

There are some planned future Technical Specifications that don’t have an official project or working draft yet:

Graphics TS

The proposal for a Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, continues to be under discussion in the Library Evolution Working Group (LEWG).

At this meeting, the proposal has encountered some controversy. A library like this is unlikely to be used for high-performance production use cases like games and browsers; the target market is more people teaching and learning C++, and non-performance-intensive GUI applications. Some people consider that to be a poor use of committee time (it was observed that a large proposal like this would tie up the Library Working Group for one or two full meetings’ worth of wording review). On the other hand, the proposal’s authors have been “strung along” by the committee for a couple of years now, and have invested significant time into polishing the proposal to be standards-quality.

The committee plans to hold an evening session at the next meeting to decide the future of the proposal.

Executors


Executors are a important concurrency abstraction for which the committee has been trying to hash out a suitable design for a long time. There is finally consensus on a design (see the proposal and accompanying design paper), and the Concurrency Study Group had been planning to publish it in its own Technical Specification.

Meanwhile, it became apparent that several other proposals depend on executors, including Networking (which isn’t integrated with executors in its TS form, but people would like it to be prior to merging it into the IS), the planned improvements to future, and new execution policies for parallel algorithms. Coroutines doesn’t necessarily have a dependency, but there are still integration opportunities.

As a result, the Concurrency Study Group is eyeing the possibility of getting executors directly into C++20 (instead of going through a TS), to unblock dependent proposals sooner.

Merging Technical Specifications into C++20

After a TS has been published and has garnered enough implementation and use experience that the committee is confident enough to officially standardize its contents, it can be merged into the standard. This happened with e.g. the Filesystems and Parallelism TSes in C++17, and significant parts of the Concepts TS in C++20.

As the committee has a growing list of published-but-not-yet-merged TSes, there was naturally some discussion of which of these would be merged into C++20.

Coroutines TS

The Coroutines TS was proposed for merger into C++20 at this meeting. There was some pushback from adopters who tried it out and brought up several concerns (these concerns were subsequently responded to).

We had a lively discussion about this in the Evolution Working Group (EWG). I summarize the technical points below, but the procedural outcome was that those advocating for significant design changes will have until the next meeting to bring forward a concrete proposal for such changes, or else “forever hold their peace”.

Some felt that such a “deadline” is a bit heavy-handed, and I tend to agree with that. While there certainly needs to be a limit on how long we wait for hypothetical future proposals that improve on a design, the Coroutines TS was just published in November 2017; I don’t think it’s unreasonable to ask that implementers and users be given more than a few months to properly evaluate it and formulate high-quality proposals to improve it if appropriate.

Ranges TS

The Ranges TS modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators).

Its merge into the IS is planned to happen in two parts: first, the foundational Concepts that a large spectrum of future library proposals may want to make use of, and then the range-based algorithms and utilities themselves. The purpose of the split is to allow the first part to merge into the C++20 working draft as soon as possible, thereby unblocking proposals that wish to use the foundational Concepts.

The first part is targeting C++20 pretty firmly; the second part is still somewhat up in the air, with technical concerns relating to what namespace the new algorithms will go into (there was previously talk of a std2 namespace to serve as a place to house new-and-improved standard library facilities, but that has since been scrapped) and how they will relate to the existing algorithms; however, the authors are still optimistic that the second half can make C++20 as well.

Networking TS

There is a lot of desire to merge the Networking TS into C++20, but the dependence on executors makes that timeline challenging. As a best case scenario, it’s possible that executors go into C++20 fairly soon, and there is time to subsequently merge the Networking TS into C++20 as well. However, that schedule can easily slip to C++23 if the standardization of executors runs into a delay, or if the Concurrency Study Group chooses to go the TS route with executors.

The remaining parts of the Concepts TS

The Concepts TS was merged into the C++20 working draft in Toronto, but without the controversial abbreviated function templates (AFTs) feature (and some related things).

I mentioned that there was still a lot of demand for AFTs, even if there was no consensus for them in their Concepts TS form, and that alternative AFT proposals targeting C++20 would be forthcoming. Several such proposals were brought forward at this meeting; I discuss them below. While there wasn’t final agreement on any of them at this meeting, there was consensus on a direction, and there is relative optimism about being able to get AFTs in some form into C++20.

What about Modules?

The Modules TS was just published a few weeks ago, so talk of merging it into the C++ IS is a bit premature. Nonetheless, it’s a feature that people really want, and soon, and so there was a lot of informal discussion about the possibility of such a merge.

There were numerous proposals for post-TS design changes to Modules brought forward at this meeting; I summarize the EWG discussion below. On the whole, I think the design discussions were quite productive. It certainly helped that the Modules TS is now published, and design concerns could no longer be postponed as “we’ll deal with this post-TS”.

I think it’s too early to speculate about the prospects of getting Modules into C++20, but there seems to be a potential path forward, which I describe below as well.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

Unless otherwise indicated, proposals discussed here are targeting C++20. I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • A couple of minor tweaks to the Coroutines TS: symmetric coroutine transfer, and parameter preview for coroutine promise constructor.
  • Clarifications about the behaviour of contract checks that modify observable (e.g. global) state. The outcome was that evaluating such a contract check constitutes undefined behaviour.
  • Class types in non-type template parameters. This is a long-desired feature, with an example use case being format strings checked at compile-time, and one of the few remaining gaps in the language where user-defined types don’t have all the powers of built-in types. The feature had been blocked on the issue of how to determine the equivalence of two non-type template parameters of class type (which is needed to be able to establish the equivalence of template specializations). Default comparisons finally provided a way forward here; class types used as non-type template parameters need to have a defaulted operator<=> (as do their members).
  • Static reflection of functions. This is an extension to the reflexpr proposal to allow reflecting over functions. You can’t reflect over an overload set; rather, reflexpr can accept a function call expression as an argument, perform overload resolution (without evaluating the call), and reflect the chosen overload. This is targeting the Reflection TS, not C++20.
  • Standard containers and constexpr. This proposal aims to allow the use of dynamic allocation in a constexpr context, so as to make e.g. std::vector usable by constexpr functions. This is accomplished by allowing destructors to be constexpr, and allowing new-expressions and std::allocator to be used in a constexpr context. (The latter is necessary because something like std::vector, which maintains a partially initialized dynamic allocation, can’t be implemented using new-expressions alone. operator new itself isn’t supported, because it loses information about the type of the allocated storage; std::allocator::allocate(), which preserves such information, needs to be used instead.) The proposal as currently formulated does not allow dynamic allocations to “survive” beyond constant expression evaluation; there will be a future extension to allow this, where “surviving” allocations will be promoted to static or automatic storage duration as appropriate.
  • char8_t: a type for UTF-8 characters and strings. This is a combined core language + library proposal; the language parts include introducing a new char8_t type, and changing the behaviour of u8 character and string literals to use that type. The latter changes are breaking, though the expected breakage is fairly slight, especially for u8 character literals which are new in C++17 and not heavily used yet.

    Discussion of this proposal centered around the big-picture plan of how UTF-8 adoption will work, and whether we could simply work towards char itself implying a UTF-8 encoding. Several people argued that that’s unlikely to happen, due to large amounts of legacy code that don’t treat char as UTF-8, and due to the special role of char as an “aliasing” type (where an array of char is allowed to serve as the underlying storage for objects of other types) which prevents compilers from optimizing uses of char the way they could optimize char8_t (which, importantly, would be a non-aliasing type).

    In the end, EWG gave the green-light to the direction outlined in the paper. (There was a brief discussion of pursuing this as a TS, but there was no consensus for this, in part because people felt that if we’re going to change the meaning of u8 literals, we might as well do it now before the C++17 meaning gets a lot of adoption.)
  • explicit(bool). This allows constructors to be declared as “conditionally explicit”, based on a compile-time condition. This is mostly useful for wrapper types like pair or optional, where we want their constructors to be explicit iff the constructors of their wrapped types are.
  • Checking for abstract class types. This tweaks the rules regarding when attempted use of an abstract type as a complete object is diagnosed, to avoid situations where a class definition retroactively makes a previously declared function that uses the type ill-formed.

There were also a few proposals that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week; I already mentioned those in the C++20 section above.

Finally, EWG decided to pull the previously-approved proposal to allow string literals in non-type template parameters, because the more general facility to allow class types in non-type template parameters (which was just approved) is a good enough replacement. (This is a change from the last meeting, when it seemed like we would want both.) The main difference is that you now have to wrap your character array into a struct (think fixed_string or similar), and use that as your template parameter type. (The user-defined literal part of P0424 is still going forward, with a corresponding adjustment to the allowed template parameter types.)

Proposals for which further work is encouraged:

  • C++ stability, velocity, and deployment plans. This is a proposal for a Standing Document (SD; a less-official-than-a-standard committee document, typically with procedural rather than technical content) outlining the procedure by which breaking changes can be made to C++. It classifies breaking changes by level of detectability (e.g. statically detectable and causes a compiler error, statically detectable but doesn’t cause a compiler error, not statically detectable), and issues guidance for whether and how changes in each category can be made. EWG encouraged the authors to come back with specific wording for the proposed SD.
  • Standard library compatibility promises. This is another proposal for a Standing Document, outlining what compatibility promises the C++ standard library makes to its users, and what kinds of future changes it reserves the right to make. (As an example, the committee reserves the right to add new overloads to standard library functions. This may break user code that tries to take the address of a standard library function, and we want to make it clear that such breakage is par for the course; if you want a guarantee that your code will compile without modifications in future standards, you can only call standard library functions, not take their address.)
  • LEWG wishlist for EWG. This is a wishlist of core language issues that the Library Evolution Working Group would like to see addressed to solve problems facing library authors and users. Some of the items included reining in overeager ADL (see below for a proposal to do just that), making it easier to avoid lifetime errors, dealing with ABI breakage, and finding alternatives for the remaining use cases of macros. EWG encouraged future proposals in these areas, or discussion papers that advance our understanding of the problem (for example, a survey of macro use cases that don’t have non-macro alternatives).
  • Extending the offsetof macro to allow computing the offset to a member given a pointer-to-member variable (currently it requires being given the member’s name). EWG thought this was a valid use case, but expressed a preference for a different syntax rather than overloading the offsetof macro.
  • Various proposed extensions to the Modules TS, which I talk about below.
  • Towards consistency between <=> and other comparison operators. The background to this proposal is that when the <=> operator was introduced, there were a few cases where the specified behaviour was a departure from the corresponding behaviour for the existing two-way comparison operators. These were cases where we would have liked to change the behaviour for the existing operators, but couldn’t due to backwards compatibility considerations. <=>, however, being new to the language, had no such backwards compatibility considerations, so the authors specified the more-desirable behaviour for it. The downside is that this introduced inconsistencies between <=> and the two-way comparison operators.

    This proposal aims to resolve those inconsistencies, in some cases by changing the behaviour of the two-way operators after all. There were five specific areas of change:

    • Sign safety. Today, -1 < 1u evaluates to false due to sign conversion, which is not the mathematically correct result. -1 <=> 1u, on the other hand, is a compiler error. EWG decided that both should in fact work and give the mathematically correct result (which for -1 < 1u is a breaking change, though in practice it’s likely to fix many more bugs than it introduces), though whether this will happen in C++20, or after a longer transition period, remains to be decided.
    • Enum safety. Today, C++ allows two-way comparisons between enumerators of distinct enumeration types, and between enumerators and floating-point values. Such comparisons with <=> are ill-formed. EWG felt they should be made ill-formed for two-way comparisons as well, though again this may happen by first deprecating them in C++20, and only actually making them ill-formed in a future standard. (Comparisons between enumerators and integer values are common and useful, and will be permitted for all comparison operators.)
    • Array safety. Two-way comparisons between operands of array type will be deprecated.
    • Null safety. This is just a tweak to make <=> between a pointer and nullptr return strong_equality rather than strong_ordering.
    • Function pointer safety. EWG expressed a preference for allowing all comparisons between function pointers, and requiring implementers to impose a total order on them. Some implementers indicated they need to investigate the implementability of this on some architectures and report back.
  • Chaining comparisons. This proposes making chains of comparisons, such as a == b == c or a < b <= c, have their expected mathematical meaning (which is currently expressed in C++ in a more cumbersome way, e.g. a == b && b == c). This is a breaking change, since such expressions currently have a meaning (evaluate the first comparison, use its boolean result as the value for the second comparison, and so on). It’s been proposed before, but EWG was worried about the silent breaking change. Now, the authors have surveyed a large body of open-source code, and found zero instances of such expressions where the intended meaning was the current meaning, but several instances where the intended meaning was the proposed meaning (and which would therefore be silently fixed by this proposal). Importantly, comparison chains are only allowed if the comparisons in the chain are either all ==, all < and <=, or all > and >=; other chains like a < b > c are not allowed, unlike, e.g., in Python. In the original proposal, such “disallowed” chains would have retained their current meaning, but EWG asked that they be made ill-formed instead, to avoid confusion. The proposal also contained a provision to have folds over comparisons (e.g. a < ..., where a is a function parameter pack) expand to a chained comparison, but EWG chose to defer that part of the proposal until more implementation experience can be gathered.
  • Size feedback in operator new. This proposes overloads of operator new that return how much memory was allocated (which may be more than what was asked for), so the caller can make use of the entire allocation. EWG agreed with the use case, but had some concerns about the explosion of operator new overloads (each new variation that’s added doubles the number of overloads; with this proposal, it would be 8), and the complications around having the new overloads return a structure rather than void*, and asked the authors to come back after exploring the design space a bit more.
  • The assume_aligned attribute. The motivation is to allow authors to signal to the compiler that a variable holds a value with a particular alignment at a given point in time, for purposes such as more efficient vectorization. The alignment is a property of the variable’s value at a point in time, not of the variable itself (e.g. you can subsequently increment the pointer and it will no longer have that alignment). EWG liked the idea but felt that the proposed semantics about where the attribute could apply (for example, that it could apply to parameter variables but not local variables) were confusing. Suggested alternatives included a magic library function (which would more clearly apply at the time it’s called), and something you can place into a contract check.
  • Fixing ADL. This is a resurrection of a proposal that’s more than a decade old, to fix argument-dependent lookup (ADL). ADL often irks people because it’s too eager, and often finds overloads in other namespaces that you didn’t intend. This proposal to fix it was originally brought forward in 2005, but was deferred at the time because the committee was behind in shipping C++0x (which became C++11); it finally came back now. It aims to make two changes to ADL:
    • Narrow the rules for what makes a namespace an associated namespace for the purpose of ADL. The current rules are very broad; in particular, it includes not only the namespaces of the arguments of a function call, but the namespaces of the template parameters of the arguments, which is responsible for a lot of unintended matches. The proposal would axe the template parameters rule.
    • Even if a function is found in an associated namespace, only consider it a match if it has a parameter matching the argument that caused the namespace to be associated, in the relevant position.

    This is a scary change, because it has the potential to break a lot of code. EWG’s main feedback was that the authors should try implementing it, and test some large codebases to understand the scope of breakage. There were also some concerns about how the second change would interact with Concepts (and constrained templates in general). The proposal will come back for further review.

  • A proposed language-level mitigation for Spectre variant 1, which I talk about below.
  • Allow initializing aggregates from a parenthesized list of values. This aims to solve a long-standing issue where e.g. vector::emplace() didn’t work with aggregate types, because the implementation of emplace() would do new T(args...), while aggregates required new T{args...}. A library solution was previously proposed for this, but the library groups were unhappy with it because it felt like a workaround for a language deficiency, and it would have had to be applied everywhere in the library where it was a problem (with vector::emplace() being just one example). This proposal fixes the deficiency at the language level. EWG generally liked the idea, though there was also a suggestion that a related problem with aggregate initialization (deleted constructors not preventing it) be solved at the same time. There was also a suggestion that the proposal only apply in dependent contexts (since in non-dependent contexts, you know what kind of initialization you need to use), but that was shot down.
  • Signed integers are two’s complement. The standard currently allows various representations for signed integers, but two’s complement is the only one used in practice, on all modern architectures; this proposal aims to standardize on that, allowing code to portably rely on the representation (and e.g. benefit from hardware capabilities like an arithmetic right shift). EWG was supportive of the idea, but expressed a preference for touching base with WG14 (the C standards committee) to make sure they’re on board with this change. (The original version of this proposal would also have defined the overflow behavior for signed integers as wrapping; this part was rejected in other subgroups and never made it to EWG.)
  • Not a proposal, but the Core Working Group asked EWG whether non-template functions should be allowed to be constrained (with a requires-clause). There are some use cases for this, such as having multiple implementations of a function conditioned on some compile-time condition (e.g. platform, architecture, etc.). However, this would entail some specification work, as the current rules governing overloading of constrained functions assume they are templates, and don’t easily carry over to non-templates. EWG opted not to allow them until someone writes a paper giving sufficient motivation.

Rejected proposals:

  • Supporting offsetof for all classes. offsetof is currently only guaranteed to work for standard-layout classes, but there are some use cases for it related to memory-mapped IO, serialization, and similar low-level things, that require it to work for some classes that aren’t standard-layout. EWG reiterated the feedback it gave on the previous proposal on this topic: to expand the definition of standard-layout to include the desired types. EWG was disinclined to allow offsetof for all classes, including ones with virtual bases, as proposed in this paper; it was felt that this more general goal could be accomplished with a future reflection-based facility.
  • Structured bindings with polymorphic lambdas. This would have allowed a structured binding declaration (e.g. auto [a, b]) as a function parameter, with the semantics that it binds to a single argument (the composite object), and is decomposed into the named constituents on the callee side. EWG sympathized with the goal, but had a number of concerns including visual ambiguity with array declarators, and encouraging the use of templates (and particularly under-constrained templates, until structured bindings are extended to allow a concept in place of auto) where otherwise you might use a non-template.
  • Structured binding declaration as a condition. This would have allowed a condition like if (auto [a, b] = f()), where the condition evaluates to the composite object returned by f() (assuming that object is already usable as a condition, e.g. by having a conversion operator to bool). EWG felt that the semantics weren’t obvious (in particular, people might think one of the decomposed variables is used as the condition). There were also unanswered questions like, in the case of a composite object that uses get<>() calls to access the decomposed variables, whether those calls happen before or after the call to the conversion operator. It was pointed out that you can already use a structured binding in a condition if you use the “if with initializer” form added in C++17, e.g. if (auto [result, ok] = f(); ok), and this is preferable because it makes clear what the condition is. (Some people even expressed a desire for deprecating the declaration-as-condition form altogether, although there was also opposition to that.)

No significant meeting of software engineers in the past few months has gone without discussion of Spectre, and this standards meeting was no exception.

Google brought forward a proposal for a language-level mitigation for variant #1 of Spectre (which, unlike variant #2, has no currently known hardware-level mitigation). The proposal allows programmers to harden specific branches against speculation, like so:

  if [[protect_from_speculation(args...)]] (predicate) {
    // use args
  }
args... here is a comma-separated list of one or more variables that are in scope. The semantics is that, if predicate is false, any speculative execution inside the if block treats each of the args as zero. This protects against the exploit, which involves using side channels to recover information accessed inside (misspeculated execution of) the branch at a location that depends on args.

The described semantics can be implemented in assembly; see this llvm-dev post for a description of the implementation approach.

For performance reasons, the proposed hardening is opt-in (as opposed to “harden all branches this way”, although compilers can certainly offer that as an option for non-performance-critical programs), and only as aggressive as it needs to be (as opposed to “disable speculation entirely for this branch”).

The language-level syntax to opt a branch into the hardening remains to be nailed down; the attribute syntax depicted above is one possibility. One complication is that if statements are not the only language constructs that compile down to branches; there are others, including some subtler ones like virtual function dispatch. The chosen syntax should be flexible enough to allow hardening all relevant constructs.

In terms of standardizing this feature, one roadblock is that the C++ standard defines the behavior of programs in terms of an abstract machine, and the semantics of the proposed hardening concern lower-level notions that cannot be described in such terms. As the committee is unlikely to reinvent the C++ abstract machine to allow reasoning about such things as speculative execution in normative wording, it may end up being the case that the syntax of the language construct is described normatively, while its semantics is described non-normatively.

This proposal will return to EWG in a more concrete form at the next meeting. As portably mitigating Spectre is a rather urgent desire in the C++ community, there was some talk of somehow standardizing this feature “out of band” rather than waiting for C++20, though it wasn’t clear what that might look like.


EWG had an evening session to discuss proposals related to Concepts, particularly abbreviated function templates (AFTs).

To recap, AFTs are function templates declared without a template parameter list, with concept names used instead of type names in the signature. An example is void sort(Sortable& s);, which is a shorthand for template <Sortable __S> void sort(__S& s);. Such use of a concept name in place of a type name is called a constrained-type-specifier. In addition to parameter types, the Concepts TS allowed constrained-type-specifiers in return types (where the meaning was “the function’s return type is deduced, but also has to model this concept”), and in variable declarations (where the meaning was “the variable’s type is deduced, as if declared with auto, but also has to model this concept”).

constrained-type-specifiers did not make it into C++20 when the rest of the Concepts TS was merged, mostly because there were concerns that you can’t tell apart an AFT from a non-template function without knowing whether the identifiers that appear in the parameter list name types or concepts.

Four proposals were presented at this evening session, which aimed to get AFTs and/or other forms of constrained-type-specifiers into C++20 in some form.

I’ll also mention that the use of a concept name inside a template parameter list, such as template <Sortable S> (which is itself a shorthand for template <typename S> requires Sortable<S>), is called a constrained-parameter. constrained-parameters have been merged into the C++20 working draft, but some of the proposals wanted to make modifications to them as well, for consistency.

Three of the discussed proposals took the approach of inventing a new syntax for constrained-type-specifiers (and in some cases constrained-parameters) that wasn’t just an identifier, thus syntactically distinguishing AFTs from non-template functions.

  • Concept-constrained auto proposed the syntax auto<Sortable>. The proposal as written concerned variable declarations only, but one could envision extending this to other uses of constrained-type-specifiers.
  • An adjective syntax for concepts proposed Sortable typename S as an alternative syntax for constrained-parameters, with a possible future extension of Sortable auto x for constrained-type-specifiers. The idea is that the concept name is tacked, like an adjective, onto the beginning of what you’d write without concepts.
  • Concepts in-place syntax proposed Sortable{S} for constrained-parameters, and Sortable{S} s for constrained-type-specifiers (where S would be an additional identifier the declaration introduces, that names the concrete type deduced for the parameter/variable). You could also write Sortable{} s if you didn’t want/need to name the type. One explicit design goal of this proposal was that if, in the future, the committee changes its mind about AFTs needing to be syntactically distinguishable from non-template functions (because we get more comfortable with them, or are happy to rely more on tooling to tell them apart), the empty braces could be dropped altogether, and we’d arrive precisely at the Concepts TS syntax.

An additional idea that was floated, though it didn’t have a paper, was to just use the Concepts TS syntax, but add a single syntactic marker, such as a bare template keyword before the function declaration (as opposed to per-parameter syntactic markers, as in the above proposals).

Of these ideas, Sortable{S} had the strongest support, with “Concepts TS syntax + single syntactic marker” coming a close second. The proponents of these ideas indicated that they will try to collaborate on a revised proposal that can hopefully gain consensus among the entire group.

The fourth paper that was discussed attacked the problem from a different angle: it proposed adopting AFTs into C++20 without any special syntactic marker, but also changing the way name lookup works inside them, to more closely resemble the way name lookup works inside non-template functions. The idea was that, perhaps if the semantics of AFTs are made more similar to non-template functions (name lookup is one of the most prominent semantic differences between template and non-template code), then we don’t need to syntactically distinguish them. The proponents of having a syntactic marker did not find this a convincing argument for adopting AFTs without one, but it was observed that the proposed name lookup change might be interesting to explore independently. At the same time, others pointed out similarities between the proposed name lookup rules and C++0x concepts, and warned that going down this road would lead to C++0x lookup rules (which were found to be unworkable).

(As an aside, one topic that seems to have been settled without much discussion was the question of independent resolution vs. consistent resolution; that is, if you have two uses of the same concept in an AFT (as in void foo(Number, Number);), are they required to be the same concrete type (“consistent”), or two potentially different types that both model the concept (“independent”). The Concepts TS has consistent resolution, but many people prefer independent resolution. I co-authored a paper arguing for independent resolution a while back; that sentiment was subsequently reinforced by another paper, and also in a section of the Sortable{S} proposal. Somewhat to my amusement, the topic was never actually formally discussed and voted on; the idea of independent resolution just seemed to slowly, over time, win people over, such that by this meeting, it was kind of treated as a done deal, that any AFT proposal going into C++20 will, in fact, have independent resolution.)


As mentioned above, EWG had a discussion about merging the Coroutines TS into C++20.

The main pushback was due to a set of concerns described in this paper (see also this response paper). The concerns fell into three broad categories:

  • Performance concerns. As currently specified, coroutines perform a dynamic allocation to store the state that needs to be saved in between suspensions. The dynamic allocation can be optimized away in many cases, but it was argued that for some use cases, you want to avoid the dynamic allocation by construction, without relying on your optimizer. An analogy can be made to std::vector: sure, compilers can sometimes optimize the dynamic allocation it performs to be a stack allocation, but we still have stack arrays in the language to guarantee stack allocation.

    One particularly interesting use case that motivates this performance guarantee, is using coroutines to implement a form of error handling similar to Rust’s try! macro / ? operator. The general idea is to hook the coroutine customization points for a type like expected<T> (the proposed C++ analogue of Rust’s Result), such that co_await e where e has type expected<T> functions like try!(e) would in Rust (see the paper for details). However, no one would contemplate using such an error handling mechanism if it didn’t come with a guarantee of not introducing a dynamic allocation.
  • Safety concerns. The issue here is that reference parameters to a coroutine may become dangling after the coroutine is suspended and resumed. There is a desire to change the syntax of coroutines to make this hazard more obvious.
  • Syntax concerns. There are several minor syntactic concerns related to the choice of keywords (co_await, co_yield, and co_return), having to use co_return instead of plain return, and the precedence of the co_await operator. There is a suggestion to address these by replacing co_await with a punctuation-based syntax, with both prefix and postfix forms for better composition (compare having both * and -> operators for pointer dereferencing).

The paper authors plan to bring forward a set of modifications to the Coroutines TS that address these concerns. I believe the general idea is to change the syntax in such a way that you can explicitly access / name the object storing the coroutine state. You can then control whether it’s allocated on the stack or the heap, depending on your use case (e.g. passing it across a translation unit boundary would require allocating it on the heap, similar to other compiler-generated objects like lambdas).

EWG expressed interest in seeing the proposed improvements, while also expressing a strong preference for keeping coroutines on track to be merged into C++20.


EWG spent an entire day on Modules. With the Modules TS done, the focus was on post-TS (“Modules v2”) proposals.

  • Changing the term “module interface”. This paper argued that “module interface” was a misnomer because a module interface unit can contain declarations which are not exported, and therefore not conceptually part of the module’s interface. No functional change was proposed. EWG’s reaction was “don’t care”.
  • Modules: dependent ADL. The current name lookup rules in the Modules TS have the consequence that argument-dependent lookup can find some non-exported functions that are declared in a module interface unit. This proposal argued this was surprising, and suggested tightening the rules. EWG was favourable, and asked the author to come back with a specific proposal.
  • Modules: context-sensitive keyword. This proposed making module a context-sensitive keyword rather than a hard keyword, to avoid breaking existing code that uses module as an identifier. The general approach was that if a use of module could legally be a module declaration, it is treated as one; otherwise it’s an identifier. EWG disliked this direction, because the necessary disambiguation rules were too confusing (e.g. two declarations that were only subtly different could differ in whether module was interpreted as a keyword or an identifier). It was suggested that instead an “escape mechanism” be introduced for identifiers, where you could “decorate” an identifier as something like __identifier(module) or @module to keep it an identifier. It was also pointed out that adopting relevant parts of the “Another take on modules” proposal (see below) would make this problem moot by restricting the location of module declarations to a file’s “preamble”.
  • Unqualified using declarations. This proposed allowing export using name;, where name is unqualified, as a means of exporting an existing name (such as a name from an included legacy header). EWG encouraged exploration of a mechanism for exporting existing names, but wasn’t sure this would be the right mechanism.
  • Identifying module source code. This requires that any module unit either start with a module declaration, or with module; (which “announces” that this is a module unit, with a module declaration to follow). The latter form is necessary in cases where the module wants to include legacy headers, which usually can’t be included in the module’s purview. This direction was previously approved by EWG, and this presentation was just a rubber-stamp.
  • Improvement suggestions to the Modules TS. This paper made several minor improvement suggestions.
    • Determining whether an importing translation unit sees an exported type as complete or incomplete, based on whether it was complete or incomplete at the end of the module interface unit, rather than at the point of export. This was approved.
    • Exporting the declaration of an inline function should not implicitly export the definition as well. There was no consensus for this change.
    • Allow exporting declarations that don’t introduce names; an example is a static_assert declaration. Exporting such a declaration has no effect; the motivation here is to allow enclosing a group of declarations in export { ... }, without having to take care to move such declarations out of the block. This was approved for static_assert only; EWG felt that for certain other declarations that don’t introduce names, such as using-directives, allowing them to be exported might be misleading.
    • A tweak to the treatment of private members of exported types. Rejected because private members can be accessed via reflection.

That brings us to what I view as the most significant Modules-related proposal we discussed: Another take on modules (or “Atom” for short). This is a proposal from Google based on their deployment experience with Clang’s implementation of Modules; it’s a successor to previous proposals like this one. It aims to make several changes – some major, some minor – to the Modules TS; I won’t go through all of them here, but they include changes to name lookup and visibility rules, support for module partitions, and introducing the notion of a “module preamble”, a section at the top of a module file that must contain all module and import declarations. The most significant change, however, is support for modularized legacy headers. Modularized legacy headers are legacy (non-modular) headers included in a module, not via #include as in the Modules TS, but via import (as in import "file" or import <file>). The semantics are that, instead of textually including the header contents as you would with an #include, you process them as an isolated translation unit, produce a module interface artefact as if it were a module (with all declarations exported, I assume), and then process the import as if it were an actual module import.

Modularized legacy headers are primarily a transition mechanism for incrementally modularizing a codebase. The proposal authors claim that without them, you can’t benefit from compile-time improvements of Modules in a codebase (and in fact, you can take a compile time hit!) unless you bottom-up modularize the entire codebase (down to the standard library and runtime library headers), which is viewed as infeasible for many large production codebases.

Importantly, modularized legacy headers also offer a way forward in the impasse about whether Modules should support exporting macros. In the Atom proposal, modularized legacy headers do export the macros they define, but real modules do not. (There is an independent proposal to allow real modules to selectively export specific macros, but for transition purposes, that’s not critical, since for components that have macros as part of their interface, you can just use them as a modularized legacy header.)

There was some discussion of whether the Atom proposal is different enough from the Modules TS that it would make sense to pursue it as a separate (competing) TS, or if we should try to integrate the proposed changes into the Modules TS itself. The second approach had the stronger consensus, and the authors plan to come back with a specific proposed diff against the Modules TS.

It’s too early to speculate about the impact of pursuing these changes on the schedule for shipping Modules (such as whether it can be merged into C++20). However, one possible shipping strategy might be as follows (disclaimer: this is my understanding of a potential plan based on private conversation, not a plan that was approved by or even presented to EWG):

  • Modules v1 is the currently shipping Modules TS. It is not forward-compatible with v2 or v3.
  • Modules v2 would be a modified version of v1 that would not yet support modularized legacy headers, but would be forward-compatible with v3. Targeting C++20.
  • Modules v3 would support modularized legacy headers. Targeting post-C++20, possibly a second iteration of the Modules TS.

Such a way forward, if it becomes a reality, would seem to satisfy the concerns of many stakeholders. We would ship something in the C++20 IS, and people who are able to bottom-up modularize their codebases can start doing so, without fear of further breaking changes to Modules. Others who need the power of modularized legacy headers can wait until Modules v3 to get it.

I’m pretty happy with the progress made on Modules at this meeting. With the Atom proposal having been discussed and positively received, I’m more optimistic about the feature than I have been for the past few meetings!

Papers not discussed

With the meeting being fairly heavily focused on large proposals like Concepts, Modules, and Coroutines, there were a number of others that EWG didn’t get a chance to look at. I won’t list them all (see the pre-meeting mailing for a list), but I’ll call out two of them: feature-test macros are finally on the formal standards track, and there’s a revised attempt to tackle named arguments in C++ that’s sufficiently different from previous attempts that I think it at least might not be rejected out of hand. I look forward to having these, and the other proposals on the backlog, discussed at the next meeting.

Other Working Groups

Library Groups

Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where various proposals are in the processing queue.

I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above.

A few proposals targeting Technical Specifications also passed wording review and were merged into the relevant TS working drafts:

The following proposals are still undergoing wording review:

The following proposals have passed design review and await wording review at future meetings:

The following proposals are still undergoing design review:

In addition, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals.

Finally, I’ll mention that the Library Evolution Working Group had a joint evening session with SG 14 (Low Latency Programming) to discuss possible new standard library containers in C++20. Candidates included a fixed capacity vector, a vector with a small object optimization, ring buffer, colony, and slot map; the first three had the greatest support.

Study Groups

SG 6 (Numerics)

SG 6 met for a day, and reviewed a number of numerics-related proposals. In addition to the “signed integers are two’s complement” proposal that later came to EWG, it looked at several library proposals. Math constants, constexpr for <cmath> and <cstdlib>, letting strong_order truly be a customization point, and interpolation were forwarded to LEWG (in some cases with modifications). More better operators and floating point value access for std::ratio remain under discussion. Safe integral comparisons have been made moot by operator<=> (the proposal was “abducted by spaceship”).

SG 7 (Compile-Time Programming)

SG 7, the Compile-Time Programming (previously Reflection) Study Group, met for an evening session and reviewed three papers.

The first, called constexpr reflexpr, was an exploration of what the reflexpr static introspection proposal might look like formulated in terms of value-based constexpr programming, rather than template metaprogramming. SG 7 previously indicated that this is the direction they would like reflection proposals to take in the longer term. The paper was reviewed favourably, with encouragement to do further work in this direction. One change that was requested was to make the API value-based rather than pointer based. Some implementers pointed out that unreflexpr, the operator that takes a meta-object and reifies it into the entity it represents, may need to be split into multiple operators for parsing purposes (since the compiler needs to know at parsing time whether the reified entity is a value, a type, or a template, but the meta-object passed as argument may be dependent in a template context). Finally, some felt that the constexpr for facility proposed in the paper (which bears some resemblance to the previously-proposed tuple-based for loop) may be worth pursuing independently.

The second was a discussion paper called “What do we want to do with reflection?” It outlines several basic / frequently requested reflection use cases, and calls for facilities that address these use cases to be added to C++20. SG 7 observed that one such facility, source code information capture, is already shipping in the Library Fundamentals TS v2, and could plausibly be merged into C++20, but for the rest, a Reflection TS published in the 2019-2020 timeframe is probably the best we can do.

The third was an updated version of the metaclasses proposal. To recap, metaclasses are compile-time transformations that can be applied to a class definition, producing a transformed class (and possibly other things like helper classes / functions). At the last meeting, SG 7 discussed how a metaclass should be defined, and decided on it operating at the “value level” (where the input and output types are represented as meta-objects, and the metaclass itself is more or less just a constexpr function). At this meeting, SG 7 focused on the invocation syntax: how you apply a metaclass to your class. The syntax that appeared to have the greatest consensus was class<interface> Foo { ... }; (where interface is an example metaclass name).

SG 15 (Tooling)

This week was the inaugural meeting of the new Tooling Study Group (SG 15), also in an evening session.

Unsurprisingly, the meeting was well attended, and the people there had many, many different ideas for how C++ tooling could be improved, ranging from IDEs, through refactoring and code analysis tools, to build systems and package managers. Much of the meeting was spent trawling through this large idea space to try to narrow down and focus the group’s scope and mission.

One topic of discussion was, what is the best representation of code for tools to consume? Some argued that the source code itself is the only sufficiently general and powerful representation, while others were of the opinion that a more structured, easy-to-consume representation would be useful, e.g. because it would avoid every tool that consumes it being (or containing / invoking) a C++ parser. It was pointed out that the “binary module interface” representation that module files compile into may be a good representation for tools to consume, and we may want to standardize it. Others felt that instead of standardizing the representation, we should standardize an API for accessing it.

In the space of build systems and package managers, the group recognized that building “one build system” or “one package manager” to rule them all is unlikely to happen. Rather, a productive direction to focus efforts might be some sort of protocol that any build or package system can hook into, and produce some sort of metadata that different tools can consume. Clang implementers pointed out that compilation databases are a primitive form of this, but obviously there’s a lot of room for improvement.

In the end, the group articulated a mission: that in 10 years’ time, it would like the C++ community to be in a state where a “compiler-informed” (meaning, semantic-level) code analysis tool can run on a significant fraction of open-source C++ code out there. This implies having some sort of metadata format (that tells the tool “here’s how you run on this codebase”) that a significant enough fraction of open-source projects support. One concrete use case for this would be the author of a C++ proposal that’s a breaking change, to run a query on open-source projects to see how much breakage the change would cause; but of course the value of such infrastructure / tooling goes far beyond this use case.

It’s a fair question to ask what the committee’s role is in all this. After all, the committee’s job is to standardize the language and its libraries, and not peripheral things like build tools and metadata formats. Even the binary module interface format mentioned above couldn’t really be part of the standard’s normative wording. However, a format / representation / API could conceivably be published in the form of a Standing Document. Beyond that, the Study Group can serve as a place to coordinate development and specification efforts for various peripheral tools. Finally, the Standard C++ Foundation (a nonprofit consortium that contributes to the funding of some committee meetings) could play a role in funding critical tooling projects.

New Study Group: SG 16 (Unicode)

The committee has decided to form a new study group for Unicode and Text Handling. This group will take ownership of proposals such as std::text and std::text_view (types for representing text that know their encoding and expose functions that operate at the level of code points and grapheme clusters), and other proposals related to text handling. The first meeting of this study group is expected to take place at a subsequent committee meeting this year.


I think this was a productive meeting with good progress made on many fronts. For me, the highlights of the meeting included:

  • Tackling important questions about Modules, such as how to transition large existing codebases, and what to do about macros.
  • C++20 gaining foundational Concepts for its standard library, with the rest of the Ranges TS hopefully following soon.
  • C++20 gaining a standard calendar and timezone library.
  • An earnest design discussion about Coroutines, which may see an improved design brought forward at the next meeting.

The next meeting of the Committee will be in Rapperswil, Switzerland, the week of June 4th, 2018. Stay tuned for my report!

Other Trip Reports

Some other trip reports about this meeting include Vittorio Romeo’s, Guy Davidson’s (who’s a coauthor of the 2D graphics proposal, and gives some more details about its presentation), Bryce Lelbach’s, Timur Doumler’s, Ben Craig’s, and Daniel Garcia’s. I encourage you to check them out as well!

Categorieën: Mozilla-nl planet

Firefox Test Pilot: Min Vid Graduation Report

Mozilla planet - wo, 28/03/2018 - 15:01

Min Vid is becoming a Shield experiment, and will remain installed for current users for as long as possible while we explore implementing the feature natively in Firefox. We have no set timeline for this work yet, but will continue to provide updates on this blog.

<figcaption>The Min Vid UI</figcaption>

We launched the Min Vid experiment in Test Pilot in the Fall of 2016. Min Vid created a pop-out video player that let participants play videos in a small, standalone window that would sit on top of any other content on the screen.

Min Vid has been a success in Test Pilot, both in terms of usage, and in terms of what we learned in the process of building it. From the start, the feature proved extremely popular with our audience. It’s consistently been our most installed experiment since Page Shot left Test Pilot to become Firefox Screenshots.

At the same time, developing Min Vid was challenging for our team. In order to ship the feature quickly, we relied on older add-on code that first proved unstable and finicky, and later became completely unusable. The bugs that cropped up in Min Vid were sometimes hard to reproduce, sometimes hard to diagnose, and quite often required significant engineering effort to fix. Beyond this, the engineering, design, and product management efforts required for taking Screenshots to market in Firefox while maintaining a pipeline of new experiments meant that Min Vid often had less attention than it deserved given its popularity in Test Pilot.

In this post, I’ll discuss what Min Vid was, what we learned from the experience as a team, and what will happen next for the experiment. Unlike most graduation posts, I won’t dive into metrics here. I’ll save that for a later post after we test Min Vid with more Firefox users.

How it went down

Min Vid is the brainchild of Test Pilot engineer Dave Justice. Dave’s original pitch was something like: “I want to be able to play software development videos from YouTube in the corner of my text editor while I’m writing code.” The idea had instant appeal for members of our team.

Prior user research suggested that secondary media consumption — watching YouTube, Twitch or a baseball game in a separate browser window while working on a distinct primary task — is a common use-case among Firefox users. It also seemed like an idea that would be easily understandable to users given that picture-in-picture video windows are a long time staple on mobile operating systems and televisions.

<figcaption>Probably the first Min Vid concept sketch</figcaption>

The initial proof of concept for Min Vid was actually rather simple to write. At the time, Firefox included an extremely permissive framework for building add-ons called the add-on SDK, which was slated for demise in favor of the new, more stable, better documented, more performance friendly (but less permissive) WebExtensions framework. While we knew the SDK would disappear in future Firefox releases, we built the first Min Vid release using the older framework because of the speed and flexibility it afforded at the time. Taking this approach allowed us to ship a lot faster, but it ultimately led to a lot of down-the-road refactoring work.

The first version of the player worked by registering a little button over YouTube video players and video preview thumbnails. Clicking this button would pop the player out into a standalone window that allowed users to drag, minimize, scrub playback, and adjust video volume. We used YouTube’s public API to send videos into an iframe embedded in the Min Vid player window.

<figcaption>We put the Min Vid control in the upper left corner of videos</figcaption>

While we launched with just YouTube support, we went on to add several other features into Min Vid, most notably history and queue features. We also added support for Min Vid in Vimeo and SoundCloud. Because we relied on public APIs from these sites to pass media content into the Min Vid player, each new integration required a bespoke engineering effort.

<figcaption>Min Vid’s queue and history features</figcaption>

What we learned

The Test Pilot program was pretty young and still a little inchoate when we launched Min Vid. From a personal perspective, I was still transitioning from my role as Test Pilot’s Design Lead to Product Manager when we launched, and I struggled with limiting the timeline and scope of the project.

Nevertheless, the experiment found its audience. That said, there are some things I wish we’d done differently for the sake of the experiment, its users, and our team.

Set goals from the start and then use them to limit work later

From the beginning, we identified some very simple success goals for Min Vid in Test Pilot: that it should show a steady retention curve, and that it should make Firefox more sticky for those users in the experiment cohort.

The problem was that at the time, we didn’t quite understand that setting goals like this can save time by limiting the eventual scope of work on a project. Instead, we got a small case of wouldn’t-it-be-cool-if-itis about new features in Min Vid.

Take for example our Vimeo and SoundCloud integrations. Seemingly pretty cool things to do? Well, this graph shows where Min Vid sessions originated last month.


Basically, these features got zero traction whatsoever. In hindsight, we should have been more aggressive about trying to move Min Vid out of Test Pilot and into the hands of test audiences in release Firefox before we started adding features to the experimental project. It would have saved us time and shaved off a lot of complexity in our codebase.

You’ll always underestimate how much refactoring costs

As I mentioned above, we originally wrote Min Vid as an SDK add-on. Over the last few months we’ve migrated it to the new WebExtensions framework (which, disclaimer, we’ve had to supplement with bootstrap code).

This kind of work — changing a codebase without changing functionality — is called refactoring, and it happens a lot in software development: libraries of code expire, frameworks become deprecated, code maintainers move on to new jobs, and so on.

While refactoring is a part of life in software development, it’s also one of the most difficult tasks in terms of estimating time and cost. There are always going to be unknown obstacles faced while migrating code from one paradigm to another: things don’t work as promised, unexpected errors arise, tasks that seemed simple from the outside become more complex in the process.

For Min Vid, the big migration from SDK to WebExtension was just such a slog. Dave made it happen — heroically — but it took a ton of work. We might have been better served holding the experiment for a while and letting Min Vid ship later rather than writing the whole thing twice.

Focus on fewer things, and do them better together

This may be the biggest lesson of the first two years of Test Pilot. During this time, we shipped two major Firefox features, built an experiments platform, shipped more than a dozen large experiments, ran design sprints, and a university design course to boot. Our team has simply done too many things.

This hit me pretty squarely over the summer as I was deep in implementation details for Screenshots while managing the launch of Send, helping with priorities and metrics dashboards for Notes, and writing style code for Voice Fill at the same time.

In some ways, Min Vid has been a casualty of our team’s overeager schedule. With Screenshots’ launch imminent last summer we had to delay design, engineering, and product management tasks on Min Vid for long stretches of time. We simply did not have the resourcing.

The fact that Min Vid stayed successful is an absolute testament to the hard work of Dave Justice who dove into one complex coding task after another, often with very little direct help from anyone on our team.

As Min Vid leaves Test Pilot and moves on to the next round of testing on Firefox users, the Test Pilot team is busy prepping the next wave of experiments with this lesson in mind. We’ll be shipping fewer new projects, but we will be significantly more attentive to each.

What happens next

Our users love Min Vid and our team loves Min Vid, but to do the feature right would mean closer consideration of core browser components like windowing and media playback. Because of this, taking the feature from Test Pilot to release Firefox in a maintainable, performant manner will require a complete rewrite of the project’s codebase.

The next step toward figuring out if the engineering effort will be worthwhile is to ship a version of Min Vid to a small population of Firefox users through the Shield experiments platform. For our upcoming Shield experiment we’ve built a pared-down Min Vid experience by removing the queue feature, limiting the experience to YouTube, and reducing the Telemetry we’re collecting to a bare-minimum of event collections. We’ll also display a small on-boarding pop-up for one of our experimental cohorts to inform users about the feature. In our Shield experiment, we want to learn whether Min Vid usage is covariant with increased Firefox usage, retention, or user-reported satisfaction.

Depending on the outcome of this experiment, we will work with the Firefox Product and Platform teams to define specific goals and use cases for a Firefox version of Min Vid. We’ll continue to provide updates on this blog.

Thanks for everyone who participated in this experiment. And thanks in particular to Dave Justice for keeping it alive for so long!

Min Vid Graduation Report was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


Ryan Harter: Don't make me code in your text box!

Mozilla planet - wo, 28/03/2018 - 09:00

Whenever I start a new data project, my first step is rooting out any false assumptions I have about the data.

The key here is iterating quickly. My workflow looks like this: Code a little, plot the data, what do you see? Ah, outliers. Code a little, plot the data …


Andy McKay: Leaving Mozilla

Mozilla planet - wo, 28/03/2018 - 09:00

Today is my last day at Mozilla as a paid employee. Seven and a half years at Mozilla has been a heck of ride. I feel lucky and honoured to have had such an awesome opportunity.

In terms projects I've gone from AMO, through the Firefox OS Marketplace, through Marketplace Payments, then back to AMO and WebExtensions. Those last couple of years, as we rebooted the add-ons ecosystem, was probably my proudest moment professionally.

But more importantly I've met so many great people here. I was so pleasantly surprised to find my journey at Mozilla produce so many good friends. These are smart, caring, dedicated professionals who make a difference to Mozilla, Firefox and the open web. I think Mozilla is in great hands and I'll miss every single one of them.

What's next? GitHub.


Mozilla Thunderbird: We’re Hiring a Build Engineer

Mozilla planet - wo, 28/03/2018 - 03:04

We at the Thunderbird project are hiring a Build and Release Engineer. Interested in getting paid to work on Thunderbird? You’ll find information about the role, as well as how to apply, below!

Thunderbird Build & Release Engineer

About Thunderbird
Thunderbird is an email client depended on daily by 25 million people on three platforms: Windows, Mac and Linux (and other *nix). It was developed under the Mozilla Corporation until 2014, when development was handed over to the community. The Mozilla Foundation is now the fiscal home of Thunderbird. The Thunderbird Council, which leads the community effort, has begun hiring contractors through Mozilla in support of this venture and to guarantee that all vital services are provided in a reliable fashion.

You will join the team that is leading Thunderbird into a bright future. As a build engineer you will be serving the community, empowering them to make their contributions available to over 25 million people.

The Thunderbird team works openly using public bug trackers and repositories, providing you with a premier chance to show your work to the world.

About the Contract
The Mozilla Thunderbird project is looking to hire a build and release engineer to help maintain Thunderbird. You’ll be expected to work with community volunteers, the Thunderbird Council, and other employees to maintain and improve the Thunderbird build and release process.

This is a remote, hourly 6-month contract (with the possibility of continuing). Hours will be up to 40 a week. You will be expected to have excellent written communication skills and coordinate your work over email, IRC, and Bugzilla.

As a build & release engineer for Thunderbird you will

  • Maintain and improve the Thunderbird build system to ensure that both nightly builds and releases are always possible.
  • Finalise the migration of Thunderbird’s continuous integration/deployment (CI/CD) service from Buildbot to TaskCluster.
  • Procure and maintain build infrastructure in tandem with Thunderbird’s infrastructure engineer (who is currently focused on web-based services).
  • Work with both volunteers and employees across the world to fix build issues.
  • Follow improvements made by Mozilla engineers for the Firefox build & release process and implement those for Thunderbird.
  • Collaborate with QA, Security, Localization, and Engineering for coordinated code releases for “release” builds (known as ESR) and beta builds.

Your Previous Experience

  • Have experience using build systems (preferably make).
  • Have experience setting up a continuous integration service.
  • Have solid scripting knowledge (shell, Python).
  • Experience with Buildbot and TaskCluster is highly desirable.
  • Have experience using distributed version control systems (preferably Mercurial, Git would be acceptable).
  • Some development background with Python and C is highly preferred.
  • Experience building and releasing cross-platform applications is a plus.
  • B.S. in Computer Science would be lovely, but real-world experience is preferred.

Next Steps
If this position sounds like a good fit for you, please send us your resume with a cover letter to

A cover letter is essential to your application, as it shows us how you envision Thunderbird’s technical future. Tell us about why you’re passionate about Thunderbird and this position. Also include samples of your work as a programmer, either directly or as a link. If you contribute to any open source software, or maintain a blog, we’d love to hear about it.

Please note that while the Thunderbird project is a group of individuals separate from the Mozilla Foundation that works to further the Thunderbird email client, the Mozilla Foundation is the Project’s fiscal home. The Thunderbird Council, separate from Mozilla, manages the Project and will direct the software engineer’s work.

The successful applicant will be hired as freelancer (independent contractor) through the Mozilla Foundation’s third-party service Upwork. By applying to this job, you are agreeing to have your applications reviewed by Thunderbird contractors and volunteers who are a part of the hiring committee as well as by staff members of the Mozilla Foundation.

Mozilla values diversity. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Categorieën: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: Bedrock: The SQLitening

Mozilla planet - wo, 28/03/2018 - 02:00

 doesn’t, on its face, look like it’d be a complex application to write, maintain, or run. But when you throw over 100 million unique visitors per week at any site it can complicate things quickly. Add to that translations of the content into over 100 languages and you can start to get the idea of where it might get interesting. So we take every opportunity we can get to simplify and reduce hosting complexity and cost. This is the place from which the idea to switch to using SQLite for our database needs in production was born.

The traditional answer to the question “should we use SQLite for our web application in production?” is an emphatic NO. But, again, bedrock is different. It uses its database as a read-only data store as far as the web application is concerned. We run a single data updater process (per cluster) that does the writing of the updates to the DB server that all of the app instances use. Most of bedrock is static content coded directly into templates, but we use the database to store things like product release notes, security advisories, blog posts, twitter feeds, and the like; basically anything that needs updating more often than we deploy the site. SQLite is indeed a bad solution for a typical web application which is writing and reading data in its normal function because SQLite rightly locks itself to a single writer at a time, and a web app with any traffic almost certainly needs to write more than one thing at a time. But when you only need to read data then SQLite is an incredibly fast and robust solution to data storage.
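To make that read-only contract explicit, an app can open the file through SQLite’s URI syntax with `mode=ro`, so every web process is physically unable to write. Here is a minimal sketch in Python’s standard `sqlite3` module (the file name is hypothetical; bedrock’s real database names embed a git hash, and this is not bedrock’s actual code):

```python
import sqlite3

DB_PATH = "bedrock.db"  # hypothetical name; the real file embeds a git hash

def get_connection(path=DB_PATH):
    # Opening through a file: URI with mode=ro makes the connection
    # read-only at the SQLite level, so any number of app processes
    # can read the same file concurrently without contending for a
    # writer lock.
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```

Any attempt to write through such a connection raises `sqlite3.OperationalError`, which is exactly the guarantee a read-only data store wants.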

Data Updates

The trick with a SQLite store is refreshing the data. We do still need to update all those bits of data I mentioned before. Our solution to this is to keep the aforementioned single process updating the data, but this time it will update a local SQLite file, calculate a hash of said file, and upload the database and its metadata (a JSON file that includes the SHA256 hash) to AWS S3. The Docker containers for the web app will also have a separate process running that will check for a new database file on a schedule (every 5 min or so), compare its metadata to the one currently in use, download the newer database, check its hash against the one from the metadata to ensure a successful download, and swap it with the old file atomically, with the web app none the wiser. Using Python’s os.rename function to swap the database file ensures an atomic switch with zero errors due to a missing DB file. We thought about using symlinks for this, but it turns out to be harder to re-point a symlink than to just do the rename, which atomically overwrites the old file with the new (I’m pretty sure it’s actually just updating the inode to which the name points, but I’ve not verified that).
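The verify-then-rename step can be sketched as follows. This is an illustration of the pattern, not bedrock's actual updater code; the function names, file names, and metadata shape (a dict with a `"sha256"` key) are assumptions:

```python
import hashlib
import os
import tempfile

def file_sha256(path):
    """Hex SHA256 of a file, read in chunks so a large DB isn't loaded into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def swap_in_database(downloaded_path, metadata, live_path):
    """Verify a freshly downloaded DB against its metadata, then swap it in.

    os.rename() atomically replaces live_path (when both are on the same
    filesystem), so readers see either the old file or the new one in full,
    never a partially written database and never a missing one.
    """
    if file_sha256(downloaded_path) != metadata["sha256"]:
        raise ValueError("downloaded database failed hash verification")
    os.rename(downloaded_path, live_path)

# Demo with throwaway files (names are illustrative):
workdir = tempfile.mkdtemp()
downloaded = os.path.join(workdir, "bedrock-abc123.db")
live = os.path.join(workdir, "bedrock.db")
with open(downloaded, "wb") as f:
    f.write(b"pretend sqlite bytes")
metadata = {"sha256": file_sha256(downloaded)}
swap_in_database(downloaded, metadata, live)
```

The hash check guards against truncated or corrupted downloads, and the rename guarantees no request ever observes a half-swapped database file.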

When all of this is working, it means that bedrock no longer requires a database server. We can turn off our AWS RDS instances and never have to worry about DB server maintenance or downtime. The site isn’t all that much faster since, as I said, it’s mostly spending its time rendering Jinja templates, but it is a lot cheaper to run and less likely to go down. We are also making DB schema changes easier and more error-free, since the DB filenames include the git hash of the version of bedrock that created them. This means that the production Docker images contain an updated and migrated database file, and a running instance will only download an update once the same version of the site is the one producing database files.

And production advantages aren’t the only win: the development bootstrap process is also much simpler now, since getting all of the data you need to run the full site is a simple matter of either running bin/ or pulling the prod docker image (mozorg/bedrock:latest), which contains a decently up-to-date database and the machinery to keep it updated, and requires no AWS credentials since the database is publicly available.

Verifying Updates

Along with actually performing the updates in every running instance of the site, we also need to be able to monitor that said updates are actually happening. To this end we created a page on the site that reports when that instance last ran the updates, the git hash of bedrock that is currently running, the git hash used to create the database in use, and how long ago said database was updated. This page will also respond with a 500 code instead of the normal 200 if the DB and L10n update tasks happened too long ago. At the time of writing, the updates happen every 5 minutes, and the page starts to fail after 10 minutes without updates. Since the updates and the site run in separate processes in the Docker container, we need a way for the cron process to communicate the time of the last run of these tasks to the web server. For this we settled on files in /tmp that the cron jobs simply touch, and whose mtime the web server can then read (check out the source code for details).
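The touch-a-marker-file handshake between the cron process and the web server can be sketched like this. The function names and the bare-function shape are illustrative (bedrock's real page is a Django view); only the mechanism, file mtimes compared against a 10-minute threshold, comes from the post:

```python
import os
import tempfile
import time

MAX_AGE = 10 * 60  # seconds; the real page starts failing after 10 minutes

def mark_task_run(marker_path):
    """Called by the cron process after each successful run: touch the marker file."""
    with open(marker_path, "a"):
        pass
    os.utime(marker_path, None)  # bump mtime to "now"

def healthcheck(marker_paths, now=None):
    """Return the HTTP status the health page should serve: 200 if every tracked
    task's marker file was touched recently enough, 500 otherwise."""
    now = time.time() if now is None else now
    for path in marker_paths:
        try:
            age = now - os.path.getmtime(path)
        except OSError:  # marker missing: the task has never run
            return 500
        if age > MAX_AGE:
            return 500
    return 200

# Demo: one marker standing in for the DB-update task.
marker = os.path.join(tempfile.mkdtemp(), "last-db-update")
mark_task_run(marker)
fresh = healthcheck([marker])                           # just touched
stale = healthcheck([marker], now=time.time() + 3600)   # pretend an hour passed
```

Because the marker lives on the filesystem, no sockets or shared memory are needed between the cron and web-server processes in the container.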

To actually monitor this view, we are starting with simply using New Relic Synthetics pings of this URL at each of our clusters (currently oregon-b, tokyo, and frankfurt). This is a bit suboptimal because it only checks whichever pod happens to respond to that particular request. In the near future our plan is to add another process type to bedrock that will query Kubernetes for all of the running pods in the cluster and ping each of them on a schedule. We’ll then ping Dead Man’s Snitch (DMS) on every fully successful round of checks, and if the checks fail more than a couple of times in a cluster, we’ll be notified. This will mean that bedrock is able to monitor itself for data-update troubles. We also ping DMS on every database update run, so we should know quickly if either database uploading or downloading is having trouble.


We obviously don’t yet know the long-term effects and consequences of this change (as of writing it’s been in production less than a day), but for now our operational complexity and costs are lower. I feel confident calling it a win for our deployment reliability for now. Bedrock may eventually move toward having a large part of it pre-generated and hosted statically, but for now this version feels like the one that will be as robust, resilient, and reliable as possible while still being one big Django web application.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Improving the Add-ons Linter

Mozilla planet - di, 27/03/2018 - 21:53

For the last five years, Mozilla has participated in the Outreachy program to provide three-month internships for people from groups traditionally underrepresented in tech. From December 2017 – March 2018, interns Ravneet Kaur and Natalia Milavanova worked under the guidance of Christopher Grebs and Luca Greco to improve the add-ons linter.

The add-ons linter is a tool that warns of potential problems within an extension’s code. Developers can use the web-ext command line tool to run the linter locally to check for potential issues during development, and addons.mozilla.org (AMO) uses it to validate an extension during the submission process.

Ravneet’s internship project was to land a localization feature to the add-ons linter. By offering the option to see errors and warnings in multiple languages, the linter can be more accessible to add-on developers who prefer to work with non-English languages. Ravneet successfully adapted Pontoon’s localization method for the add-ons linter and extracted about 19,000 lines of code in the process.

Outreachy internships are a great way to gain real-world experience working with distributed teams and grow software development skills. “This project was the first time I was introduced to the idea of bundling code using technologies like webpack,” Ravneet says. “After going through its documentation and reading blog posts about it, I was fascinated about the idea of bundling code together and building small, minimalistic projects in production, while having a wide variety of maintainable files and folders in the project’s source.”

Natalia tackled a different challenge: improving the linter’s validation by rewriting a large chunk of the code and test suite to use async functions. For a long time, the linter’s code had been cumbersome to work with. Her refactoring removed approximately 3% of the code. Luca notes, “Thanks to Natalia’s hard work on this project, our ability to debug and handle errors in the add-ons linter has been greatly improved. It’s now easier to read and understand the sources, and prepare for the changes that are coming next.”

Thank you for all of your contributions, Natalia and Ravneet! We look forward to seeing your future accomplishments.

If you’re interested in learning more about Outreachy, please visit their website. While you’re at it, check out our own Outreachy alum Shubheksha Jalan’s blog post about her experience applying to the program.

The post Improving the Add-ons Linter appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Air Mozilla: Exploring Policies and on the ground solutions in the US and across the globe

Mozilla planet - di, 27/03/2018 - 21:00

Exploring Policies and on the ground solutions in the US and across the globe: Amina Fazlullah, Tech Policy Fellow, and Jochai Ben-Avie, Mozilla Senior Global Policy Manager, discuss the latest policies impacting access and digital inclusion in the US...

Categorieën: Mozilla-nl planet