Planet Mozilla
Updated: 1 week 6 days ago

Manish Goregaokar: Rust in 2018

Wed, 10/01/2018 - 01:00

A week ago we put out a call for blog posts for what folks think Rust should do in 2018.

This is mine.

Overall focus

I think 2017 was a great year for Rust. Near the beginning of the year, after custom derive and a bunch of things stabilized, I had a strong feeling that Rust was “complete”. Not really “finished”, there’s still tons of stuff to improve, but this was the first time stable Rust was the language I wanted it to be, and was something I could recommend for most kinds of work without reservations.

I think this is a good signal to wind down the frightening pace of new features Rust has been getting. And that happened! We had the impl period, which took some time to focus on getting things done before proposing new things. And Rust is feeling more polished than ever.

Like Nick, I feel like 2018 should be boring. I feel like we should focus on polishing what we have, implementing all the things, and improving our approachability as a language.

Basically, I want to see this as an extended impl period.

This doesn’t mean I’m looking for a moratorium on RFCs, really. Hell, in the past few days I’ve posted one pre-pre-RFC [1], one pre-RFC, and one RFC (from the pre-RFC). I’m mostly looking for prioritizing impl work over designing new things, but still having some focus on design.


I think Rust still has some “missing bits” which make it hard to justify for some use cases. Rust’s async story is being fleshed out. We don’t yet have stable SIMD or stable inline ASM. The microcontroller story is kinda iffy. RLS/clippy need nightly. I’d like to see these crystallize and stabilize this year.

I think this year we need to continue to take a critical look at Rust’s ergonomics. Last year the ergonomics initiative was really good for Rust, and I’d like to see more of that. This is kind of at odds with my “focus on polishing Rust” statement, but fixing ergonomics is not just new features. It’s also about figuring out barriers in Rust, polishing mental models, improving docs/diagnostics, and in general figuring out how to best present Rust’s features. Starting dialogues about confusing bits of the language and figuring out the best mental model to present them with is something we should continue doing. Sometimes this may need new features, indeed, but not always. We must continue to take a critical look at how our language presents itself to newcomers.


I’d like to see a stronger focus on mentoring. Mentoring on rustc, mentoring on major libraries, mentoring on Rust tooling, mentoring everywhere. This includes not just the mentors, but the associated infrastructure – contribution docs, sites like servo-starters and findwork, and similar tooling.

I’m also hoping for more companies to invest back into Rust. This year Buoyant became pretty well known within the community, and many of their employees are paid to work on various important parts of the Rust ecosystem. There are also multiple consulting groups that contribute to the ecosystem. It’s nice to see that “paid to work on Rust” is no longer limited to Mozilla, and this is crucial for the health of the language. I hope this trend continues.

Finally, I want to see more companies talk about Rust. Success stories are really nice to hear. I’ve heard many amazing success stories this year, but a lot of them are things which can’t be shared.


Last year we started seeing the limits of the RFC process. Large RFCs were stressful for both the RFC authors and participating community members, and rather opaque for newer community members wishing to participate. Alternative models have been discussed; I’d like to see more movement on this front.

I’d also like to grow the moderation team; it is currently rather small and doesn’t have the capacity to handle incidents in a timely fashion.

Docs / Learning

I’d like to see a focus on improving Rust for folks who learn the language by trying things over reading books [2] [3].

This means better diagnostics, better alternative resources like rustbyexample, etc. Improving mentorship helps here as well.

Of course, I’d like to see our normal docs work continue to happen.

I’m overall really excited for 2018. I think we’re doing great on most fronts so far, and if we maintain the momentum we’ll have an even-more-awesome Rust by the end of this year!

  1. This isn’t a “pre rfc” because I’ve written it as a much looser sketch of the problem and a solution.

  2. There is literally no programming language I’ve personally learned through a book or formal teaching. I’ve often read books after I know a language because it’s fun and instructive, but it’s always started out as “learn extreme basics” followed by “look at existing code, tweak stuff, and write your own code”.

  3. Back in my day Rust didn’t have a book, just this tiny thing called “The Tutorial”. grouches incessantly

Categories: Mozilla-nl planet

Robert O'Callahan: On Keeping Secrets

Tue, 09/01/2018 - 23:37

Once upon a time I was at a dinner at a computer science conference. At that time the existence of Chrome was a deeply guarded secret; I knew of it, but I was sworn to secrecy. Out of the blue, one of my dinner companions turned to me and asked "is Google working on a browser?"

This was a terrible dilemma. I could not answer "no" or "I don't know"; Christians mustn't lie. "Yes" would have betrayed my commitment. Refusing to answer would obviously amount to a positive answer, as would any obvious attempt to dodge the question ("hey I think that's Donald Knuth over there!").

I can't remember exactly what I said, but it was something evasive, and I remember feeling it was not satisfactory. I spent a lot of time later thinking about what I should have said, and what I should say or do if a similar situation arises again. Perhaps a good answer would have been: "aren't you asking the wrong person?" Alternatively, go for a high-commitment distraction, perhaps a cleverly triggered app that self-dials a phone call. "You're going into labour? I'll be right there!" (Note: not really, this would also be a deception.) It's worth being prepared.

One thing I really enjoyed about working at Mozilla was that we didn't have many secrets to keep. Most of the secrets I had to protect were about other companies. Minimizing one's secrecy burden generally seems like a good idea, although I can't eliminate it because it's often helpful to other people for them to be able to share secrets with me in confidence.

Update: The situation for Christians has some nuance.


Mark Surman: The internet doesn’t suck

Tue, 09/01/2018 - 20:50

It’s easy to think the internet sucks these days. My day job is defending net neutrality and getting people to care about privacy and the like. From that perch, it more often than not feels like things are getting worse on the internet.

So, I thought I’d share an experience that reminded me that the internet doesn’t suck as much as we might think. In fact, in many moments, the internet still delivers all the wonder and empowerment that made me fall in love with it 25 years ago.

The experience in question: my two sons Facetimed me into their concert in Toronto last week, lovingly adding me to a show that I almost missed.

Photo of band playing music.

A little more context: my eldest son was back from college for Christmas. He and his brother were doing a reunion show with their high school band (listen to them on Spotify). I was happy for them — and grumpy that the show was scheduled for the one night over my son’s holiday visit that I had to be on a work trip. My son felt bad, but the show must go on.

While in Chicago on my trip, I got a text message. “Dad, can you be on Facetime around 9pm central?” Smile. “Yup,” I texted back.

I eagerly waited for the call at the appointed time, but was distracted by a passionate conversation with a colleague about All The Things We Need to Do to Save the Internet. I looked at my phone about 9:20. Gulp. I’d missed two calls. Frown.

I wished the kids well with the concert by text — and headed back to my hotel room. As I kicked back on the bed, the phone rang. I picked it up. There was Tristan. “Hey guys, here’s my dad, from Chicago.” He waved the phone over his head. I saw the audience blur by. They screamed and clapped. I was at the concert!

Tristan then handed the phone to a young woman in the audience. She looked at me quizzically and smiled. Then she pressed the screen to flip to the front camera on the phone. She shakily held me through the last two songs of the show. Which, by the way, was great. That band is tight. Tristan grabbed the camera and waved goodbye. My version of the show was over as quickly as it began.

I felt so good for those 10 minutes. I was so proud and in love with my two sons. So grateful and impressed that Tristan had turned the challenge of me being away into a cool part of his live show schtick. And, so happy — and a bit reflective — about how skilled we’ve started to become as a society that loves and cares for each other using the internet.

At this moment, the internet did not suck. Far from it. Tristan and I each had powerful computers and cameras in our pockets with high speed internet connections. We were easily able to make secure, ad-free, flawless point-to-point television for each other. And, each of us, including the young woman in the audience, knew how to make all this happen on a whim.

Thinking back from my crazy activist camcorder days in the early 1990s, when I also heard the first crack of a modem, it’s hard to believe that we have built the digital world we have. And, thinking about it as a father in 2018, it feels like: this is an awesome way to be a family. This is the internet I wanted — and want more of. It is what drives me to do the work I do.

Yes, of course, there are lots of ways in which the internet sucks more than it used to. And, as the rest of the world (yes, all 7 billion of us) comes online, there will likely be an increasing gap between slower, ad-laden, compromised internet for most people, and a faster, private, secure internet for those who can pay for it. If trends continue, that’s where we are headed.

Still, there are so many ways that the internet does not suck. In fact, often, it enriches us. We need to keep our eyes on the prize: making sure the internet does not suck for as many people as possible for as long as possible. That’s the work we need to be doing. And we should do it not from a place of fear or despair, but from a place of joy.



Mozilla B-Team: happy bmo push day

Tue, 09/01/2018 - 15:56

release tag

the following changes have been pushed to

  • [1428166] Move <meta charset> to start of <head>
  • [1428156] BMO will mark a revision as public if it sees a new one created without a bug id associated with it
  • [1428227] Google doesn’t index any more
  • [1428079] No horizontal scrollbar when bug summary is very long or narrow browser width
  • [1428146] Fix static assets url rewriting rule
  • [1427800] Wrong anchor scrolling with old UI
  • [1428642] Fix minor bugs on new global header
  • [1428641] Implement Requests quick look dropdown on global header
  • [1429060] Fix nagios checker blocker breakage from PR #340

discuss these changes on


This Week In Rust: This Week in Rust 216

Tue, 09/01/2018 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2018

Crate of the Week

This week's crate is artifact, a design documentation tool. Thanks to musicmatze for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

130 pull requests were merged in the last week

New Contributors
  • aheart
  • BurntPizza
  • Johannes Boczek
  • keatinge
  • Sam
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.


Mozilla GFX: Retained Display Lists

Tue, 09/01/2018 - 03:26

Hot on the heels of Off-Main-Thread Painting, our next big Firefox graphics performance project is Retained Display Lists!

If you haven’t already read it, I highly recommend reading David’s post about Off-Main-Thread Painting as it provides a lot of background information on how our painting pipeline works.

Display list building is the process in which we collect the set of high-level items that we want to display on screen (borders, backgrounds, text and many, many more) and then sort it according to the CSS painting rules into the correct back-to-front order. It’s at this point that we figure out which parts of the page are currently visible on-screen.
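As a rough illustration of that idea (a toy sketch, not Gecko’s actual data structures; the item fields and the `z_order` key are invented for the example), collecting the visible items and stable-sorting them yields a back-to-front paint order:

```python
# Toy model of display list building. In real Gecko the items, visibility
# culling, and the CSS painting-rule ordering are far more involved.

def build_display_list(frames):
    """Collect visible items and sort them into back-to-front paint order."""
    items = []
    for frame in frames:
        for item in frame["items"]:
            if item["visible"]:  # cull items not currently on-screen
                items.append(item)
    # A stable sort keeps document order for items at the same level;
    # z_order stands in for the CSS painting rules.
    items.sort(key=lambda item: item["z_order"])
    return items

frames = [
    {"items": [{"name": "text", "z_order": 2, "visible": True},
               {"name": "background", "z_order": 0, "visible": True},
               {"name": "offscreen", "z_order": 1, "visible": False}]},
    {"items": [{"name": "border", "z_order": 1, "visible": True}]},
]
print([i["name"] for i in build_display_list(frames)])
# → ['background', 'border', 'text']
```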

Currently, whenever we want to update what’s on the screen, we build a full new display list from scratch and then use it to paint everything on the screen. This is great for simplicity: we don’t have to worry about figuring out which bits changed or went away. Unfortunately, it can take a really long time. This has always been a problem, but as websites get more complex and users get higher-resolution monitors, the problem has been magnified.

The solution is to retain the display list between paints, only build a new display list for the parts of the page that changed since we last painted and then merge the new list into the old to get an updated list. This adds a lot more complexity, since we need to figure out which items to remove from the old list, and where to insert new items. The upside is that in a lot of cases the new list can be significantly smaller than a full list, and we have the opportunity to save a lot of time.
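That merge step can be sketched as follows. This is a hypothetical simplification, not the real Gecko algorithm (which deals with nested lists, invalidation rectangles, and correct insertion positions): items carry stable keys, the partial rebuild covers only the changed (dirty) parts, and merging reuses unchanged items while replacing, removing, or appending the rest.

```python
# Hypothetical sketch of retained-display-list merging. Each item is a
# (key, payload) pair; `dirty_keys` are the parts of the page that
# changed, and `partial` is the freshly built list for just those parts.

def merge(retained, partial, dirty_keys):
    """Merge a partial rebuild into the retained list.

    Dirty items are replaced by their new versions (or dropped if they
    went away); everything else is reused from the retained list.
    """
    new_by_key = dict(partial)
    merged = []
    for key, payload in retained:
        if key in dirty_keys:
            if key in new_by_key:
                merged.append((key, new_by_key.pop(key)))
            # else: the item no longer exists; drop it.
        else:
            merged.append((key, payload))  # reuse unchanged item
    # Append genuinely new items that had no old counterpart.
    merged.extend((k, v) for k, v in partial if k in new_by_key)
    return merged

retained = [("bg", "blue"), ("text", "hello"), ("border", "1px")]
partial = [("text", "world"), ("shadow", "soft")]
print(merge(retained, partial, dirty_keys={"text", "border", "shadow"}))
# → [('bg', 'blue'), ('text', 'world'), ('shadow', 'soft')]
```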

If you’re interested in the lower level details on how the partial updates and merging works, take a look at the project planning document.


As part of the lead up to Firefox Quantum, we added new telemetry to Firefox to help us measure painting performance, and to let us make more informed decisions as to where to direct our efforts. One of these measurements defined a minimum threshold for a ‘slow’ paint (16ms), and recorded percentages of time spent in various paint stages when it occurred. We expected display list building to be significant, but were still surprised with the results: On average, display list building was consuming more than 40% of the total paint time, for work that was largely identical to the previous frame. We’d long been planning on an overhaul of how we built and managed display lists, but with this new data we decided that it needed to be a top priority for our Painting team.


Once we had everything working, the next step was to see how much of an effect it had on performance! We ran an A/B test on the Beta 58 population so that we could collect telemetry for the two groups, and compare the results.

The first and most significant change is that the frequency of slow paints dropped by almost 30%!


The horizontal axis shows the duration of the paints, and the vertical axis shows how frequently (as a percent) this duration happened. As you can see, paints in the 2-7ms range became significantly more frequent, and paints that took 8ms or longer became significantly less frequent.

We also see movement in the breakdown percentages for slow paints. As this only includes data for slow paints, it doesn’t include data for all the slow paints that stopped happening as a result of retaining the display list; instead it shows how we performed when retaining the display list wasn’t enough to make us fast, or when we were unable to retain the display list at all.

The horizontal axis is the percentage of time spent display list building, and the vertical axis shows how frequently that occurred (during a slow paint). You can see the 38-50% range dropped significantly, with a corresponding rise in all the buckets below that. The 51%+ actually got a bit worse, but that’s expected since the truly slow cases are the ones where we either fixed the problem (and got excluded from this data) or were unable to help. More on that later.

We also developed a stress test for display list building, as part of our ’Talos’ automated testing infrastructure, known as “displaylist_mutate”. This creates a display list with over 10 thousand items, and repeatedly modifies it one item at a time. As expected, we’re seeing more than a 30% drop in time taken to run this test, with very little time spent in display list building.

Future Work

As mentioned above, we aren’t always able to retain the display list. We spend time working out which parts of the page changed, and if that ends up being everything (or close to it), then we still have to rebuild the full display list, and the time spent on the analysis was wasted. Work is ongoing to try to detect this as early as possible, but it’s unlikely that we’ll be able to entirely prevent it. We’re also actively working to minimize how long the preparation work takes, so that we can make the most of opportunities for a partial update.

Retaining the display list also doesn’t help for the first time we paint a webpage when it loads. The first paint always has to build the full list from scratch, so in the future we’re going to be looking at ways to make that faster across the board.

Thanks to everyone who has helped work on this, including: Miko Mynttinen, Timothy Nikkel, Markus Stange, David Anderson, Ethan Lin and Jonathan Watt.


Marco Castelluccio: How to collect code coverage on Windows with Clang

Tue, 09/01/2018 - 01:00

With the upcoming version of Clang 6, support for collecting code coverage information on Windows is now mature enough to be used in production. As a proof, we can tell you that we have been using Clang to collect code coverage information on Windows for Firefox.

In this post, I will show you a simple example to go from a C++ source file to a coverage report (in a readable format or in a JSON format which can be parsed to generate custom nice reports or upload results to Coveralls/Codecov).


Let’s say we have a simple file, main.cpp:

#include <iostream>

int main() {
  int reply = 42;

  if (reply == 42) {
    std::cout << "42" << std::endl;
  } else {
    std::cout << "impossible" << std::endl;
  }

  return 0;
}
In order to make Clang generate an instrumented binary, pass the ‘--coverage’ option to clang:

clang-cl --coverage main.cpp

In the directory where main.cpp is, both the executable file of your program and a file with extension ‘gcno’ will be present. The gcno file contains information about the structure of the source file (functions, branches, basic blocks, and so on).

09/01/2018  16:21    <DIR>          .
09/01/2018  16:21    <DIR>          ..
09/01/2018  16:20               173 main.cpp
09/01/2018  16:21           309.248 main.exe
09/01/2018  16:21            88.372 main.gcno

Run

Now, the instrumented executable can be executed. A new file with the extension ‘gcda’ will be generated. It contains the coverage counters associated with the ‘gcno’ file (how many times a line was executed, how many times a branch was taken, and so on).

09/01/2018  16:22    <DIR>          .
09/01/2018  16:22    <DIR>          ..
09/01/2018  16:20               173 main.cpp
09/01/2018  16:21           309.248 main.exe
09/01/2018  16:22            21.788 main.gcda
09/01/2018  16:21            88.372 main.gcno

At this point, we need a tool to parse the gcno/gcda file that was generated by Clang. There are two options, llvm-cov and grcov. llvm-cov is part of LLVM and can generate bare bones reports, grcov is a separate tool and can generate LCOV, Coveralls and Codecov reports.

We had to develop grcov (in Rust!) to have a tool that could scale to the size of Firefox.

Parse with llvm-cov

You can simply run:

llvm-cov gcov main.gcno

Several files with extension gcov will be generated (one for each source file of your project, including system header files). For example, here’s main.cpp.gcov:

        -:    0:Source:main.cpp
        -:    0:Graph:main.gcno
        -:    0:Data:main.gcda
        -:    0:Runs:1
        -:    0:Programs:1
        -:    1:#include <iostream>
        -:    2:
        -:    3:int main() {
        1:    4:  int reply = 42;
        -:    5:
        1:    6:  if (reply == 42) {
        1:    7:    std::cout << "42" << std::endl;
        1:    8:  } else {
    #####:    9:    std::cout << "impossible" << std::endl;
        -:   10:  }
        -:   11:
        1:   12:  return 0;
        -:   13:}

Parse with grcov

grcov can be downloaded from GitHub (on the Releases page).

Simply execute grcov with the ‘--llvm’ option, pointing it to the directory containing your gcda/gcno files. The “-t” option allows you to specify the output format:

  • “lcov” for the LCOV format, which you can then translate to a HTML report using genhtml;
  • “coveralls” for a JSON format compatible with Coveralls/Codecov;
  • “coveralls+” for an extension of the former, with addition of function information.


grcov --llvm PATH_TO_YOUR_DIRECTORY -t coveralls+ --token unused --commit-sha unused > report.json

grcov has other options too; simply run it with no parameters to list them. The most important is probably “--branch”, which adds branch information to the output.
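As an aside, the LCOV output grcov produces is a simple line-oriented text format: `SF:` starts a record for a source file, `DA:<line>,<count>` records an execution count per line, and `end_of_record` closes the record. Here is a small sketch of computing line coverage from such a report (the parsing code is my own illustration, not part of grcov):

```python
# Parse minimal LCOV records and compute per-file line coverage.
# DA:<line>,<count> gives the execution count for one source line.

def line_coverage(lcov_text):
    results = {}
    current, hit, total = None, 0, 0
    for line in lcov_text.splitlines():
        if line.startswith("SF:"):
            current, hit, total = line[3:], 0, 0
        elif line.startswith("DA:"):
            # The count is the second field (a checksum may follow it).
            count = line[3:].split(",")[1]
            total += 1
            if int(count) > 0:
                hit += 1
        elif line == "end_of_record" and current:
            results[current] = hit / total
    return results

report = """SF:main.cpp
DA:4,1
DA:6,1
DA:7,1
DA:9,0
DA:12,1
end_of_record"""
print(line_coverage(report))  # → {'main.cpp': 0.8}
```

The same tracefile can also be fed to genhtml for an HTML view, as mentioned above.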

In production

We are using Clang to collect code coverage on Windows for Firefox. The grcov tool is used to parse the gcno/gcda files generated by LLVM and to emit a report that can be converted to HTML or uploaded to services such as Coveralls or Codecov.

We have fixed several bugs in LLVM, Clang and compiler-rt (rL314201, rL315677, rL316048, rL317705, rL317709, rL321702, rL321703) that were preventing GCOV coverage from being usable in Clang 5 (and previous versions), but with Clang 6 the situation should be pretty good (if it works with the millions of lines of code of Firefox, with its highly parallel architecture and with several diverse kinds of test suites, you can rest assured that it will work for most projects!).


Niko Matsakis: #Rust2018

Tue, 09/01/2018 - 00:00

As part of #Rust2018, I thought I would try to write up my own (current) perspective. I’ll try to keep things brief.

First and foremost, I think that this year we have to finish what we started and get the “Rust 2018” release out the door. We did good work in 2017: now we have to make sure the world knows it and can use it. This primarily means we have to do stabilization work, both for the recent features added in 2017 as well as some, ahem, longer-running topics, like SIMD. It also means keeping up our focus on tooling, like IDE support, rustfmt, and debugger integration.

Looking beyond the Rust 2018 release, we need to continue to improve Rust’s learning curve. This means language changes, yes, but also improvements in tooling, error messages, documentation, and teaching techniques. One simple but very important step: more documentation targeting intermediate-level Rust users.

I think we should focus on butter-smooth (and performant!) integration of Rust with other languages. Enabling incremental adoption is key. [1] This means projects like Helix but also working on bindgen and improving our core FFI capabilities.

Caution is warranted, but I think there is room for us to pursue a select set of advanced language features. I am thinking primarily of const generics, procedural macros, and generic associated types. Each of these can be a massive enabler. They also are fairly obvious generalizations of things that the compiler currently supports, so they don’t come at a huge complexity cost to the language.

It’s worth emphasizing also that we are not done when it comes to improving compiler performance. The incremental infrastructure is working and en route to a stable compiler near you, but we need to shoot for instantaneous build times after a small change (e.g., adding a println! to a function).

(To help with this, I think we should start a benchmarking group within the compiler team (and/or the infrastructure team). This group would be focused on establishing and analyzing important benchmarks for both compilation time and the performance of generated code. Among other things, this group would maintain and extend the site. I envision people in this group both helping to identify bottlenecks and, when it makes sense, working to fix them.)

I feel like we need to do more production user outreach. I would really like to get to the point where we have companies other than Mozilla paying people to work full-time on the Rust compiler and standard library, similar to how Buoyant has done such great work for tokio. I would also really like to be getting more regular feedback from production users on their needs and experiences.

I think we should try to gather some kind of limited telemetry, much like what Jonathan Turner discussed. I think it would be invaluable if we had input on typical compile times that people are experiencing or – even better – some insight into what errors they are getting, and maybe the edits made in response to those errors. This would obviously require opt-in and a careful attention to privacy!

Finally, I think there are ways we can offer a clearer path for contributors and in turn help grow our subteams. In general, I would like to see the subteams do a better job of defining the initiatives that they are working on – and, for each initiative, forming a working group dedicated to getting it done. These “active initiatives” would be readily visible, offering a clear way to simultaneously find out what’s going on in Rust land and how you can get involved. But really this is a bigger topic than I can summarize in a paragraph, so I will try to revisit it in a future blog post.

A specific call out

If you are someone who would consider using Rust in production, or advocating for your workplace to use Rust in production, I’d like to know how we could help. Are there specific features or workflows you need? Are there materials that would help you to sell Rust to your colleagues?

  1. If you’ve had altogether too cheerful of a day, go and check out Joe Duffy’s RustConf talk on Midori. That ought to sober you right up. But the takeaway here is clear: enabling incremental adoption is crucial.


Zibi Braniecki: Multilingual Gecko in 2017

Mon, 08/01/2018 - 21:53
The outline

In January 2017, we set the course to get a new localization framework named Fluent into Firefox.

Below is a story of the work performed on the Firefox engine – Gecko – over the last year to make Fluent in Firefox possible. This has been a collaborative effort involving a lot of people from different teams. It’s impossible to document all the work, so keep in mind that the following is just the story of the Gecko refactor, while many other critical pieces were being tackled outside of that range.

Also, the nature of the project does make the following blog post long, text heavy and light on pictures. I apologize for that and hope that the value of the content will offset this inconvenience and make it worth reading.


The change is necessary and long overdue – our aged localization model is brittle, is blocking us from adding new localizability features for developers and localizers, doesn’t work with the modern Intl APIs and doesn’t scale to other architectural changes we’re planning like migration away from XBL/XUL.

Fluent is a modern, web-ready localization system developed by Mozilla over the last 7 years. It was initially intended for Firefox OS, but the long term goal was always to use it in Firefox (first attempt from 7 years ago!).

Unfortunately, replacing the old system with Fluent is a monumental task similar in scope only to other major architectural changes like Electrolysis or Oxidation.

The reason for that is not the localization system change itself, but rather that localization in Gecko has more or less been set in stone since the dawn of the project.  Most of the logic and some fundamental paradigms in Gecko are deeply rooted in architectural choices from 1998-2000.  Since then, a lot of build system choices, front end code, and core runtime APIs were written with assumptions that hold true only for this system.

Getting rid of those assumptions requires major refactors of many of the Internationalization modules, build system pieces, language packs, test frameworks, and resource handling modules before we even touch the front end. All of this will have to be migrated to the new API.

On top of that, the majority of the Internationalization APIs in Gecko were designed at the time when Gecko could not carry its own internationalization system. Instead, it used host operating system methods to format dates, times, numbers etc. Those approaches were incomplete and the outcome differed from setup to setup, making the Gecko platform unpredictable and hard to internationalize.

Fluent has been designed to be aligned with the modern Internationalization API for Javascript developed as the ECMA402 standard by the TC39 Working Group. This is the API available to all web content and fortunately, by 2016, Gecko was already carrying this API powered by a modern cross-platform internationalization library designed by the Unicode Consortium – ICU.

That meant that we were using a modern standard to supply an internationalization API for the Web, but internally we relied on a different, much older, set of APIs for the Firefox UI.

Not only do we have to maintain two sets of internationalization APIs, but we also carry two sets of data for them in the product!

Since Fluent is aligned with the new model, and not much with the old one, as part of the shift toward Fluent we had to migrate more of our internal APIs to use ICU and its internationalization database called CLDR (think – Wikipedia for internationalization data), all while slowly deprecating the old Gecko Intl APIs and data sets.

To make things a bit harder, our implementation of the ECMA402 Intl API as of a year ago wasn’t very complete. Moving Firefox to use it required not just shifting the code base, but also adding remaining features like timezone support, case sensitive collations etc.

If all of that doesn’t sound ambitious enough, we also got a request from the Gecko owners not to use the main interface Gecko offered for resource selection – Chrome Registry – in the new approach.

It’s January 2017 and we not only have to remodel all of the locale selection, remodel the whole Intl layer and prepare for replacing the whole localization layer of Gecko, but we also need a new resource management API for l10n.

Fluent in Gecko – scheme

Warm up

More or less at the same time we got contacted by a team at Mozilla working on date/time pickers. They reached out asking us how to internationalize them properly.

Part of the project culture back when we worked on Firefox OS was to take everything we needed and push for it to become a web standard. This strategy – standardize everything you need – is much slower than just writing a custom API, but has the benefit of making the Web a better platform, rather than just supplying the APIs for your product.

Firefox OS is no more, but many of the APIs designed and proposed for standardization got through the standardization process and are now getting finalized. That includes many Internationalization APIs that we kickstarted back then.

Internationalization of the date/time picker required a flock of APIs and our team decided to propose building on, and extending, the standardized ECMA402 API set with the features required for the pickers.

Now with the date/time picker as a short term goal, and moving Firefox to Fluent in the long term, all while unifying our underlying internationalization infrastructure behind the scenes, the stage was set.


The last piece of the puzzle that is important for the reader to know is that over the last year the main focus of the whole organization was Firefox Quantum, and it was necessary for our effort to ensure we do not affect Quantum’s stability, don’t introduce regressions, and generally speaking, operate under the radar of the release management and core engineering team.

Below is an incomplete timeline of changes that happened between January 2017 and today. It leads us through this major refactor and getting Gecko ready for Fluent, all while making sure we do not hinder the Quantum effort in the slightest.

Firefox 51 (January)

The first release of the year was rather modest, but I hope it’ll fit into the story well. Knowing that we want to get things via Intl JS APIs, but also realizing that the standardization process will take a long time, we introduced a new non-public API called mozIntl.

The role of mozIntl is to extend the JS Intl with pre-release APIs like Intl.getCalendarInfo, Intl.getLocaleInfo, etc., provide the functionality needed for Firefox, while at the same time being a test subject for the standard proposal itself.

This created a really cool two-way dynamic where we were able to identify a need, work within ECMA402 to draft the spec, and implement an early proposal for use in our UI while simultaneously working on advancing it as a Web Standard.

I can’t stress enough how pivotal this API became for advancing JS Intl API and shifting our platform to use JS Intl API for internationalization, and Firefox 51 was the first one to use it!

Notable changes [my work] [intl]:

Firefox 52 (March)

In Firefox 52 we set our target to embrace CLDR/ICU and started work to migrate our internal APIs to use ICU. Jonathan Kew made the first push, switching nsIScriptableDateFormat to ICU and then followed up moving another set of APIs.

At the same time André Bargull started taking on missing items from our implementation of the JS Intl API, tackling a major one first – IANA Timezone support.

The direction set in Firefox 52 was to move Gecko to use the same internationalization APIs internally and for the Web content, and to make our JS Intl API complete and robust – ready to handle Firefox UI.

Notable changes [my work][intl]:

Firefox 53 (April)

In Firefox 53 Jonathan updated our platform to Unicode 9.0, Gregory Moore moved nsIDateTimeFormat (one of the biggest Intl APIs in Gecko) to use ICU and I landed the first major new API needed for Fluent – mozIntl.PluralRules.

This set a precedent where, if we’re certain that an API will end up exposed to the Web, we write most of its code in SpiderMonkey (our JS engine), and only expose it via mozIntl until the API becomes part of the standard.

When the standard matures, we only switch the bit to expose the API, rather than having to move the code from Gecko to SpiderMonkey.

Notable changes [my work][intl]:

Firefox 54 (June)

In Firefox 54 we landed two major new APIs:

mozilla::intl::LocaleService is a new core API for managing languages and locales in Gecko. Its purpose is to become a central place to handle language selection and negotiation.

mozilla::intl::OSPreferences is a new core API for retrieving Intl related information from the operating system. That includes OS language selection, regional preferences etc.

Those two new APIs were intended to replace an aged nsILocaleService – which kind of did those two tasks together via many OS-specific APIs – and take away language negotiation that until this point had been performed primarily by ChromeRegistry.

They also introduced a new paradigm – instead of operating on a single locale, like en-US, we started operating on locale fallback chains. All new APIs took lists, making it possible not just to identify the best matching locale the user requested, but also to understand what fallback the user wants, rather than falling back on en-US as a hardcoded locale.

This is a fairly recent development in the internationalization industry, and you can find fallback locale lists in modern Windows, macOS and Android, and now in Firefox as well!

With this change, we pulled the tablecloth out from under the dishes – nothing observable changed, but the "decision" center moved, and we gained new, modern, central APIs to manage languages and communicate with the OS layer.

In 54 we also added a couple new mozIntl APIs:

  • mozIntl.getLocaleInfo became a central place to get information about a locale – what are the weekend days, what is the first day of the week, is this locale left-to-right or right-to-left and so on.
  • mozIntl.DateTimeFormat became the first example of a wrapper over an already existing Intl.DateTimeFormat that extends it and adds features necessary for the Firefox UI but not yet available in the ECMA402 spec – primarily, the ability to adjust the formatted date and time to the regional preferences the user set in the operating system. It's a good example of how mozilla::intl::OSPreferences, mozIntl, ICU/CLDR and the JS Intl API create a layered model that incentivizes us to standardize as much as we can, without locking us in before the standardization is complete.

By that release, we had all of our new core ready, we knew the direction, and were able to refactor major pieces of our low level intl infrastructure basically without any observable output.

Notable changes [my work][intl]:

Firefox 55 (August)

While several elements of the ecosystem were still limiting us, the primary focus of work now shifted to fixing edge cases and adding new features.

LocaleService gained a robust language negotiation API which made it possible to reason about non-perfect matches between requested and available language sets.

Before that point, if the user requested en-GB, we weren't very good at matching it against anything other than a perfect match. So if we had en-ZA or en-AU, we might not know what to do, and that made many of our locale selection systems very brittle.

Centralized, strong language negotiation allowed us to freely reason about asking the operating system for locales, matching them against available language resources, selecting the right fonts, or picking up languages for extensions.
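The real matching logic lives in LocaleService, but the idea behind non-perfect matching can be sketched in a few lines of plain JS (a simplified illustration, not Gecko's actual algorithm):

```javascript
// Simplified locale negotiation: try an exact match first, then fall
// back to any available locale that shares the primary language subtag.
function negotiate(requested, available) {
  const matched = [];
  for (const req of requested) {
    // 1. Exact match, e.g. "en-GB" against "en-GB".
    let hit = available.find((av) => av.toLowerCase() === req.toLowerCase());
    // 2. Same language, different region, e.g. "en-GB" against "en-ZA".
    if (!hit) {
      const lang = req.split("-")[0].toLowerCase();
      hit = available.find((av) => av.split("-")[0].toLowerCase() === lang);
    }
    if (hit && !matched.includes(hit)) {
      matched.push(hit);
    }
  }
  return matched;
}

console.log(negotiate(["en-GB", "fr"], ["en-ZA", "en-AU", "fr-FR"]));
// → ["en-ZA", "fr-FR"]
```

Even this toy version shows why en-GB should land on en-ZA rather than on a hardcoded en-US.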

55 brought many more new features to LocaleService, including a split between server and client – allowing our content processes to follow a single language selection decided in the parent process (or outside of Gecko in the case of Fennec – Firefox for Android!) – and a number of improvements in how OSPreferences interacts with LocaleService.

André landed most of the remaining items for ECMA402 compatibility, making the SpiderMonkey Intl API 100% complete!

Notable changes [my work][intl]:

Firefox 56 (September)

As you can tell by now, there’s a clear direction in our work – migrating our internal APIs to use ICU, bridging the gap between JS Intl API and Gecko Intl APIs, and moving our UI to use ICU-backed APIs.

But until Firefox 56 there was one problem – due to the size cost, the Fennec team was pushing back on the idea of introducing ICU in our mobile browser.

Their reasoning was sound – adding 3MB to installer size is a non-trivial cost that has impact on the users and should not be added lightly.

That meant that Gecko had to maintain two ways of doing each API – one backed by ICU/CLDR, and the old one required by now just for our Android browser.

Fortunately, by Firefox 56 we had an idea how to move forward. We knew that once we turn on ICU everywhere, we’ll be able to remove all the old APIs and datasets and that will win us back some of the bundle size cost.

But the real deal was in the promise of the new localization API making it possible to load language resources at will. See, Fennec currently comes with a lot of language resources for a lot of locales. That's because our 20-year-old infrastructure isn't very flexible, and the easiest way to make sure that a locale is available is to package it into the .apk file.

If we could switch that to the new infrastructure and only load the locales that the user selected, that would, coincidentally, save us around 3MB of the installer size!

Knowing that, by Firefox 56 we were able to agree on the plan and turn on ICU for Fennec. That meant not only that we could start removing the old APIs and introduce all the new goodies like mozIntl to Fennec, but also that Fennec finally got ECMA402 JS Intl API support!

Another big piece happened on the character encoding front. After months of work, Henri Sivonen landed a completely new, fast and shiny, character encoding library written in Rust!

This, bundled with a bump to Unicode 10 and ICU 59, enabled us to remove a lot of old APIs and reduce our technical debt significantly.

Last but not least, LocaleService gained a new API for retrieving regional preference locales, allowing us to format a date for en-GB (or de-AT!) even if your Firefox UI is in en-US, and to follow the user's choices from the operating system more closely.

Notable changes [my work][intl]:

Firefox 57 (November)

Firefox 57 was a very small release on the Intl front. We had most of our foundation work laid out by then and all the focus was on the quality of the Quantum release (I spent most of the cycle as a mercenary on the Quantum Flow team helping with UI/perf improvements).

But since the foundation was in place by then, we were able to use 57 to land all the new APIs – L10nRegistry, Fluent, FluentDOM and FluentWeb – in anticipation of being able to switch to using them in the following releases.

That means that although we didn't start using it yet, Fluent was in Gecko by 57!

Notable changes [my work][intl]:

Firefox 58 (January 2018)

After the silent 57 release, 58 opened up with a lot of internationalization improvements accrued since 56.

The major one was the switch to new language packs.

Previously, Firefox had language packs based on the old extensions system, which relied heavily on ChromeRegistry. This, along with other problems, resulted in a sub-par user experience when compared to a fully localized build.

The new language packs are based on the Web Extensions ecosystem, and are lighter, easier to maintain, and safer. They have a clean lifecycle and, of course, support Fluent and L10nRegistry out of the box!

Speaking of removing technical debt, it was a good release for that. With 57 cutting off all of the old extensions, and being two releases after we enabled ICU in Fennec, it was the right time to remove a lot of old code.

Our hooks into OS via OSPreferences got improved on Android, Linux and Windows.

A bunch of our mozIntl APIs were finalized as standards in ECMA402 and enabled for the web – Intl.PluralRules, hourCycle and NumberFormat.prototype.formatToParts.

We did a lot of intl build system refactors to make us package the right languages, with fallback, and also make it possible to build Firefox with many locales.

We gained an entry in about:support showing all the various language selections to help us debug cases where the localization doesn’t match expectations.

Finally, once all of the new build system bits got tested, the very first string localized using Fluent landed in the Firefox UI!

Huge milestone achieved!

Notable changes [my work][intl]:


In 2017 we successfully aligned our internationalization layer around the Unicode standard, ICU and CLDR, removing a lot of old APIs and making the Firefox UI use the same APIs (with a few extensions) as we expose to Web content.

We also advanced a lot of the ECMA402 spec proposals that we identified as a result of our Firefox OS and Desktop Firefox efforts, making the Web a better platform for writing multilingual apps and the Firefox JS engine – SpiderMonkey – the most complete implementation of ECMA402 on the market.

Finally, we landed all the main components for the new localization framework in Gecko and got the first strings translated using Fluent!

Through all of that work, there were only a couple minor regressions that we were able to quickly fix without affecting any of the Quantum work. We vastly improved our test coverage in the intl/locale and intl/l10n modules, fixed tons of long standing bugs related to language switching and selection in the process, and got the platform ready for Fluent!

As a testament to all that happened this year, we just got a new module ownership structure that reflects all the effort that we've put lately into making Gecko the best multilingual platform in the world and a great vehicle for driving the advancement of the internationalization Web Standards!

It's been the toughest year of work in my career so far: handling so many variables, operating on a massive and aged codebase, writing code in three languages – JavaScript, Rust and C++ – aligning the goals and needs of many different stakeholders and teams, and pushing for internal recognition of and support for the refactor.

Taking advice from Mike Hoye – “You can remove the adjective thankless from someone's job by thanking them” – I'd like to thank the people who significantly contributed to this project either directly or indirectly, by supporting and mentoring me, brainstorming with me, patiently reviewing my patches and working on all the technologies required – Staś Małolepszy, Jonathan Kew, André Bargull, Dave Townsend, Jeff Walden, Makoto Kato, Axel Hecht, Richard Newman, Mike Conley, Nick Alexander, Daniel Ehrenberg, Kris Maglione, Andrew Swan, Matjaž Horvat, Gregory Szorc, Ted Mielczarek, Francesco Lodolo, Jeff Beatty, Jorg K, Rafael Xavier, Steven R. Loomis, Joe Hildebrand, Caridy Patiño and others. You turned the project from impossible to completed. Thank you.

With all the work to clean up the technical debt in 2017, 2018 is shaping up to be a year when we’ll be able to focus on using the modernized stack to work on adding new capabilities and fully switching Firefox to Fluent (starting with the Preferences UI).

I also hope to spend more time in Rust, get Firefox and Gecko to become better at serving users operating in multiple languages, and work with the Browser Architecture Group in getting the next generation stack at Mozilla be fully intl and l10n ready.

Stay tuned!

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 08 Jan 2018

ma, 08/01/2018 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Nick Cameron: A proof-of-concept GraphQL server framework for Rust

ma, 08/01/2018 - 19:02

Recently, I've been working on a new project, a framework for GraphQL server implementations in Rust. It's still very much at the proof of concept stage, but it is complete enough that I want to show it to the world. The main restriction is that it only works with a small subset of the GraphQL language. As far as I'm aware, it's the only framework which can provide an 'end to end' implementation of GraphQL in Rust (i.e., it handles IDL parsing, generates Rust code from IDL, and parses, validates, and executes queries).

The framework provides a seamless GraphQL interface for Rust servers. It is type-safe, ergonomic, very low boilerplate, and customisable. It has potential to be very fast. I believe that it can be one of the best experiences for GraphQL development in any language, as well as one of the fastest implementations (in part, because it seems to me that Rust and GraphQL are a great fit).


GraphQL is an interface for APIs. It's a query-based alternative to REST. A GraphQL API defines the structure of data, and clients can query that structured data. Compared to a traditional RESTful API, a GraphQL interface is more flexible and allows for clients and servers to evolve more easily and separately. Since a query returns exactly the data the client needs, there is less over-fetching or under-fetching of data, and fewer API calls. A good example of what a GraphQL API looks like is v4 of the GitHub API; see their blog post for a good description of GraphQL and why they chose it for their API.

Compared to a REST or other HTTP API, a GraphQL API takes a fair bit more setting up. Whereas you can make a RESTful API from scratch with only a little work on top of most HTTP servers, to present a GraphQL API you need to use a server library. That library takes care of understanding your data's schema, parsing and validating (effectively type-checking) queries against that schema, and orchestrating execution of queries (a good server framework also takes care of various optimisations, such as caching query validation and batching parts of execution). The server developer plugs code into the framework as resolvers – code which implements 'small' functions in the schema.

Rust and GraphQL

Rust is a modern systems language; it offers a rich type system, memory safety without garbage collection, and an expanding set of libraries for writing very fast servers. I believe Rust is an excellent fit for implementing GraphQL servers. Over the past year or so, many developers have found Rust to be excellent for server-side software, and providing a good experience in that domain was a key goal for the Rust community in 2017.

Rust's data structures are a good match for GraphQL data structures; combined with a strong, safe, and static type system, this means that GraphQL implementations can be type-safe. Rust's powerful procedural macro system means that a GraphQL framework can do a lot of work at compile time and make implementation very ergonomic. Rust's trait system is more expressive and flexible than those of more common OO languages, and this allows for easy customisation of a GraphQL implementation. Finally, Rust is fast because it can be low-level and supports many low-cost abstractions; it has emerging support for asynchronous programming which will work neatly for GraphQL resolvers.


I'm going to go through part of an example, the whole example is in the repo, and you can also see the full output from the schema macro.

The fundamental part of the framework is the schema macro, which is used to specify a GraphQL schema using IDL (not officially part of GraphQL, but widely used). Here's a small schema:

schema! {
    type Query {
        hero(episode: Episode): Character,
    }

    enum Episode {
        NEWHOPE,
        EMPIRE,
        JEDI,
    }

    type Human implements Character {
        id: ID!,  // `!` means non-null
        name: String!,
        friends: [Character],
        // ...
    }
}

This generates a whole bunch of code, but some interesting bits are:

// Rust types corresponding to `Episode` and `Human`:
#[allow(non_snake_case)]
#[derive(Clone, Debug)]
pub enum Episode {
    NEWHOPE,
    EMPIRE,
    JEDI,
}

#[allow(non_snake_case)]
#[derive(Clone, Debug)]
pub struct Human {
    pub id: Id,
    pub name: String,
    // Uses `Option` because GraphQL accepts `null` here.
    pub friends: Option<Vec<Option<Character>>>,
    // ...
}

// A trait for the `Query` type which the developer must implement.
// Note that it is actually more complex than this, see below.
pub trait AbstractQuery: ResolveObject {
    fn hero(&self, episode: Episode) -> QlResult<Option<Character>>;
}

In the simplest case, a user of the framework would just implement the AbstractQuery trait using the data types created by the schema macro. There is a little bit of glue code (basically implementing a 'root' trait), and you have a working GraphQL server!

By using a strongly typed language and by having Rust types for every GraphQL type, we ensure type safety for our GraphQL implementations. By making use of Rust's procedural macros, we minimise the amount of boilerplate a developer has to write to get to a working server. Combined with the kind of speed which a good Rust implementation could offer, I believe this provides an un-paralleled mix of ease of development and speed.

Sometimes, though, you don't need or want to use such data structures. For example, you might be getting data straight from a database and constructing a Rust object only to serialise it again; or you might be able to optimise a particularly common query by not providing all the fields of a given data type (e.g., perhaps Human::friends is in a separate DB table and we often only want a Human's name and id; then we might be able to save a lookup in the friends table by using a custom data structure).

In such cases the framework lets you easily customise the concrete data types. As well as generating the above types, the schema macro generates abstract versions of each data type as a trait, for example:

pub trait AbstractHuman: ::graphql::types::schema::ResolveObject {
    type Character: AbstractCharacter = Character;

    #[allow(non_snake_case)]
    fn to_Character(&self) -> QlResult<Self::Character>;
}

You can then implement AbstractHuman and ResolveObject for your custom type (there are some even more abstract traits that you almost certainly don't want to override; you need to implement them too, but the schema macro generates another macro to do that for you, so all it takes is the line ImplHuman!(MyCustomHumanType);). It is the ResolveObject implementation where you need to put some effort in to provide a resolve_field method.

Finally, you need to specify the custom types you are using. That is done using associated types in various traits, for example the associated type Character in AbstractHuman. There are similar associated types in the AbstractQuery trait, and the signature of hero uses those rather than the concrete Episode and Character types (i.e., I cheated a little bit in the definition of AbstractQuery above).

Although this is quite complex, I want to stress that in the common case (using the default, generated data types), you don't have to worry about any of this and it all happens auto-magically. However, when you need the power to customise the implementation, it is there for you.

The future

I'm pretty proud of the framework so far, but there is an awful lot still to do. The good news is that it seems like it will be really interesting, fun work! I've started filing issues on the repo. Some of the biggies are:

  • schema validation (we currently validate queries, but not schemas, which means we can crash rather than give an error message if there is a bad schema),
  • completeness (GraphQL is a pretty big language and only a small portion is implemented so far, lots to do!),
  • use asynchronous resolvers (each resolver might access a database or other server, so should be executed asynchronously. We should use Tokio and Rust futures to make this ergonomic and super-fast. There is also some orchestration work to do),
  • query validation caching and other fundamental optimisations (required to be an industrial-strength GraphQL server).

I'd love to have help doing this work and to build a community around GraphQL in Rust. If you're interested in hacking on the project, or considering using GraphQL and Rust in a project, please get in touch (twitter: @nickrcameron, irc: nrc, email: my irc nick at

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: New Contribution Opportunity: Content Review for

ma, 08/01/2018 - 17:30

For over a dozen years, extension developers have volunteered their time and skills to review extensions submitted to (AMO). While they primarily focused on ensuring that an extension’s code adhered to Mozilla’s add-on policies, they also moderated the content of the listings themselves, like titles, descriptions, and user reviews.

To help add-on reviewers focus on the technical aspects of extension review and expand contribution opportunities to non-technical volunteers, we are creating a new volunteer program for reviewing listing content.

Add-on content reviewers will be focused on ensuring that extensions listed on AMO comply with Mozilla's Acceptable Use Policy. Having a team of dedicated content reviewers will help ensure that extensions listed on AMO are not spam and do not contain hate speech or obscene materials.

Since no previous development experience is necessary to review listing content, this is a great way to make an impactful, non-technical contribution to AMO. If you have a keen eye for details and want to make sure that users and developers have a great experience on, please take a look at our wiki to learn more about how to become an add-on content reviewer.

The post New Contribution Opportunity: Content Review for appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Socorro in 2017

ma, 08/01/2018 - 15:00

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

2017 was a big year for Socorro. In this blog post, I opine about our accomplishments.

Read more… (23 mins to read)

Categorieën: Mozilla-nl planet

Robert O'Callahan: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won

ma, 08/01/2018 - 10:41

Apple joining the Alliance for Open Media is a really big deal. Now all the most powerful tech companies — Google, Microsoft, Apple, Mozilla, Facebook, Amazon, Intel, AMD, ARM, Nvidia — plus content providers like Netflix and Hulu are on board. I guess there's still no guarantee Apple products will support AV1, but it would seem pointless for Apple to join AOM if they're not going to use it: apparently AOM membership obliges Apple to provide a royalty-free license to any "essential patents" it holds for AV1 usage.

It seems that the only thing that can stop AOM and AV1 eclipsing patent-encumbered codecs like HEVC is patent-infringement lawsuits (probably from HEVC-associated entities). However, the AOM Patent License makes that difficult. Under that license, the AOM members and contributors grant rights to use their patents royalty-free to anyone using an AV1 implementation — but your rights terminate if you sue anyone else for patent infringement for using AV1. (It's a little more complicated than that — read the license — but that's the idea.) It's safe to assume AOM members do hold some essential patents covering AV1, so every company has to choose between being able to use AV1, and suing AV1 users. They won't be able to do both. Assuming AV1 is broadly adopted, in practice that will mean choosing between making products that work with video, or being a patent troll. No doubt some companies will try the latter path, but the AOM members have deep pockets and every incentive to crush the trolls.

Opus (audio) has been around for a while now, uses a similar license, and AFAIK no patent attacks are hanging over it.

Xiph, Mozilla, Google and others have been fighting against patent-encumbered media for a long time. Mozilla joined the fight about 11 years ago, and lately it has not been a cause célèbre, being eclipsed by other issues. Regardless, this is still an important victory. Thanks to everyone who worked so hard for it for so long, and special thanks to the HEVC patent holders, whose greed gave free-codec proponents a huge boost.

Categorieën: Mozilla-nl planet

Andy McKay: A WebExtensions scratch pad

ma, 08/01/2018 - 09:00

Every Event is a daft little WebExtension I wrote a while ago to try and capture the events that the WebExtensions APIs fire. It tries to listen to every possible event that is generated. I've found this pretty useful in the past for asking questions like "What tab events fire when I move tabs between windows?" or "What events fire when I bookmark something?".

To use Every Event, install it from To open, click the alarm icon on the menu bar.

If you turn on every event for everything, you get an awful lot of traffic to the console. You might want to limit that down. So to test what happens when you bookmark something, click "All off", then click "bookmarks". Then click "Turn on". Then open the "Browser Console".

Each time you bookmark something, you'll see the event and the contents of the API as shown below:

That's also thanks to the Firefox Developer Tools which are great for inspecting objects.

But there's one other advantage to having Every Event around. Because it requests every single permission, it has access to every API. So that means if you go to about:debugging and then click on Debug for Every Event, you can play around with all the APIs and get nice autocomplete:

All you have to do is enter "browser." at the browser console, and all the WebExtension APIs are there, autocompletable.

Let's add in our own custom handler for browser.bookmarks.onRemoved.addListener and see what happens when I remove a bookmark...
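Such a handler might look like the sketch below (the message format and helper name are mine, not part of Every Event; the browser global only exists in a WebExtension context):

```javascript
// Build a log message for a removed bookmark. The `removeInfo` shape
// (parentId, index, node) matches the bookmarks.onRemoved API.
function logRemoval(id, removeInfo) {
  return `bookmark ${id} (was "${removeInfo.node.title}") ` +
         `removed from folder ${removeInfo.parentId} at index ${removeInfo.index}`;
}

// Only attach the listener when actually running inside a WebExtension.
if (typeof browser !== "undefined") {
  browser.bookmarks.onRemoved.addListener((id, removeInfo) => {
    console.log(logRemoval(id, removeInfo));
  });
}
```

Paste that into the console of the Every Event debugging target and remove a bookmark to see it fire.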

Finally, I keep a checkout of Every Event nearby on all my machines. All I have to do is enter the Every Event directory and start web-ext:

web-ext run --firefox /Applications/ --verbose --start-url about:debugging

That's aliased to a nice short command on my Mac and gives me a clean profile with the relevant console just one click away...

Update: see also shell WebExtension project by Martin Giger which has some more sophisticated content script support.

Categorieën: Mozilla-nl planet

Cameron Kaiser: Actual field testing of Spectre on various Power Macs (spoiler alert: G3 and 7400 survive!)

ma, 08/01/2018 - 05:24
Tip of the hat to miniupnp who ported the Spectre proof of concept to PowerPC intrinsics. I ported it to 10.2.8 so I could get a G3 test result, and then built generic PowerPC, G3, 7400, 7450 and G5 versions at -O0, -O1, -O2 and -O3 for a grand total of 20 variations.

Recall from our most recent foray into the Spectre attack that I believed the G3 and 7400 would be hard to successfully exploit because of their unusual limitations on speculative execution through indirect branches. Also, remember that this PoC assumes the most favourable conditions possible: that it already knows exactly what memory range it's looking for, that the memory range it's looking for is in the same process and there is no other privilege or partition protection, that it can run and access system registers at full speed (i.e., is native), and that we're going to let it run to completion.

miniupnp's implementation uses the mftb(u) instructions, so if you're porting this to the 601, you weirdo, you'll need to use the equivalent on that architecture. I used Xcode 2.5 and gcc 4.0.1.

Let's start with, shall we say, a positive control. I felt strongly the G5 would be vulnerable, so here's what I got on my Quad G5 (DC/DP 2.5GHz PowerPC 970MP) under 10.4.11 with Energy Saver set to Reduced Performance:

  • -arch ppc -O0: partial failure (two bytes wrong, but claims all "success")
  • -arch ppc -O1: recovers all bytes (but claims all "unclear")
  • -arch ppc -O2: same
  • -arch ppc -O3: same
  • -arch ppc750 -O0: partial failure (twenty-two bytes wrong, but claims all "unclear")
  • -arch ppc750 -O1: recovers all bytes (but claims all "unclear")
  • -arch ppc750 -O2: almost complete failure (twenty-five bytes wrong, but claims all "unclear")
  • -arch ppc750 -O3: almost complete failure (twenty-six bytes wrong, but claims all "unclear")
  • -arch ppc7400 -O0: almost complete failure (twenty-eight bytes wrong, claims all "success")
  • -arch ppc7400 -O1: recovers all bytes (but claims all "unclear")
  • -arch ppc7400 -O2: almost complete failure (twenty-six bytes wrong, but claims all "unclear")
  • -arch ppc7400 -O3: almost complete failure (twenty-eight bytes wrong, but claims all "unclear")
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: recovers all bytes (but claims all "unclear")
  • -arch ppc7450 -O2: same
  • -arch ppc7450 -O3: same
  • -arch ppc970 -O0: recovers all bytes (claims all "success")
  • -arch ppc970 -O1: recovers all bytes, but noticeably more slowly (and claims all "unclear")
  • -arch ppc970 -O2: partial failure (one byte wrong, but claims all "unclear")
  • -arch ppc970 -O3: recovers all bytes (but claims all "unclear")

Twiddling CACHE_HIT_THRESHOLD to any value other than 1 caused the test to fail completely, even on the working scenarios.

These results are frankly all over the map and only two scenarios fully work, but they do demonstrate that the G5 can be exploited by Spectre. That said, however, the interesting thing is how timing-dependent the G5 is, not only to whether the algorithm succeeds but also to whether the algorithm believes it succeeded. The optimized G5 versions have more trouble recognizing if they worked even though they do; the fastest and most accurate is actually -arch ppc970 -O0. I mentioned the CPU speed for a reason, too, because if I set the system to Highest Performance, I get some noteworthy changes:

  • -arch ppc -O0: recovers all bytes (claims all "success")
  • -arch ppc -O1: partial failure (eight bytes wrong, claims all "unclear")
  • -arch ppc -O2: partial failure (twenty bytes wrong, claims all "unclear")
  • -arch ppc -O3: partial failure (twenty-three bytes wrong, claims all "unclear")
  • -arch ppc750 -O0: almost complete failure (one byte recovered, but claims all "unclear")
  • -arch ppc750 -O1: partial failure (five bytes wrong, claims all "unclear")
  • -arch ppc750 -O2: complete failure (no bytes recovered, all "unclear")
  • -arch ppc750 -O3: almost complete failure (thirty bytes wrong, but claims all "unclear")
  • -arch ppc7400 -O0: recovers all bytes (claims all "success")
  • -arch ppc7400 -O1: partial failure (four bytes wrong, but claims all "unclear")
  • -arch ppc7400 -O2: complete failure (no bytes recovered, all "unclear")
  • -arch ppc7400 -O3: same
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: partial failure (eight bytes wrong, but claims all "unclear")
  • -arch ppc7450 -O2: partial failure (seven bytes wrong, but claims all "unclear")
  • -arch ppc7450 -O3: partial failure (five bytes wrong, but claims all "unclear")
  • -arch ppc970 -O0: recovers all bytes (but three were "unclear")
  • -arch ppc970 -O1: recovers all bytes, but noticeably more slowly (and claims all "unclear")
  • -arch ppc970 -O2: partial failure (nineteen bytes wrong, claims all "unclear")
  • -arch ppc970 -O3: partial failure (eighteen bytes wrong, claims all "unclear")

The speed increase causes one more scenario to succeed, but which ones succeed differs, and it tanks some of the previously marginal ones even more badly. Again, twiddling CACHE_HIT_THRESHOLD to any value other than 1 caused the test to fail completely, even on the working scenarios.

What about more recent Power ISA designs? Interestingly, my AIX Power 520 server configured as an SMT-2 two-core four-way POWER6 could not be exploited if CACHE_HIT_THRESHOLD was 1. If it was set to 80 as the default exploit has, however, on POWER6 the exploit recovers all bytes successfully (compiled with -O3 -mcpu=power6). IBM has not yet said as of this writing whether they will issue patches for the POWER6.

I should also note that the worst case on the G5 took nearly seven seconds to complete at reduced power (-arch ppc7400 -O0), though the best case took less than a tenth of a second (-arch ppc970 -O0). The POWER6 took roughly three seconds. These are not fast attacks for the limited number of bytes scanned.

Given that we know the test will work on a vulnerable PowerPC system, what about the ones we theorized were resistant? Why, I have two of them right here! Let's cut to the chase, friends: your humble author's suspicions appear to be correct. Neither my strawberry iMac G3 with Sonnet HARMONi CPU upgrade (600MHz PowerPC 750CX) running 10.2.8, nor my Sawtooth G4 file server (450MHz PowerPC 7400) running 10.4.11, can be exploited with any of ppc, ppc750 or ppc7400 at any optimization level. They all fail to recover any byte despite the exploit believing it worked, so I conclude the G3 and 7400 are not vulnerable to the proof of concept.

The attacks are also quite slow on these systems. The run on the lower-clocked Sawtooth took almost five seconds in real time, even at -arch ppc7400 -O3 (seven seconds in the worst case), and pegged the processor during the test. Neither system has power management, and both ran at full speed.

That leaves the 7450 G4e, which as you'll recall has notable microarchitectural advances from the 7400 G4 and differences in its ability to speculatively execute indirect branches. What about that? Again, some highly timing-dependent results. First, let's look at my beloved 1GHz iMac G4 (1GHz PowerPC 7450), running 10.4.11:

  • -arch ppc -O0: almost complete failure (twenty-nine bytes wrong, claims all "success")
  • -arch ppc -O1: recovers all bytes (claims all "success")
  • -arch ppc -O2: same
  • -arch ppc -O3: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc750 -O0: recovers all bytes (claims all "success")
  • -arch ppc750 -O1: recovers all bytes (claims all "success")
  • -arch ppc750 -O2: recovers all bytes (claims all "success")
  • -arch ppc750 -O3: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc7400 -O0: almost complete failure (twenty-nine bytes wrong, claims all "success")
  • -arch ppc7400 -O1: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc7400 -O2: same
  • -arch ppc7400 -O3: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc7450 -O0: almost complete failure (twenty-nine bytes wrong, claims all "success")
  • -arch ppc7450 -O1: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc7450 -O2: recovers all bytes (claims all "success")
  • -arch ppc7450 -O3: partial failure (one byte wrong, correctly identified as "unclear")

This is also all over the place, but quite clearly demonstrates the 7450 is vulnerable and actually succeeds more easily than the 970MP did. (This iMac G4 does not have power management.) Still, maybe we can figure out under which circumstances it is, so what about laptops? Let's get out my faithful 12" 1.33GHz iBook G4 (PowerPC 7447A), running 10.4.11 also. First, on reduced performance:

  • -arch ppc -O0: recovers all bytes (claims all "success")
  • -arch ppc -O1: recovers all bytes (claims all "success")
  • -arch ppc -O2: recovers all bytes (claims all "success")
  • -arch ppc -O3: partial failure (two bytes wrong, only one correctly identified as "unclear")
  • -arch ppc750 -O0: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc750 -O1: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc750 -O2: same
  • -arch ppc750 -O3: recovers all bytes (claims all "success")
  • -arch ppc7400 -O0: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc7400 -O1: recovers all bytes (claims all "success")
  • -arch ppc7400 -O2: partial failure (two bytes wrong, only one correctly identified as "unclear")
  • -arch ppc7400 -O3: recovers all bytes (claims all "success")
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc7450 -O2: recovers all bytes (claims all "success")
  • -arch ppc7450 -O3: recovers all bytes (claims all "success")

This succeeds a lot more easily, and the attack is much faster (less than a quarter of a second in the worst case). On highest performance:

  • -arch ppc -O0: recovers all bytes (claims all "success")
  • -arch ppc -O1: recovers all bytes (but one byte is "unclear")
  • -arch ppc -O2: recovers all bytes (but one byte is "unclear")
  • -arch ppc -O3: recovers all bytes (claims all "success")
  • -arch ppc750 -O0: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc750 -O1: recovers all bytes (claims all "success")
  • -arch ppc750 -O2: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc750 -O3: recovers all bytes (claims all "success")
  • -arch ppc7400 -O0: recovers all bytes (claims all "success")
  • -arch ppc7400 -O1: recovers all bytes (claims all "success")
  • -arch ppc7400 -O2: recovers all bytes (claims all "success")
  • -arch ppc7400 -O3: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: recovers all bytes (claims all "success")
  • -arch ppc7450 -O2: recovers all bytes (but one byte is "unclear")
  • -arch ppc7450 -O3: partial failure (one byte wrong, correctly identified as "unclear")

This almost completely succeeds! Even the scenarios that are wrong are still mostly correct; these varied a bit from run to run and some would succeed now and then too. The worst case timing is an alarming eighth of a second.

What gets weird is the DLSD PowerBook G4, though. Let's get out the last and mightiest of the PowerBooks with its luxurious keyboard, bright 17" high-resolution LCD and 1.67GHz PowerPC 7447B CPU running 10.5.8. The DLSD PowerBooks are notable for not allowing selectable power management ("Normal" or automatic equivalent only), and it turns out this is relevant here too:

  • -arch ppc -O0: complete failure (no bytes recovered but some garbage, all "unclear")
  • -arch ppc -O1: complete failure (no bytes recovered but mostly garbage, all "unclear")
  • -arch ppc -O2: complete failure (no bytes recovered but some garbage, all "unclear")
  • -arch ppc -O3: complete failure (no bytes recovered but mostly garbage, all "unclear")
  • -arch ppc750 -O0: complete failure (no bytes recovered but half garbage, all "unclear")
  • -arch ppc750 -O1: complete failure (no bytes recovered but some garbage, all "unclear")
  • -arch ppc750 -O2: same
  • -arch ppc750 -O3: same
  • -arch ppc7400 -O0: almost complete failure (only one byte recovered, but all "unclear")
  • -arch ppc7400 -O1: complete failure (no bytes recovered, all "unclear")
  • -arch ppc7400 -O2: complete failure (no bytes recovered but all seen as "E", all "unclear")
  • -arch ppc7400 -O3: complete failure (no bytes recovered but some garbage, all "unclear")
  • -arch ppc7450 -O0: complete failure (no bytes recovered, all "unclear")
  • -arch ppc7450 -O1: complete failure (no bytes recovered but half garbage, all "unclear")
  • -arch ppc7450 -O2: same
  • -arch ppc7450 -O3: same

This is an upgraded stepping of the same basic CPU, but the attack almost completely failed. It failed in an unusual way, though: instead of using the question mark placeholder it usually uses for an indeterminate value, it actually puts in some apparently recovered nonsense bytes. These bytes are almost always garbage, though one did sneak in in the right place, which leads me to speculate that the 7447B is vulnerable too but something is mitigating it.

This DLSD is different from my other systems in two ways: it's got a slightly different CPU with known different power management, and it's running Leopard. Setting the iBook G4 to use automatic ("Normal") power management made little difference, however, so I got out two 12" PowerBook G4s, one running 10.4 with a 1.33GHz CPU and the other 10.5.8 with a 1.5GHz CPU. The 10.4 12" PowerBook G4 was almost identical to the 10.4 iBook G4 in terms of vulnerability, but it got interesting on the 10.5.8 system. In order: low, automatic and highest performance:

  • -arch ppc -O0: recovers all bytes (claims all "success")
  • -arch ppc -O1: partial failure (four bytes wrong, but still claims all "success")
  • -arch ppc -O2: partial failure (five bytes wrong, but still claims all "success")
  • -arch ppc -O3: partial failure (four bytes wrong, but still claims all "success")
  • -arch ppc750 -O0: partial failure (two bytes wrong, but still claims all "success")
  • -arch ppc750 -O1: partial failure (two bytes wrong, both garbage, but still claims all "success")
  • -arch ppc750 -O2: partial failure (one byte wrong, correctly identified as "unclear")
  • -arch ppc750 -O3: partial failure (four bytes wrong, but still claims all "success")
  • -arch ppc7400 -O0: recovers all bytes (claims all "success")
  • -arch ppc7400 -O1: partial failure (one byte wrong, but still claims all "success")
  • -arch ppc7400 -O2: recovers all bytes (claims all "success")
  • -arch ppc7400 -O3: partial failure (two bytes wrong, but still claims all "success")
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: recovers all bytes (claims all "success")
  • -arch ppc7450 -O2: recovers all bytes (claims all "success")
  • -arch ppc7450 -O3: partial failure (four bytes wrong, but still claims all "success")

  • -arch ppc -O0: recovers all bytes (claims all "success")
  • -arch ppc -O1: partial failure (thirteen bytes wrong, all "T", correctly identified as "unclear")
  • -arch ppc -O2: partial failure (nine bytes wrong, some "u", correctly identified as "unclear")
  • -arch ppc -O3: partial failure (eight bytes wrong, correctly identified as "unclear")
  • -arch ppc750 -O0: partial failure (thirteen bytes wrong, all "-", correctly identified as "unclear")
  • -arch ppc750 -O1: partial failure (fifteen bytes wrong, correctly identified as "unclear")
  • -arch ppc750 -O2: partial failure (fifteen bytes wrong, some "@", correctly identified as "unclear")
  • -arch ppc750 -O3: partial failure (sixteen bytes wrong, correctly identified as "unclear")
  • -arch ppc7400 -O0: recovers all bytes (claims all "success")
  • -arch ppc7400 -O1: partial failure (seven bytes wrong, correctly identified as "unclear")
  • -arch ppc7400 -O2: partial failure (eleven bytes wrong with three garbage bytes, correctly identified as "unclear")
  • -arch ppc7400 -O3: partial failure (eleven bytes wrong, all garbage, correctly identified as "unclear")
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: partial failure (ten bytes wrong, correctly identified as "unclear")
  • -arch ppc7450 -O2: partial failure (seventeen bytes wrong, all "h", correctly identified as "unclear")
  • -arch ppc7450 -O3: partial failure (twelve bytes wrong, all "b", correctly identified as "unclear")

  • -arch ppc -O0: recovers all bytes (claims all "success")
  • -arch ppc -O1: partial failure (three bytes wrong with two garbage bytes, correctly identified as "unclear")
  • -arch ppc -O2: partial failure (eight bytes wrong, all various garbage bytes, correctly identified as "unclear")
  • -arch ppc -O3: partial failure (six bytes wrong, correctly identified as "unclear")
  • -arch ppc750 -O0: partial failure (four bytes wrong, all various garbage bytes, correctly identified as "unclear")
  • -arch ppc750 -O1: partial failure (four bytes wrong, correctly identified as "unclear")
  • -arch ppc750 -O2: partial failure (eleven bytes wrong, correctly identified as "unclear")
  • -arch ppc750 -O3: partial failure (four bytes wrong, all various garbage bytes, correctly identified as "unclear")
  • -arch ppc7400 -O0: recovers all bytes (claims all "success")
  • -arch ppc7400 -O1: partial failure (three bytes wrong, but still claims all "success")
  • -arch ppc7400 -O2: partial failure (six bytes wrong, correctly identified as "unclear")
  • -arch ppc7400 -O3: partial failure (four bytes wrong, correctly identified as "unclear")
  • -arch ppc7450 -O0: recovers all bytes (claims all "success")
  • -arch ppc7450 -O1: partial failure (four bytes wrong, correctly identified as "unclear")
  • -arch ppc7450 -O2: partial failure (three bytes wrong, but still claims all "success")
  • -arch ppc7450 -O3: partial failure (eight bytes wrong, all various garbage bytes, correctly identified as "unclear")

Leopard clearly impairs Spectre's success, but the DLSDs do seem to differ further internally. The worst case runtime on the 10.5 1.5GHz 12" was around 0.25 seconds. The real test would be to put Tiger on a DLSD, but I wasn't willing to do so with this one since it's my Leopard test system.

Enough data. Let's irresponsibly make rash conclusions.

  • The G3 and 7400 G4 systems appear, at minimum, to be resistant to Spectre as predicted. I hesitate to say they're immune but there's certainly enough evidence here to suggest it. While there may be a variant around that could get them to leak, even if it existed it wouldn't do so very quickly based on this analysis.
  • The 7450 G4e is more vulnerable to Spectre than the G5 and can be exploited faster, except for the DLSDs which (at least in Leopard) seem to be unusually resistant.
  • Power management makes a difference, but not enough to completely retard the exploit (again, except the DLSDs), and not always in a predictable fashion.
  • At least for these systems, cache size didn't seem to have any real correlation.
  • Spectre succeeds more reliably in Tiger than in Leopard.
  • Later Power ISA chips are vulnerable with a lot less fiddling.

Before you panic, though, also remember:

  • These were local programs run at full speed in a test environment with no limits, and furthermore the program knew exactly what it was looking for and where. A random attack would probably not have this many advantages in advance.
  • Because the timing is so variable, a reliable attack would require running several performance profiles and comparing them, dramatically slowing down the effective exfiltration speed.
  • This wouldn't be a very useful Trojan horse because sketchy programs can own your system in ways a lot more useful (to them) than iffy memory reads that are not always predictably correct. So don't run sketchy programs!
  • No 7450 G4 is fast enough to be exploited effectively through TenFourFox's JavaScript JIT, which would be the other major vector. Plus, no 7450 can speculatively execute through TenFourFox's inline caches anyway because they use CTR for indirect branching (see the analysis), so the generated code already has an effective internal barrier.
  • Arguably the Quad G5 might get into the speed range needed for a JavaScript exploit, but it would be immediately noticeable (as in, jet engine time), not likely to yield much data quickly, and wouldn't be able to do so accurately. After FPR5 final, even that possibility will be so greatly lessened as to make it just about useless.

I need to eat dinner. And a life. If you've tested your own system (Tobias reports success on a 970FX), say so in the comments.

Cameron Kaiser: More about Spectre and the PowerPC (or why you may want to dust that G3 off)

ma, 08/01/2018 - 05:07
UPDATE: IBM is releasing firmware patches for at least the POWER7+ and forward, including the POWER9 expected to be used in the Talos II. My belief is that these patches disable speculative execution through indirect branches, making the attack much more difficult though with an unclear performance cost. See below for why this matters.

UPDATE the 2nd: The G3 and 7400 survived Spectre!

(my personal favourite Blofeld)

Most of the reports on the Spectre speculative execution exploit have concentrated on the two dominant architectures, x86 (in both its AMD and Meltdown-afflicted Intel forms) and ARM. In our last blog entry I said that PowerPC is vulnerable to the Spectre attack, and in broad strokes it is. However, I also still think that the attack is generally impractical on Power Macs due to the time needed to meaningfully exfiltrate information on machines that are now over a decade old, especially with JavaScript-based attacks even with the TenFourFox PowerPC JIT (to say nothing of various complicating microarchitectural details). But let's say that those practical issues are irrelevant or handwaved away. Is PowerPC unusually vulnerable, or on the flip side unusually resistant, to Spectre-based attacks compared to x86 or ARM?

For the purposes of this discussion and the majority of our audience, I will limit this initial foray to processors used in Power Macintoshes of recent vintage, i.e., the G3, G4 and G5, though the G5's POWER4-derived design also has a fair bit in common with later Power ISA CPUs like the Talos II's POWER9, and ramifications for future Power ISA CPUs can be implied from it. I'm also not going to discuss embedded PowerPC CPUs here such as the PowerPC 4xx since I know rather less about their internal implementational details.

First, let's review the Spectre white paper. Speculative execution, as the name implies, allows the CPU to speculate on the results of an upcoming conditional branch instruction that has not yet completed. It predicts future program flow will go a particular way and executes that code upon that assumption; if it guesses right, and most CPUs do most of the time, it has already done the work and time is saved. If it guesses wrong, then the outcome is no worse than idling during that time, save the additional power usage and the need to restore the previous state. Doing this, however, requires that code (and the data it touches) be loaded into the processor caches to be run, and the caches are not restored to their previous state; previously no one thought that would be necessary. The Spectre attack proves that this seemingly benign oversight is in fact not so.

To determine the PowerPC's vulnerability requires looking at how it does branch prediction and indirect branching. Indirect branching, where the target is determined at time of execution and run from a register rather than coding it directly in the branch instruction, is particularly valuable for forcing the processor to speculatively execute code it wouldn't ordinarily run because there are more than two possible execution paths (often many, many more, and some directly controllable by the attacker).

The G3 and G4 have very similar branch prediction hardware. If there is no hinting information and the instruction has never been executed before (or is no longer in the branch history table, read on), the CPU assumes that forward branches are not taken and backwards branches are, since the latter are usually found in loops. The programmer can add a flag to the branch instruction to tell the CPU that this initial assumption is probably incorrect (a static hint); we use this in a few places in TenFourFox explicitly, and compilers can also set hints like this. All PowerPC CPUs, including the original 601 and the G5 as described below, offer this level of branch prediction at minimum. Additionally, in the G3 and G4, branches that have been executed then get an entry in the BHT, or branch history table, which over multiple executions records if the branch is not taken, probably not taken, probably taken or taken (in Dan Luu's taxonomy of branch predictors, this would be two-level adaptive, local). On top of this the G3 and G4 have a BTIC, or branch target instruction cache, which handles the situation of where the branch gets taken: if the branch is not taken, the following instructions are probably in the regular instruction cache, but if the branch is taken, the BTIC allows execution to continue while the instruction queue continues fetching from the new program counter location. The G3 and 7400-series G4 implement a 512-entry BHT and 64-entry, two-instruction BTIC; the 7450-series G4 implements a 2048-entry BHT and a 128-entry, four-instruction BTIC, though the actual number of instructions in the BTIC depends on where the fetch is relative to the cache block boundary. The G3 and 7400 G4 support speculatively executing through up to two unresolved branches; the 7450 G4e allows up to three, but also pays a penalty of about one cycle if the BTIC is used that the others do not.

The G5 (and the POWER4, and most succeeding POWER implementations) starts with the baseline above, though it uses a different two-bit encoding to statically hint branch instructions. Instead of the G3/G4 BHT scheme, however, the G5/970 uses what Luu calls a "hybrid" approach, necessary to substantially improve prediction performance in a CPU for which misprediction would be particularly harmful: a staggering 16,384-entry BHT but also an additional 16,384-entry table using an indexing scheme called gshare, and a selector table which tells the processor which table to use; later POWER designs refine this further. The G5 does not implement a BTIC, probably because it would not be compatible with how dispatch groups work. The G5 can predict up to two branches per cycle and can have up to 16 unresolved branches in flight.

The branch prediction capabilities of these PowerPC chips are not massively different from other architectures'. The G5's ability to keep a massive number of unresolved branch instructions in flight might make it actually seem a bit more subject to such an attack since there are many more opportunities to load victim process data into the cache, but the basic principles at work are much the same as everything else, so none of our chips are particularly vulnerable or resistant in that respect. Where it starts to get interesting, however, is when we talk about indirect branches. There is no way in the Power ISA to directly branch to an address in a register, an unusual absence as such instructions exist in most other contemporary architectures such as x86, ARM, MIPS and SPARC. Instead, software must load the instruction into either of two special purpose registers that allow branches (either the link register "LR" or the counter register "CTR") with a special instruction (mtctr and mtlr, both forms of the general SPR instruction mtspr) and branch to that, which can occur conditionally or unconditionally. (We looked at this in great detail, with microbenchmarks, in an earlier blog post.)

To be able to speculatively execute an indirect branch, even an unconditional one, requires that either LR or CTR be renamed so that its register state can be saved as well, but on PowerPC they are not general purpose registers that can use the regular register rename file like other platforms such as ARM. The G5, unfortunately in this case, has additional hardware to deal with this problem: to back up the 16 unresolved branches it can have in-flight, LR and CTR share a 16-entry rename mapper, which allows the G5 to speculatively execute a combination of up to 16 LR or CTR-referencing branches (i.e., b(c)lr and b(c)ctr). This could allow a lot of code to be run speculatively and change the cache in ways the attacker could observe. Substantial preparation would be required to get the G5's branch history fouled enough to make it mispredict due to its very high accuracy (over 96%), but if it does, the presence of indirect branches will not stop the processor from speculatively executing down what is now the wrong path. This is at least as vulnerable as the known Spectre-afflicted architectures, though the big cost of misprediction on the G5 would make this type of blown speculation especially slow. Nevertheless, virtually all current POWER chips would fall in this hole as well.

But the G3 and G4 situation is very different. The G3 actually delays fetch and execution at a b(c)ctr until the mtctr that leads it has completed, meaning speculative execution essentially halts at any indirect branch. The same applies for the LR, and for the 7400. CTR-based indirect branching is very common in TenFourFox-generated code for JavaScript inline caches, and code such as mtlr r0 ; blr terminates nearly every PowerPC function call. No fetch, and therefore no speculative execution, will occur until the special purpose register is loaded, meaning the proper target must now be known and there is less opportunity for a Spectre-based attack to run. Even if the processor could continue speculation past that point, the G3 and 7400 implement only a single rename register each for LR and CTR, so they couldn't go past a second such sequence regardless.

The 7450 is a little less robust in this regard. If the instruction sequence is an unconditional mtlr blr, the 7450 (and, for that matter, the G5) implements a link stack where the expected return address comes off a stack of predicted addresses from prior LR-modifying instructions. This is enough of a hint on the 7450 G4e to possibly allow continued fetch and potential speculation. However, because the 7450 also has only a single rename register each for LR and CTR, it also cannot speculatively execute past a second such sequence. If the instruction sequence is mtlr bclr, i.e., there is a condition on the LR branch, then execution and therefore speculation must halt until either the mtlr completes or the condition information (CR or CTR) is available to the CPU. But if the special purpose register is the CTR, then there is no address cache stack available, and the G4e must delay at an mtctr b(c)ctr sequence just like its older siblings.

Bottom line? Spectre is still not a very feasible means of attack on Power Macs, as I have stated, though the possibilities are better on the G5 and later Power ISA designs which are faster and have more branch tricks that can be subverted. But the G3 and the G4, because of their limitations on indirect branching, are at least somewhat more resistant to Spectre-based attacks because it is harder to cause their speculative execution pathways to operate in an attacker-controllable fashion (particularly the G3 and the 7400, which do not have a link stack cache). So, if you're really paranoid, dust that old G3 or Sawtooth G4 off. You just might have the Yosemite that manages to survive the computing apocalypse.


Mozilla Marketing Engineering & Ops Blog: Kuma Report, December 2017

ma, 08/01/2018 - 01:00

Here’s what happened in December in Kuma, the engine of MDN Web Docs:

Here’s the plan for January:

Done in December

Purged 162 KumaScript Macros

We moved the KumaScript macros to GitHub in November 2016, and added a new macro dashboard. This gave us a clearer view of macros across MDN, and highlighted that there were still many macros that were unused or lightly used. Reducing the total macro count is important as we change the way we localize the output and add automated tests to prevent bugs.

We scheduled some time to remove these old macros at our Austin work week, when multiple people could quickly double-check macro usage and merge the 86 Macro Massacre pull requests. Thanks to Florian Scholz, Ryan Johnson, and wbamberg, we’ve removed 162 old macros, or 25% of the total at the start of the month.


Increased Availability of MDN

We made some additional changes to keep MDN available and to reduce alerts. Josh Mize added rate limiting to several public endpoints, including the homepage and wiki documents (PR 4591). The limits should be high enough for all regular visitors, and only high-traffic scrapers should be blocked.

I adjusted our liveness tests, but kept the database query for now (PR 4579). We added new thresholds for liveness and readiness in November, and these appear to be working well.

We continue to get alerts about MDN during high-traffic spikes. We’ll continue to work on availability in 2018.

Improved Kuma Deployments

Ryan Johnson worked to make our Jenkins-based tests more reliable. For example, Jenkins now confirms that MySQL is ready before running tests that use the database (PR 4581). This helped find an issue with the database being reused, and we’re doing a better job of cleaning up after tests (PR 4599).
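The "confirm MySQL is ready before running tests" pattern from PR 4581 is a bounded retry loop. The helper below is a generic sketch, not Kuma's actual CI code; the probe function and timings are placeholders.

```python
import time

def wait_for(probe, timeout=30.0, interval=0.5,
             clock=time.monotonic, sleep=time.sleep):
    """Poll `probe()` until it returns True or `timeout` seconds elapse."""
    deadline = clock() + timeout
    while True:
        if probe():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)
```

In CI the probe would wrap something like a `mysqladmin ping` subprocess call, so the test run blocks until the database container accepts connections instead of failing on the first query.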

Ryan continued developing branch-based deployments, making them more reliable (PR 4587) and expanding to production deployments (PR 4588). We can now deploy to staging and production by merging to stage-push and prod-push for Kuma as well as KumaScript, and we can monitor the deployment with bot notifications in #mdndev. This makes pushes easier and more reliable, and gets us closer to an automated deployment pipeline.

Added Browser Compatibility Data

Daniel D. Beck continued to convert CSS compatibility data from the wiki to the repository, and wrote 35 of the 57 PRs merged in December. Thanks to Daniel for doing the conversion work, and thanks to Jean-Yves Perrier for many reviews and merges over the holiday break.

Stephanie Hobson continued to refine the design of the new compatibility tables, including an icon for the Samsung Internet Browser and an updated Firefox icon (Kuma PR 4605). Florian Scholz added a legend, to explain the notation (KumaScript PR 437). We’re getting closer to shipping these to all users. Please give any feedback at Beta Testing New Compatibility Table on Discourse.

Said Goodbye to Stephanie Hobson

Stephanie Hobson is moving to the bedrock team in January, where she’ll help maintain and improve www.mozilla.org. Schalk Neethling will take over as the primary front-end developer for MDN Web Docs.

Over the past 3½ years, Stephanie has had a huge impact on MDN. She shared her expertise on accessibility, multi-language support, readable HTML tables and all things Google Analytics. She advocated for the users during the spam mitigations and Persona shutdown. She’d argue for design changes from a web developer’s perspective, and back it up with surveys and interviews.

She’s also a talented developer, authoring over 400 PRs. She’s responsible for a lot of the changes on MDN in 2017:


Schalk has been working on MDN for most of 2017. He’s been focused on the interactive examples project that fully shipped in December. He’s also been reviewing front-end PRs, and his feedback and suggestions have improved the front-end code for months. In December, Stephanie and Schalk worked closely to make a smooth transition, which included getting all the JavaScript to pass eslint tests (PR 4596 and PR 4597).

We look forward to seeing what Stephanie will do on bedrock, and we look forward to Schalk’s work and fresh perspective on MDN Web Docs.

Shipped Tweaks and Fixes

There were 209 PRs merged in December (which was supposed to be a light month):

Several of these were from first-time contributors:

Other significant PRs:

Planned for January

We’re continuing existing projects like BCD in January, and starting some larger projects that will begin to ship in February.

Prepare for a CDN

We’ve exhausted the easy solutions for increasing availability on MDN. We believe the next step is to put MDN behind a Content Distribution Network, or CDN. Once we have everything set up, most requests won’t even hit the Kuma engine, but instead will be handled by caching servers around the world. We expect it will take 1–2 months before the majority of requests are served by the CDN.

A first step is to reduce the page variants sent to anonymous users, so that the CDN edge servers can handle most requests. Schalk Neethling has been removing waffle flags or migrating them to switches over many PRs, such as PR 4561.

In January, Ryan Johnson will start adding the caching headers needed for the CDN to store and serve the pages without contacting Kuma.
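Those caching headers are typically `Cache-Control` values telling the CDN's shared caches how long a response may be served without revisiting the origin. A minimal sketch of building such a value follows; the max-age numbers are invented for illustration and are not MDN's real policy.

```python
def cache_control(public=True, max_age=300, s_maxage=None):
    """Build a Cache-Control header value for a CDN-cacheable response."""
    parts = ["public" if public else "private"]
    parts.append(f"max-age={max_age}")
    if s_maxage is not None:
        # s-maxage applies only to shared caches such as CDN edge servers,
        # letting the edge cache longer than individual browsers do.
        parts.append(f"s-maxage={s_maxage}")
    return ", ".join(parts)
```

With headers like `public, max-age=60, s-maxage=86400` on anonymous pages, edge servers can answer most traffic without contacting Kuma at all.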

We believe a CDN will reduce downtime and alerts from increased traffic. More importantly, we expect it will speed up MDN Web Docs for visitors outside the US.

Ship More Interactive Examples

We launched the interactive example editor on a dozen pilot pages, and the analytics look good. Just before the holiday break, we decided we can ship the interactive example editor to any MDN page. You can see it on CSS background-size, JavaScript Array.slice(), and more.

background-size array-slice

We have many more interactive examples ready to publish, including many JavaScript examples by Mark Boas. We’ll roll these and more out to MDN. We’ll also start on HTML interactive examples, and we’re planning to ship them in February. Follow mdn/interactive-examples to see the progress and learn how to help.

Update Django to 1.11

MDN Web Docs is built on top of Django. We’re currently using Django 1.8, first released in 2015. It is a Long-Term Release (LTS) that will be supported with security updates until at least April 2018. Django 1.11, released in 2017, is the new LTS release, and will be supported until at least April 2020. In January, we’ll focus on updating our code and third-party libraries so that we can quickly make the transition to 1.11.

For now, our plan is to stay on Django 1.11 until April 2019, when Django 2.2, the next LTS release, is shipped. Django 2 requires Python 3, and it may take a lot of effort to update Kuma and switch to third-party libraries that support Python 3. We’ll make a lot of progress during the 1.11 transition, and we’ll monitor our Django 2 and Python 3 compatibility in 2018.
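A standard way to prepare an LTS-to-LTS upgrade like this is to escalate deprecation warnings into hard errors in the test suite, so code that will break on the next release fails now. This is a generic technique, not Kuma's actual configuration:

```python
import warnings

def strict_deprecations():
    """Turn DeprecationWarning into a hard error so the test suite fails fast."""
    warnings.simplefilter("error", DeprecationWarning)
```

The same effect is available from the command line by running the test suite under `python -W error::DeprecationWarning`, which surfaces APIs removed in the newer Django before the switch is made.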

Plan for 2018

We have a lot of things we have to do in Q1 2018, such as the CDN and Django 1.11 update. We postponed a detailed plan for 2018, and instead will spend some of Q1 discussing goals and priorities. During our discussions in December, a few themes came up.

For the MDN Web Docs product, the 2018 theme is Reach. We want to reach more web developers with MDN Web Docs data, and earn a key place in developers’ workflows. Sometimes this means making the best place to find the information, and sometimes it means delivering the data where the developer works. We’re using interviews and surveys to learn more and design the best experience for web developers.

For the technology side, the 2018 theme is Simplicity. There are many seldom-used Kuma features that require a history lesson to explain. These make it more complicated to maintain and improve the web site. We’d like to retire some of these features, simplify others, and make it easier to work on the code and data. We have ideas around zone redirects, asset pipelines, and translations, and we hope to implement these in 2018.

One thing that has gotten more complex in 2017 is code contribution. We’re implementing new features like browser-compat-data and interactive-examples as their own projects. Kuma is usually not the best place to contribute, and it can be challenging to discover where to contribute. We’re thinking through ways to improve this in 2018, and to steer contributors’ effort and enthusiasm where it will have the biggest impact.

Categorieën: Mozilla-nl planet

Nick Cameron: Rust 2018

zo, 07/01/2018 - 23:39

I want 2018 to be boring. I don't want it to be slow, I want lots of work to happen, but I want it to be 'boring' work. We got lots of big new things in 2017 and it felt like a really exciting year (new language features, new tools, new libraries, whole new ways of programming (!), new books, new teams, etc.). That is great and really pushed Rust forward, but I feel we've accumulated a lot of technical and social debt along the way. I would like 2018 to be a year of consolidation on 2017's gains, of paying down technical debt, and polishing new things into great things. More generally, we could think of a tick-tock cadence to Rust's evolution - 2015 and 2017 were years with lots of big, new things, 2016 and 2018 should be consolidation years.

Some specifics

Not in priority order.

  • finish design and implementation of 'in flight' language features:
    • const exprs
    • modules and crates
    • macros
    • default generics
    • ergonomics initiative things
    • impl Trait
    • specialisation
    • more ...
    • stabilisation debt (there are a lot of features that are 'done' but need stabilising. This is actually a lot of work since the risk is higher at this stage than at any other point in the language design process, so although it looks like just ticking a box, it takes a lot of time and effort).
  • async/await - towards a fully integrated language feature and complete library support, so that Rust is a first choice for async programming.
  • unsafe guidelines - we need this for reliable and safe programming and to facilitate compiler optimisations. There is too much uncertainty right now.
  • web assembly support - we started at the end of 2017, there is lots of opportunity for Rust in the space.
  • compiler performance - we made some big steps in 2017 (incremental compilation), but there is lots of 'small' work to do before compiling Rust programs is fast in the common case. It's also needed for a great IDE experience.
  • error handling - the Failure library is a good start, I think it is important that we have a really solid story here. There are other important pieces too, such as ? in main, stable catch blocks, and probably some better syntax for functions which return Results.
  • IDE support - we're well on our way and made good progress in 2017. We need to release the RLS, improve compiler integration, and then there's lots of opportunity for improving the experience, for example with debugger integration and refactoring tools.
  • mature other tools (Rustfmt and Clippy should both have 1.0 releases and we should have a robust distribution mechanism)
  • Cargo
    • build system integration (we planned this out in 2017, but didn't start implementation)
    • ongoing improvements (in particular I think we need to address crate squatting - we've shied away from curating (except for security issues), but I think there is lots of low-hanging fruit for low-key moderation/curation which would drastically improve the ecosystem)
    • Xargo integration
    • rustup integration (see below)
  • Rustdoc - there's been some exciting work on the internals in 2017, I think we could make some dramatic changes to incorporate guide-like text, smart source code exploration, and easier navigation.
  • debugging
  • learning resources for intermediate-level programmers - 2017 has been great for beginner Rust programmers, in 2018 I'd like to see more documentation, talks, etc. for intermediate level programmers so that as you grow as a Rust programmer you don't fall off a support cliff, especially if you prefer not to actively engage on irc or other 'live' channels.
  • team structure - we've expanded our team structure considerably in 2017, adding several new teams and lots of new team members. I think this has all been an improvement, but it feels like unfinished work - several of the teams still feel like they're getting off the ground while others feel too big and broad.
  • polish the RFC process - the RFC process is one of Rust's great strengths and really helps where strong ahead-of-time design is required. However, it also feels pretty heavyweight, can be an overwhelming time sink, and has been a real source of stress and stop-energy on some occasions. I think we need to re-balance things a little bit, though I'm not really sure how.
  • communication channels - we have a lot of them, and none of them feel really great - many people dislike irc, it is a barrier to entry for some people, and it is hard to moderate. The discuss forums are pretty good, but don't facilitate interactive communication very well. GitHub (at least the main Rust repo) can be overwhelming and it's easy to miss important information. We tried Gitter for the impl period and we've used Slack for some minor things; both seemed only OK, had their own bugs and problems, and didn't offer much over irc, plus it meant more channels to keep an eye on. r/rust is in a weird semi-official state and some people really dislike Reddit. I don't think there is a silver bullet here, but I think we can polish and improve.
Some new things

OK, there are some pressing new things that should happen. I would like to keep this list short though:

  • new epoch - it's time to do this. We should make official what shouldn't be used any more and make room for new features to be implemented 'properly'.
  • internationalisation (i18n) - I think it's really important that software is usable by as many people as possible, and I think software ecosystems do better when the tools to do so are central and official. We should develop libraries and language features to help internationalise and localise programs.
  • cargo/rustup integration - there is no reason for these to be separate programs and it increases friction for new programmers. Although it is a relatively minor thing, I think it has a big impact.
  • testing - Rust's built-in unit tests are really neat, but we also need to facilitate more powerful testing frameworks.

That's a lot of stuff! And I've probably missed some libraries and community things because I'm not really up to speed with what is going on there. I think it is about right for a year's work, but only if we can resist the allure of new, shiny things on top of that lot.

I'm probably a bit biased, but tools (including Cargo) seem to be an area where there is a lot of work to do and that work is important. It is also an area which feels 'under-staffed', so we either need to encourage more people to focus on tools or cut back on what we want to achieve there.

The goal

At the end of the year I want Rust to feel like a really solid, reliable choice for people choosing a programming language. I want to build on the mostly excellent reputation we have for backwards compatibility and stability without stagnation. I want the community leadership to feel like a well-functioning machine, and that the larger community feels well-represented and can trust the leadership teams. I want to feel like there are a much smaller number of projects in progress, and much fewer unanswered questions (and more projects being finished or reaching maturity). I want 'average' users to feel a good balance between innovation and stability.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Ancient Browser-Wars History: MD5-Hashed Posts Declassified

zo, 07/01/2018 - 06:40

2007-2008 was an interesting time for Mozilla. In the market, Firefox was doing well, advancing steadily against IE. On the technical front we were doing poorly. Webkit was outpacing us in performance and rapid feature development. Gecko was saddled with design mistakes and technical debt, and Webkit captured the mindshare of open-source contributors. We knew Google was working on a Webkit-based browser which would probably solve Webkit's market-share problems. I was very concerned and, for a while, held the opinion that Mozilla should try to ditch Gecko and move everything to Webkit. For me to say so loudly would have caused serious damage, so I only told a few people. In public, I defended Gecko from unfair attacks but was careful not to contradict my overall judgement.

I wasn't the only one to be pessimistic about Gecko. Inside Mozilla, under the rubric of "Mozilla 2.0", we thrashed around for considerable time trying to come up with short-cuts to reducing our technical debt, such as investments in automatic refactoring tools. Outside Mozilla, competitors expected to rapidly outpace us in engine development.

As it turned out, we were all mostly wrong. We did not find any silver bullets, but just by hard work Gecko mostly kept up, to an extent that surprised our competitors. Weaknesses in Webkit — some expedient shortcuts taken to boost performance or win points on specific compatibility tests, but also plain technical debt — became apparent over time. Chrome boosted Webkit, but Apple/Google friction also caused problems that eventually resulted in the Blink fork. The reaction to Firefox 57 shows that Gecko is still at least competitive today, even after the enormous distraction of Mozilla's failed bet on FirefoxOS.

One lesson here is even insiders can be overly pessimistic about the prospects of an old codebase; dedicated, talented staff working over the long haul can do wonders, and during that time your competitors will have a chance to develop their own problems.

Another lesson: in 2007-2008 I was overly focused on toppling IE (and Flash and WPF), and thought having all the open-source browsers sharing a single engine implementation wouldn't be a big problem for the Web. I've changed my mind completely; the more code engines share, the more de facto standardization of bugs we would see, so having genuinely separate implementations is very important.

I'm very grateful to Brendan and others for disregarding my opinions and not letting me lead Mozilla down the wrong path. It would have been a disaster for everyone.

To let off steam, and leave a paper trail for the future, I wrote four blog posts during 2007-2008 describing some of my thoughts, and published their MD5 hashes. The aftermath of the successful Firefox 57 release seems like an appropriate time to harmlessly declassify those posts. Please keep in mind that my opinions have changed.

  1. January 21, 2007: declassified
  2. December 1, 2007: declassified
  3. June 5, 2008: declassified
  4. September 7, 2008: declassified
Categorieën: Mozilla-nl planet