
Air Mozilla: Mozilla Weekly Project Meeting, 31 Jul 2017

Mozilla planet - ma, 31/07/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

William Lachance: mozregression's new mascot

Mozilla planet - ma, 31/07/2017 - 17:32

Spent a few hours this morning on a few housekeeping issues with mozregression. The web site was badly in need of an update (it was full of references to obsolete stuff like B2G), and the usual pile of fixes motivated a new release of the actual software. But most importantly, mozregression now has a proper application icon / logo, thanks to Victoria Wang!

One of the nice parts about working at Mozilla is the flexibility it offers to just hack on stuff that’s important, whether or not it’s part of your formal job description. Maintaining mozregression is pretty far outside my current set of responsibilities (or even interests), but I keep it going because it’s a key tool used by developers here and no one else seems willing to take it over. Fortunately, tools like appveyor and pypi keep the time suckage to a mostly-reasonable level.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Tour the latest features of the CSS Grid Inspector, July 2017

Mozilla planet - ma, 31/07/2017 - 16:00

We began work on a developer tool to help with understanding and using CSS Grid over a year ago. In March, we shipped the first version of a Grid Inspector in the Firefox DevTools along with CSS Grid. Now significant new features are landing in Firefox Nightly. Here’s a tour of what’s arrived in July 2017.

Download Firefox Nightly (if you don’t have it already) to get access to the latest and greatest, and to keep up with the continuing improvements.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Selecting A Compression Algorithm For rr

Mozilla planet - ma, 31/07/2017 - 05:56

rr's traces are large. Memory-mapped files account for a lot of that, but the most efficient way to handle them is "zero copy" file cloning, so we don't want to compress them during recording. Most of the rest is recordings of data copied into tracee address spaces by the kernel, plus snapshots of registers, and these data are extremely compressible — often containing long runs of zeroes, for example. For a long time rr has used zlib to compress the non-mapped-file trace data, and zlib's 'deflate' algorithm often achieves compression of 8x or more.
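As a rough illustration of the chunked scheme (rr itself is C++ and uses zlib's C API; this Python sketch only mirrors the idea of compressing independent 1MB chunks):

```python
import zlib

def compress_chunks(data: bytes, chunk_size: int = 1 << 20, level: int = 6) -> list[bytes]:
    """Compress data as independent fixed-size chunks.

    Compressing each chunk separately means chunks can be written and
    decompressed independently (and in parallel), at a small cost in ratio.
    """
    return [
        zlib.compress(data[i:i + chunk_size], level)
        for i in range(0, len(data), chunk_size)
    ]

# Trace-like data with long runs of zeroes compresses extremely well.
trace = b"\x00" * (4 << 20)
compressed = compress_chunks(trace)
ratio = len(trace) / sum(len(c) for c in compressed)
```

On highly repetitive data like this, the per-chunk ratio is far above the 8x figure quoted for real trace data.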

Of course zlib is pretty old and significantly better algorithms exist, so now seems like a good time to reevaluate that decision. I used the Squash framework to compare zlib to two contenders, brotli and zstd, on actual rr trace data: a single, fairly short Firefox run, 828MB uncompressed, treated as independent 1MB chunks because that's what rr does. Here are the results:

I've omitted compression levels that took more than 20 seconds to compress the data. Currently rr uses zlib level 6, which takes just over 12 seconds to compress the data. Data compression occurs in parallel with the rest of recording, and uses multiple cores when it needs to, so in practice is seldom a performance bottleneck.

On this data, both brotli and zstd beat zlib by a significant margin, so we're definitely leaving some performance on the table by sticking with zlib. In particular, given the same time budget, zstd can shave 14% off the size of the non-mapped-file trace data, and brotli can shave off 17%. Alternatively, for the same trace size we could use much less CPU time — zstd level 1 compresses slightly better than zlib level 6, at 10x the speed!

For rr I think brotli level 5 is an obvious choice. For some reason there's a sudden improvement in compression at level 5, where it passes zstd and reaches roughly its optimal compression given a reasonable time budget. At level 5 we're shaving 17% off the current size and also taking 32% off the CPU time.

Apart from the performance, brotli also has a better licensing story. zstd has Facebook's standard patent license, which terminates if you sue Facebook for any patent infringement, and some organisations aren't comfortable with that. Apparently people have done patent searches and haven't found any Facebook patents covering zstd, but that is not wholly reassuring (while also being mystifying — if they're not applying for relevant patents, why not make that clear?). On the other hand, Google has made a clear commitment to license brotli royalty-free with no such conditions. Of course there could be third-party patents, but if they became a problem it would be easy to switch rr's algorithm (especially compared to the trouble they would cause for Web sites and browsers!).

Of course there are lots of other compression algorithms I could evaluate, but I guess if there are any further gains to be had, they would be very minor.

Update Unfortunately Ubuntu doesn't have a brotli library package. (Fedora does.) So, using brotli would mean everyone building rr on Ubuntu has to build brotli themselves first, or we vendor brotli into rr (or we do something truly insane like have rr pull and build brotli at build time if necessary). None of these approaches are appealing :-(. I guess there's also "rewrite rr in Rust so we can use cargo to have reasonable dependency management", which is appealing but not currently practical.

I'm leaning towards vendoring brotli into rr.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Upstream Stable Kernels Work With rr Again

Mozilla planet - za, 29/07/2017 - 01:22

Greg K-H has released stable Linux kernels 3.18.63, 4.4.79, 4.9.40, and 4.12.4, containing a (backout) fix for the regression that broke rr. 4.13-rc2 also contains the fix. 4.11 was just declared end-of-life so it will not ever be officially fixed.

Obviously distros still have to produce kernel updates containing the fix, so we're not quite out of the woods yet, but that should be soon.

I'm holding off doing the rr 4.6.0 release until distro updates that work with rr have been out for a little while. To the (limited) extent possible I'd like to avoid people trying rr while it doesn't work on their kernel.

Categorieën: Mozilla-nl planet

Air Mozilla: Localization Conference - 20170727

Mozilla planet - vr, 28/07/2017 - 23:21

Localization Conference - 20170727 Localization Conference

Categorieën: Mozilla-nl planet

The Mozilla Blog: How Could You Use a Speech Interface?

Mozilla planet - vr, 28/07/2017 - 18:10

Last month in San Francisco, my colleagues at Mozilla took to the streets to collect samples of spoken English from passers-by. It was the kickoff of our Common Voice Project, an effort to build an open database of audio files that developers can use to train new speech-to-text (STT) applications.

What’s the big deal about speech recognition?

Speech is fast becoming a preferred way to interact with personal electronics like phones, computers, tablets and televisions. Anyone who’s ever had to type in a movie title using their TV’s remote control can attest to the convenience of a speech interface. According to one study, it’s three times faster to talk to your phone or computer than to type a search query into a screen interface.

Plus, the number of speech-enabled devices is increasing daily, as Google Home, Amazon Echo and Apple HomePod gain traction in the market. Speech is also finding its way into multi-modal interfaces, in-car assistants, smart watches, lightbulbs, bicycles and thermostats. So speech interfaces are handy — and fast becoming ubiquitous.

The good news is that a lot of technical advancements have happened in recent years, so it’s simpler than ever to create production-quality STT and text-to-speech (TTS) engines. Powerful tools like artificial intelligence and machine learning, combined with today’s more advanced speech algorithms, have changed our traditional approach to development. Programmers no longer need to build phoneme dictionaries or hand-design processing pipelines or custom components. Instead, speech engines can use deep learning techniques to handle varied speech patterns, accents and background noise – and deliver better-than-ever accuracy.

The Innovation Penalty

There are barriers to open innovation, however. Today’s speech recognition technologies are largely tied up in a few companies that have invested heavily in them. Developers who want to implement STT on the web are working against a fractured set of APIs and support. Google Chrome supports an STT API that is different from the one Apple supports in Safari, which is different from Microsoft’s.

So if you want to create a speech interface for a web application that works across all browsers, you would need to write code that would work with each of the various browser APIs. Writing and then rewriting code to work with every browser isn’t feasible for many projects, especially if the code base is large or complex.

There is a second option: You can purchase access to a non-browser-based API from Google, IBM or Nuance. Fees for this run roughly one cent per invocation. If you go this route, then you get one stable API to write to. But at one cent per utterance, those fees can add up quickly, especially if your app is wildly popular and millions of people want to use it. This option has a success penalty built into it, so it’s not a solid foundation for any business that wants to grow and scale.
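To make the success penalty concrete, here is a back-of-the-envelope calculation (illustrative numbers only — the per-invocation fee and usage figures are assumptions, not published pricing):

```python
# Hypothetical cost model: one cent per STT invocation.
FEE_CENTS_PER_INVOCATION = 1

def monthly_cost_dollars(users: int, invocations_per_user_per_day: int,
                         days: int = 30) -> float:
    """Total monthly API fees in dollars under the per-invocation model."""
    cents = users * invocations_per_user_per_day * days * FEE_CENTS_PER_INVOCATION
    return cents / 100

# A modest app vs. a wildly popular one, at 5 utterances per user per day:
small = monthly_cost_dollars(10_000, 5)       # $15,000 per month
popular = monthly_cost_dollars(1_000_000, 5)  # $1,500,000 per month
```

The cost scales linearly with success, which is exactly the penalty described above.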

Opening Up Speech on the Web

We think now is a good time to try to open up the still-young field of speech technology, so more people can get involved, innovate, and compete with the larger players. To help with that, the Machine Learning team in Mozilla Research is working on an open source STT engine. That engine will give Mozilla the ability to support STT in our Firefox browser, and we plan to make it freely available to the speech developer community, with no access or usage fees.

Secondly, we want to rally other browser companies to support the Web Speech API, a W3C community group specification that can allow developers to write speech-driven interfaces that utilize any STT service they choose, rather than having to select a proprietary or commercial service. That could open up a competitive market for smart home hubs–devices like the Amazon Echo that could be configured to communicate with one another, and other systems, for truly integrated speech-responsive home environments.

Where Could Speech Take Us?

Voice-activated computing could do a lot of good. Home hubs could be used to provide safety and health monitoring for ill or elderly folks who want to stay in their homes. Adding Siri-like functionality to cars could make our roads safer, giving drivers hands-free access to a wide variety of services, like direction requests and chat, so eyes stay on the road ahead. Speech interfaces for the web could enhance browsing experiences for people with visual and physical limitations, giving them the option to talk to applications instead of having to type, read or move a mouse.

It’s fun to think about where this work might lead. For instance, how might we use silent speech interfaces to keep conversations private? If your phone could read your lips, you could share personal information without the person sitting next to you at a café or on the bus overhearing. Now that’s a perk for speakers and listeners alike.

Speech recognition using lip-reading

Want to participate? We’re looking for more folks to participate in both open source projects: STT engine development and the Common Voice application repository.

If programming is not your bag, you can always donate a few sentences to the Common Voice Project. You might read: “It made his heart rise into his throat” or “I have the diet of a kid who won $20.” Either way, it’s quick and fun. And it helps us offer developers an open source option that’s robust and affordable.

The post How Could You Use a Speech Interface? appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Firefox Test Pilot: Test Pilot on Campus

Mozilla planet - vr, 28/07/2017 - 16:33

Test Pilot can never have too many good ideas! That was the original thought behind co-hosting a 12-week course with Tatung University (TTU)’s Department of Media Design in Taiwan. We expected to cultivate new ideas by guiding students through a simplified UX design process, and we did!

Scope of collaboration

In 2017, the Campus Seeding program in Taipei encouraged us to grow our influence in schools, so the Taipei UX team decided to work with TTU’s Professor Chia-yin Yu and Professor Peirong Cheng as our first step to creating a win-win collaboration: students learn new things, Mozilla collects new ideas. After several meetings, we finalized the course information and the modality of cooperation. There were 12 graduate students from diverse backgrounds who were divided into four teams.

Taipei UX team gave three in-class lessons covering our design process, including user research, a design workshop, prototyping, and user testing.

Professors shared relevant knowledge to form a compact program. Students completed assignments in Google Docs so that we could review and give feedback online. At the end of this semester, we asked students to give presentations in the Mozilla Taipei office and invited Firefox UX designers to critique.

The simplified UX design process covered during the course

1st lesson: User Research

For the first lesson, I gave an introduction to the Firefox browser and Test Pilot to help students understand who we are and what we do. Next, Ruby Hsu, our Senior User Researcher, gave a User Research 101 lesson and demonstrated interviews, breaking the research method down into step-by-step exercises. After students practiced several rounds of interviews with each other, the assignment was to put what they learned to use and deliver a report which contained user needs and insights for the next lesson: design workshop.

Ruby demonstrating advanced interview skills

2nd lesson: Design Workshop

Juwei Huang, our Senior User Experience Designer, with support from me, UX designers Tina Hsieh and Tori Chen, prepared a series of brainstorming tools to help students diverge from their reports and converge to concrete design proposals. With students’ research reports posted on the wall, the four of us led each team to practice various brainstorming exercises, including affinity diagramming, How Might We, and 3–12–3 brainstorming.

Each team got a UX designer-mentor

3rd lesson: Prototyping & User Testing

In the last lesson, Mark Liang, our UX Designer/Prototyper, introduced different prototyping instruments. Mark also elaborated on the importance and practice of user testing, building on Ruby’s lesson. From instruction to execution, we guided students to build paper prototypes based on their proposals and tested them with each other.

Students having fun working on paper prototypes

Final challenge: Presentations

At the end of the semester, we invited students to the Mozilla Taipei office. We gave an office tour to talk about Mozilla’s vision, manifesto, and the history of the Taipei office. More importantly, students experienced how we review and critique at Mozilla by delivering their final presentations and sharing what they learned. The participation from various Mozillians was invaluable. John Gruen, Test Pilot Product Manager, reviewed presentations from the product perspective. Philipp Sackl, Firefox Product Design Lead, and Michael Verdi, Firefox Senior Interaction Designer, shared more international perspectives, not to mention the helpful critique from Taipei UX team members.

Students listening to Mozilla staff during critique

Looking ahead

To wrap up, it was a fabulous journey for students and for us. Thanks to everyone who contributed time on this project and joined us to learn about design process. We’ve collected potential experiment ideas and compelling proposals for the next phase of Test Pilot, and we’ll keep collaborating with schools and students for more fresh ideas. We can never have too many good ideas, right?

Test Pilot on Campus was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Adrian Gaudebert: Processeer - a Powerful and Flexible Report Builder for JSON APIs

Mozilla planet - vr, 28/07/2017 - 14:08

More than 2 years ago, I wrote about a prototype I built, called Spectateur, that allowed users of Socorro to create custom reports for their needs. The idea was to make it easy to write a script that pulls data from a public API, transforms it and then displays it as a nice table or chart. I strongly believe in this idea, and have been making various progress on the concept over the last 2 years, until I finally put together a nicer, more complete prototype at the end of 2016. I called it Processeer, and it's available at

What is Processeer solving?

Processeer aims to address 2 things: the burden of creating custom reports, and the difficulty of sharing them with other people. First, it simplifies the process of writing scripts that interpret data from APIs, and provides tools to show that data in a beautiful fashion. Second, it makes it easy to share both the results of such scripts and the scripts themselves with other people, encouraging them to reuse and improve existing reports.

Here are few comparison points that might help you understand what it is. Do you know about re:dash? It's kind of the same thing, except it doesn't talk to a database but to APIs, and there's some powerful composition possibilities in Processeer. Maybe you've heard of Processeer wants to give you the same kind of easy-sharing reports.

How about a simple example? Let's say you're a Lord of the Rings fan and you want to know at a glance the distribution of each of Middle Earth's peoples in the community of the Ring as well as the number of Rings of Power each of those peoples has. Well, here's a nice line chart that shows just that.

Right, that's cheating: that last one doesn't even pull data from an API. Alright, how about something a bit more complex? This one tells you if a bug from bugzilla is still relevant or not, by showing the number of crash reports associated with that bug in the last week: Bug Signature Status. That Report is composed of 3 different Blocks. The first one pulls a list of signatures from crash-stats, the second one pulls the bug title from bugzilla, and the third one pulls from crash-stats the actual number of crash reports for each signature we received from the first Block. If you want to look at how it works, simply log in with a github account and you'll be able to edit the Report and Blocks.

Note that you can type in any bug number in the "bug" filter at the top, and click "Run" to rerun that Report for that bug.

How does it work?

Processeer has a few concepts that need to be explained. First of all, you need to have a data source that is accessible over HTTP and that is serving JSON. That includes, for example, the crash-stats public API, bugzilla's REST API, github's REST API, and many more.

Processeer's high level component is called the Report. It is an ordered list of Blocks (we'll learn about that in just a few moments). When you load a Report page, Processeer is going to run each Block in turn, passing the output of the first Block as the input of the second, and so on. The output of the last Block of the chain is then going to be parsed and displayed as nicely as possible, as a table or as a chart for example.

Blocks are the building elements of Processeer. They are where you put most of the logic: where to pull the data from, what to do with it, and what to return.

The Models of a Block define how the data is pulled. You can set several Models in a Block. Each one is basically a construct to make an HTTP call to some API that returns JSON.

The Controller is a JavaScript function that you write in order to transform the data you get from your Models into what you want your Block to return. It can be a specially formatted object that Processeer will parse and turn into a nice looking chart or table, or it can be an object that should be passed to the next Block in a Report.

And that is where Params come in. Params define the "input" of your Block: you can set default values for some arguments you expect, and if those arguments exist in the output of the previous Block, their values will be passed down for you to use in your current Block. That mechanism allows, for example, having one Block that pulls some data from an API, then another Block that uses that very data in the query of another API request. It also makes Blocks more reusable, and encourages users to keep them as small and focused as possible.
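The chaining mechanism can be sketched in miniature (hypothetical Python; real Processeer Controllers are JavaScript and real Models make HTTP calls, both simplified away here):

```python
def run_report(blocks, params=None):
    """Run Blocks in order; each Block's output becomes the next one's input.

    A Block here is a (defaults, controller) pair: defaults play the role of
    Params, and the controller transforms its input dict into an output dict.
    """
    output = dict(params or {})
    for defaults, controller in blocks:
        # Params mechanism: start from defaults, then override any key
        # that also appears in the previous Block's output.
        block_input = {**defaults,
                       **{k: v for k, v in output.items() if k in defaults}}
        output = controller(block_input)
    return output

# Block 1 "pulls" signatures for a bug; Block 2 counts crash reports per signature.
fetch = ({"bug": 0},
         lambda p: {"signatures": [f"sig-{p['bug']}-a", f"sig-{p['bug']}-b"]})
count = ({"signatures": []},
         lambda p: {"counts": {s: 1 for s in p["signatures"]}})
result = run_report([fetch, count], {"bug": 1234})
```

The "bug" Param of the first Block is filled from the Report's filters, and the "signatures" Param of the second Block is filled from the first Block's output, mirroring the Bug Signature Status example above.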

Schema of how Processeer works, a recap of the previous paragraph.


How do I use it?

Go to and log in using a github account. You will then be able to create new Blocks and Reports. There are some Blocks already, I invite you to take a look at them to see how the system works, and what you can do. There's some documentation on the Processeer github repository, as well as some basic examples to understand the possible outputs.

I am very happy to answer questions if you have any. You can reach out to me via github issues, by email via my contact page, or on IRC (for example in #processeer on Freenode's IRC server).

The future is unwritten

The future of Processeer is unclear. I don't have any users just yet, probably for a lack of advertisement and documentation. But it is hard to invest time into those things when you don't have any users. So, here's my message in a bottle that I am throwing into the sea. If you find Processeer useful and think it can help you, I would love to hear about it. I need fearless explorers ready to dig into it to make the best out of the amazing possibilities it offers.

And if you think it's a brilliant idea and want to contribute, it's all open source (not licensed yet though). :)

Categorieën: Mozilla-nl planet

Marco Castelluccio: Overview of the Code Coverage Architecture at Mozilla

Mozilla planet - vr, 28/07/2017 - 02:00

Firefox is a huge project, consisting of around 20K source files and 3M lines of code (if you only consider the Linux part!), officially supporting four operating systems, and being written in multiple programming languages (C/C++/JavaScript/Rust). We have around 200 commits landing per day in the mozilla-central repository, with developers committing even more often to the try repository. Usually, code coverage analysis is performed on a single language for small/medium size projects. Collecting code coverage information for a project like this is therefore not an easy task.

I’m going to present an overview of the current state of the architecture for code coverage builds at Mozilla.

Tests in code coverage builds are slower than in normal builds, especially when we will start disabling more compiler optimizations to get more precise results. Moreover, the amount of data generated is quite large, each report being around 20 MB. If we had one report for each test suite and each commit, we would have around ~100 MB x ~200 commits x ~20 test suites = ~400 GB per day. This means we are, at least currently, only running a code coverage build per mozilla-central push (which usually contain around ~50 to ~100 commits), instead of per mozilla-inbound commit.

Figure 1: A linux64-ccov build (B) with associated tests, from Treeherder.

Each test machine, e.g. bc1 (first chunk of the Mochitest browser chrome) in Figure 1, generates gcno/gcda files, which are parsed directly on the test machine to generate a LCOV report.

Because of the scale of Firefox, we could not rely on some existing tools like LCOV. Instead, we had to redevelop some tooling to make sure the whole process would scale. To achieve this goal, we developed grcov, an alternative to LCOV written in Rust (providing performance and parallelism), to parse the gcno/gcda files. With the standard LCOV, parsing the gcno/gcda files takes minutes as opposed to seconds with grcov (and, if you multiply that by the number of test machines we have, it becomes more than 24 hours vs around 5 minutes).

Let’s take a look at the current architecture we have in place:

Figure 2: A high-level view of the architecture.

Both the Pulse Listener and the Uploader Task are part of the awesome Mozilla Release Engineering Services. The release management team has been contributing to this project to share code and efforts.

Pulse Listener

We are running a pulse listener process on Heroku which listens to the taskGroupResolved message, sent by TaskCluster when a group of tasks finishes (either successfully or not). In our case, the group of tasks is the linux64-ccov build and its tests (note: you can now easily choose this build on trychooser, run your own coverage build and generate your report. See this page for instructions).

The listener, once it receives the “group resolved” notification for a linux64-ccov build and related tests, spawns an “uploader task”.

The source code of the Pulse Listener can be found here.

Uploader Task

The main responsibility of the uploader task is aggregating the coverage reports from the test machines.

In order to do this, the task:

  1. Clones mozilla-central;
  2. Builds Firefox (using artifact builds for speed); this is currently needed in order to generate the mapping between the URLs of internal JavaScript components and modules (which use special protocols, such as chrome:// or resource://) and the corresponding files in the mozilla-central repository (e.g. resource://gre/modules/Services.jsm → toolkit/modules/Services.jsm);
  3. Rewrites the LCOV files generated by the JavaScript engine for JavaScript code, using the mapping generated in step 2 and also resolving preprocessed files (yes, we do preprocess some JavaScript source files with a C-style preprocessor);
  4. Runs grcov again to aggregate the LCOV reports from the test machines into a single JSON report, which is then sent to and
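The rewriting in step 3 can be illustrated with a small sketch (a hypothetical helper, not the actual tooling, which also handles preprocessed files). LCOV reports record each source file in an `SF:` line, so the rewrite boils down to mapping those paths:

```python
def rewrite_lcov(lcov_text: str, url_map: dict[str, str]) -> str:
    """Rewrite SF: (source file) records in an LCOV report, replacing
    internal chrome:// / resource:// URLs with repository paths.
    Lines other than SF: (DA:, end_of_record, ...) pass through unchanged."""
    out = []
    for line in lcov_text.splitlines():
        if line.startswith("SF:"):
            path = line[3:]
            line = "SF:" + url_map.get(path, path)
        out.append(line)
    return "\n".join(out)

# The mapping generated in step 2, e.g.:
mapping = {"resource://gre/modules/Services.jsm": "toolkit/modules/Services.jsm"}
report = "SF:resource://gre/modules/Services.jsm\nDA:1,5\nend_of_record"
rewritten = rewrite_lcov(report, mapping)
```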

Both and, in order to show source files with coverage overlay, take the contents of the files from GitHub. So, we can’t directly use our Mercurial repository, but we have to rely on our Git mirror hosted on GitHub ( In order to map the mercurial changeset hash associated with the coverage build to a Git hash, we use a Mapper service.

Code coverage results on Firefox code can be seen on:

The source code of the Uploader Task can be found here.

Future Directions

Reports per Test Suite and Scaling Issues

We are interested in collecting code coverage information per test suite. This is interesting for several reasons. First of all, we could suggest which suite developers should run in order to cover the code changed by their patch. Moreover, we can evaluate the coverage of web platform tests and see how they fare against our built-in tests, with the objective of making web platform tests cover as much as possible.

Both and support receiving multiple reports for a single build and showing the information both separately and in aggregate (“flags” on, “jobs” on Unfortunately, both services currently choke when we present them with too much data (our reports are huge, given that our project is huge, and if we send one per test suite instead of one per build… things blow up).

Coverage per Push

Understanding whether the code introduced by a set of patches is covered by tests or not is very valuable for risk assessment. As I said earlier, we are currently only collecting code coverage information for each mozilla-central push, which means around 50-100 commits (e.g., instead of for each mozilla-inbound push (often only one commit). This means we don’t have coverage information for each set of patches pushed by developers.

Given that most mozilla-inbound pushes in the same mozilla-central push will not change the same lines in the same files, we believe we can infer the coverage information for intermediate commits from the coverage information of the last commit.
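Under that disjointness assumption, the inference is straightforward (an illustrative sketch, not the actual implementation): the coverage status of the lines an intermediate push changed can simply be read off the coverage of the final commit.

```python
def covered_lines_for_push(final_coverage: dict[str, set[int]],
                           changed_lines: dict[str, set[int]]) -> dict[str, set[int]]:
    """Given line coverage of the last commit of a mozilla-central push and
    the lines an intermediate push changed, report which of those changed
    lines are covered by tests.

    Only valid under the assumption that later pushes did not touch the
    same lines in the same files (otherwise the final coverage no longer
    reflects the intermediate state)."""
    return {
        path: changed & final_coverage.get(path, set())
        for path, changed in changed_lines.items()
    }

final = {"dom/base/Node.cpp": {10, 11, 42}}    # covered lines at the tip
push = {"dom/base/Node.cpp": {10, 11, 12}}     # lines one push touched
covered = covered_lines_for_push(final, push)  # lines 10 and 11 covered, 12 not
```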

Windows, macOS and Android Coverage

We are currently only collecting coverage information for Linux 64-bit. We are looking into expanding it to Windows, macOS and Android. Help is appreciated!

Support for Rust

Experimental support for gcov-style coverage collection landed recently in Rust. The feature needs to ship in a stable release of Rust before we can use it; this issue is tracking its stabilization.

Categorieën: Mozilla-nl planet

Chris H-C: Another Advantage of Decreasing Data Latency: Flatter Graphs

Mozilla planet - do, 27/07/2017 - 22:22

I’ve muttered before about how difficult it can be to measure application crashes. The most important lesson is that you can’t just count the number of crashes, you must normalize it by some “usage” value in order to determine whether a crashy day is because the application got crashier or because the application was just being used more.

Thus you have a numerator (number of crashes) and a denominator (some proxy of application usage) to determine the crash rate: crashes-per-use.

The current dominant denominator for Firefox is “thousand hours that Firefox is open,” or “kilo-usage-hours (kuh).”
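The rate computation itself is simple; a quick sketch with made-up numbers:

```python
def crash_rate(crashes: int, usage_hours: float) -> float:
    """Main crashes per thousand usage hours (per kuh)."""
    kuh = usage_hours / 1000.0
    return crashes / kuh

# e.g. 350 main-process crashes over 50,000 hours of Firefox use:
rate = crash_rate(350, 50_000)  # 7.0 crashes per kuh
```

The normalization is what makes days comparable: 700 crashes over 100,000 hours is the same rate as 350 over 50,000, even though the raw count doubled.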

The biggest problem we’ve been facing lately is how our numerator (number of crashes) comes in at a different rate and time than our denominator (kilo-usage-hours) due to the former being transmitted nearly-immediately via the “crash” ping and the latter being transmitted occasionally via the “main” ping.

With pingsender now sending most “main” pings as soon as they’re created, our client submission delay for “main” pings is now roughly in line with the client submission delay of “crash” pings.

What does this mean? Well, look at this graph from

Screenshot-2017-7-25 Crash Rates (Telemetry)

This is the Firefox Beta Main Crash Rate (number of main process crashes on Firefox Beta divided by the number of thousands of hours users had Firefox Beta running) over the past three months or so. The spike in the middle is when we switched from Firefox Beta 54 to Firefox Beta 55. (Most of that spike is a measuring artefact due to a delay between a beta being available and people installing it. Feel free to ignore it for our purposes.)

On the left in the Beta 54 data there is a seven-day cycle where Sundays are the lowest point and Saturdays are the highest.

On the right in the Beta 55 data, there is no seven-day cycle. The rate is flat. (It is a little high, but flat. Feel free to ignore its height for our purposes.)

This is because sending “main” pings with pingsender is behaviour that ships in Firefox 55. Starting with 55, instead of having most of our denominator data (usage hours) coming in one day late due to “main” ping delay, we have that data in-sync with the numerator data (main crashes), resulting in a flat rate.

You can see it in the difference between Firefox ESR 52 (yellow) and Beta 55 (green) in the kusage_hours graph also on

Screenshot-2017-7-27 Crash Rates (Telemetry)

On the left, before Firefox Beta 55’s release, they were both in sync with each other, but one day behind the crash counts. On the right, after Beta 55’s release, notice that Beta 55’s cycle is now one day ahead of ESR 52’s.

This results in still more graphs that are quite satisfying. To me at least.

It also, somewhat more importantly, now makes the crash rate graph less time-variable. This reduces cognitive load on people looking at the graphs for explanations of what Firefox users experience in the wild. Decision-makers looking at these graphs no longer need to mentally subtract from the graph for Saturday numbers, adding that back in somehow for Sundays (and conducting more subtle adjustments through the week).

Now the rate is just the rate. And any change is much more likely to mean a change in crashiness, not some odd day-of-week measurement you can ignore.

I’m not making these graphs to have them ignored.

(many thanks to :philipp for noticing this effect and forcing me to explain it)


Categorieën: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Jul. 27, 2017

Mozilla planet - do, 27/07/2017 - 18:00

Reps Weekly Meeting Jul. 27, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categorieën: Mozilla-nl planet

Air Mozilla: Egencia Training: Canada site- Pacific Time

Mozilla planet - do, 27/07/2017 - 12:03

 Canada site- Pacific Time Training and demo of Egencia Canada site (For all residents in Canada)

Categorieën: Mozilla-nl planet

Air Mozilla: Egencia Training: UK site

Mozilla planet - do, 27/07/2017 - 11:56

 UK site Training and demo of Egencia UK site (For residents in the UK)

Categorieën: Mozilla-nl planet

Air Mozilla: Egencia Training: Singapore site

Mozilla planet - do, 27/07/2017 - 11:48

 Singapore site Training and demo of Egencia Singapore site (For residents in Taipei and APAC, excludes Australia and New Zealand) Here is the training video for Egencia...

Categorieën: Mozilla-nl planet

Air Mozilla: Egencia Training: New Zealand site

Mozilla planet - do, 27/07/2017 - 11:43

 New Zealand site Training and demo of Egencia New Zealand site (For residents in New Zealand and Australia)

Categorieën: Mozilla-nl planet

Air Mozilla: Egencia Training: France site

Mozilla planet - do, 27/07/2017 - 11:31

 France site Training and demo of Egencia France site. (For all residents in France)

Categorieën: Mozilla-nl planet

Air Mozilla: Egencia Training: Germany site

Mozilla planet - do, 27/07/2017 - 11:21

 Germany site Training and demo of Egencia Germany site. (For residents in Germany and EMEA)

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR2b1 available (plus: come to VCF!)

Mozilla planet - do, 27/07/2017 - 07:00
TenFourFox Feature Parity Release 2 beta 1 is now available (downloads, hashes, release notes). Besides various and sundry additional optimizations once again shamelessly backported from the Quantum project, this release includes the accelerated AltiVec string matching routine which I will be also hooking up to other parts of the browser in future releases, and reworks the font blacklist so that Apple sites work again (and more efficiently). Release is planned for August 8.

With this beta you will notice I've made a new build available - a true "Debug" build. Since currently we really only have comprehensive test coverage for JavaScript, this gives people a chance to help debug other browser issues without having to build the entire application themselves. I intend to only issue these for beta releases since the idea is to have as few changes as possible between the beta and the release version. Please take heed of this warning: the Debug version is not for mere mortals. Because it has a full symbol table, it could cause your system's crash dump facility to hang if it bombs and you weren't running it within the TenFourFox debugger (and if that happens, you'll have to kill the crash dump or it will peg your CPU; I actually hacked /usr/libexec/crashdump to not run if I've touched /tmp/no_crashdump as a flag). In addition, the debug build is much slower, runs a lot of sanity checking code and generates copious logging output. It is not optimized for any processor and has no AltiVec code, so it will run (badly) on any compatible Power Mac. Please refer to the developer instructions and understand what you're doing before you run it, and if you do run it, I strongly suggest creating a separate profile specifically for debugging if you intend to work with it often. The debugging versions will not be offered from the main TenFourFox page so clueless folks don't grab it by accident.

Next up, FPR3 will have some additional optimizations and performance improvements too, but will be focused instead more towards supporting new features again: in addition to some minor compatibility tweaks, I would like to get enough of our CSS grid support working to be credible and do further improvements to our JavaScript ES6 implementation. More on that a little later.

A schedule note: I'll be demonstrating the Apple Network Server 500 (the original Floodgap server) at this year's Vintage Computer Festival West, August 5 and 6 at the Computer History Museum in Mountain View, CA (last year's exhibit was good fun). The ANS (codenamed "Shiner" after a local brand of beer in Texas, where it was developed) was arguably Apple's first true-UNIX server (A/UX notwithstanding), almost certainly the biggest computer they've ever made, and the last general purpose computer they built that wasn't a Mac. I ran mine almost non-stop from 1998 to 2012 as my main server (14 years), and it came back briefly in 2014 when the POWER6 currently running Floodgap blew its mainboard. I'll have it set up so you can play with it, plus an actual prototype Shiner for display and a couple (haven't decided?) PowerBook clients to demonstrate its unusual software powers. Be there or be less nerdy than I am.

Oh, and at last, Flash is dead (by 2020). But it was dead to us in Power Mac-land a long time ago. Finally the rest of the world catches up.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Let's Never Create An Ad-Hoc Text Format Again

Mozilla planet - do, 27/07/2017 - 06:07

Recently I needed code to store a small amount of data in a file. Instinctively I started doing what I've always done before, which is create a trivial custom text-based format using stdio or C++ streams. But at that moment I had an epiphany: since I was using Rust, it would actually be more convenient to use the serde library. I put the data in a custom struct (EpochManifest), added #[derive(Serialize, Deserialize)] to EpochManifest, and then just had to write:

let f = File::create(manifest_path).expect("Can't create manifest");
serde_json::to_writer_pretty(f, &manifest).unwrap();

and

let f = File::open(&p).expect(&format!("opening {:?}", &p));
let manifest = serde_json::from_reader(f).unwrap();

This is more convenient than hand-writing even the most trivial text (un)parser. It's almost guaranteed to be correct. It's more robust and maintainable. If I decided to give up on human readability in exchange for smaller size and faster I/O, it would only take a couple of changed lines to switch to bincode's compact binary encoding. It prevents the classic trap where the stored data grows in complexity and an originally simple ad-hoc text format evolves into a baroque monstrosity.
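For contrast, here is the kind of hand-rolled text (un)parser the post argues against, as a hypothetical std-only sketch (the EpochManifest fields are invented for illustration; the real struct's contents aren't shown in the post). Even this trivial "key=value" scheme already has to make decisions about unknown keys, missing fields, and parse errors, and every new field touches both functions:

```rust
// A hypothetical ad-hoc text format: one "key=value" pair per line.
#[derive(Debug, PartialEq)]
struct EpochManifest {
    epoch: u64,        // hypothetical field
    trace_dir: String, // hypothetical field
}

fn to_text(m: &EpochManifest) -> String {
    format!("epoch={}\ntrace_dir={}\n", m.epoch, m.trace_dir)
}

fn from_text(s: &str) -> Option<EpochManifest> {
    let mut epoch = None;
    let mut trace_dir = None;
    for line in s.lines() {
        // A line without '=' is malformed; bail out.
        let (k, v) = line.split_once('=')?;
        match k {
            "epoch" => epoch = Some(v.parse().ok()?),
            "trace_dir" => trace_dir = Some(v.to_string()),
            _ => return None, // unknown key: already a versioning headache
        }
    }
    Some(EpochManifest { epoch: epoch?, trace_dir: trace_dir? })
}
```

With serde, both directions collapse into the derive attribute plus the two one-liners above, and escaping, nesting, and error reporting come for free.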

There are libraries to do this sort of thing in C/C++ but I've never used them, perhaps because importing a foreign library and managing that dependency is a significant amount of work in C/C++, whereas cargo makes it trivial in Rust. Perhaps that's why the ISO C++ wiki page on serialization provides lots of gory details about how to implement serialization rather than just telling you to use a library.

As long as I get to keep using Rust I should never create an ad-hoc text format again.

Categorieën: Mozilla-nl planet