
Will Kahn-Greene: Socorro Engineering: June 2019 happenings

Mozilla planet - Tue, 02/07/2019 - 18:00

The Socorro Engineering team covers several projects.

This blog post summarizes our activities in June.

Highlights of June
  • Socorro: Fixed the collector's support of a single JSON-encoded field in the HTTP POST payload for crash reports. This is a big deal because we'll get less junk data in crash reports going forward.
  • Socorro: Reworked how Crash Stats manages featured versions: if the product defines a product_details/PRODUCTNAME.json file, it'll pull from that. Otherwise it calculates featured versions based on the crash reports it's received.
  • Buildhub: deprecated Buildhub in favor of Buildhub2. Current plan is to decommission Buildhub in July.
  • Across projects: Updated tons of dependencies that had security vulnerabilities. It was like a hamster wheel of updates, PRs, and deploys.
  • Tecken: Worked on GCS emulator for local dev environment.
  • All hands discussions:
    • GCP migration plan for Tecken and figuring out what needs to be done.
    • Possible GCP migration schedule for Tecken and Socorro.
    • Migrating applications using Buildhub to Buildhub2 and decommissioning Buildhub in July.
    • What would happen if we switched from Elasticsearch to BigQuery?
    • Switching from Socorro's minidump-stackwalk to minidump-analyzer.
    • Re-implementing the Socorro Top Crashers and Signature reports using Telemetry tools and data.
    • Writing a symbolicator and Socorro-style signature generator in Rust that can be used for crash reports in Socorro and crash pings in Telemetry.
    • The crash ping vs. crash report situation (blog post coming soon).
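The collector change in the first bullet can be pictured roughly like this; the field names here are illustrative, not Socorro's actual schema:

```python
import json

# Hypothetical "before": every crash annotation is its own form field in the
# multipart POST, so stray or malformed fields slip into the crash report.
form_fields = {"ProductName": "Firefox", "Version": "68.0", "UptimeTS": "42"}

# Hypothetical "after": the annotations travel as one JSON-encoded field,
# which the collector can parse and validate in a single step.
payload = {"extra": json.dumps(form_fields)}

decoded = json.loads(payload["extra"])
assert decoded == form_fields
print(sorted(decoded))
```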


Categories: Mozilla-nl planet

The Mozilla Blog: Mozilla joins brief for protection of LGBTQ employees from discrimination

Mozilla planet - Tue, 02/07/2019 - 16:39

Last year, we joined the call in support of transgender equality as part of our longstanding commitment to diversity, inclusion and fostering a supportive work environment. Today, we are proud to join over 200 companies, big and small, as friends of the court, in a brief brought to the Supreme Court of the United States.

Proud to reaffirm that everyone deserves to be protected from discrimination, whatever their sexual orientation or gender identity.

Diversity fuels innovation & competition. It's a necessary part of a healthy, supportive workplace and of a healthy internet.

— Mozilla (@mozilla) July 2, 2019

The brief says, in part:

“Amici support the principle that no one should be passed over for a job, paid less, fired, or subjected to harassment or any other form of discrimination based on their sexual orientation or gender identity. Amici’s commitment to equality is violated when any employee is treated unequally because of their sexual orientation or gender identity. When workplaces are free from discrimination against LGBT employees, everyone can do their best work, with substantial benefits for both employers and employees.”

The post Mozilla joins brief for protection of LGBTQ employees from discrimination appeared first on The Mozilla Blog.


Mozilla Open Policy & Advocacy Blog: Building on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms

Mozilla planet - Tue, 02/07/2019 - 11:07

In April 2019 the UK government unveiled plans for sweeping new laws aimed at tackling illegal and harmful content and activity online, described by the government as ‘the toughest internet laws in the world’. While the UK government’s proposal contains some interesting avenues of exploration for the next generation of European content regulation laws, it also includes several critical weaknesses and grey areas. We’ve just filed comments with the government that spell out the key areas of concern and provide recommendations on how to address them.

The UK government’s white paper responds to legitimate public policy concerns around how technology companies deal with illegal and harmful content online. We understand that in many respects the current European regulatory paradigm is not fit for purpose, and we support an exploration of what codified content ‘responsibility’ might look like in the UK and at EU-level, while ensuring strong and clear protections for individuals’ free expression and due process rights.

As we have noted previously, we believe that the white paper’s proposed regulatory architecture could have some potential. However, the UK government’s vision to put this model into practice contains serious flaws. Here are some of the changes we believe the UK government must make to its proposal to avoid the practical implementation pitfalls:

  • Clarity on definitions: The government must provide far more detail on what is meant by the terms ‘reasonableness’ and ‘proportionality’, if these are to serve as meaningful safeguards for companies and citizens. Moreover, the government must clearly define the relevant ‘online harms’ that are to be made subject to the duty of care, to ensure that companies can effectively target their trust and safety efforts.
  • A rights-protective governance model:  The regulator tasked with overseeing the duty of care must be truly co-regulatory in nature, with companies and civil society groups central to the process by which the Codes of Practice are developed. Moreover, the regulator’s mission must include a mandate to protect fundamental rights and internet openness, and it must not have power to issue content takedown orders.
  • A targeted scope: The duty of care should be limited to online services that store and publicly disseminate user-uploaded content. There should be clear exemptions for electronic communications services, internet service providers, and cloud services, whose operational and technical architecture are ill-suited and problematic for a duty of care approach.
  • Focus on practices over outcomes:  The regulator’s role should be to operationalise the duty of care with respect to companies’ practices – the steps they are taking to reduce ‘online harms’ on their service. The regulator should not have a role in assessing the legality or harm of individual pieces of content. Even the best content moderation systems can sometimes fail to identify illegal or harmful content, and so focusing exclusively on outcomes-based metrics to assess the duty of care is inappropriate.

We look forward to engaging more with the UK government as it continues its consultation on the Online Harms white paper, and hopefully the recommendations in this filing can help address some of the white paper’s critical shortcomings. As policymakers from Brussels to Delhi contemplate the next generation of online content regulations, the UK government has the opportunity to set a positive standard for the world.

The post Building on the UK white paper: How to better protect internet openness and individuals’ rights in the fight against online harms appeared first on Open Policy & Advocacy.


This Week In Rust: This Week in Rust 293

Mozilla planet - Tue, 02/07/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is aljabar, an extremely generic linear algebra library. Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

gfx-rs introduces the contributor-friendly label for issues that are appropriately inviting to new members.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

196 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Python and Go pick up your trash for you. C lets you litter everywhere, but throws a fit when it steps on your banana peel. Rust slaps you and demands that you clean up after yourself.

Nicholas Hahn on his blog

Thanks to UtherII for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


Mike Hommey: Git now faster than Mercurial to clone Mozilla Mercurial repos

Mozilla planet - Tue, 02/07/2019 - 03:06

How is that for clickbait?

With the now released git-cinnabar 0.5.2, the cinnabarclone feature is enabled by default, which means it doesn’t need to be enabled manually anymore.

Cinnabarclone is to git-cinnabar what clonebundles is to Mercurial (to some extent). Clonebundles allow Mercurial to download a pre-generated bundle of a repository, which reduces work on the server side. Similarly, Cinnabarclone allows git-cinnabar to download a pre-generated bundle of the git form of a Mercurial repository.

Thanks to Connor Sheehan, who deployed the necessary extension and configuration on the server side, cinnabarclone is now enabled for mozilla-central and mozilla-unified, making git-cinnabar clone faster than ever for these repositories. In fact, under some conditions (mostly depending on network bandwidth), cloning with git-cinnabar is now faster than cloning with Mercurial:

$ time git clone hg:: mozilla-unified_git
Cloning into 'mozilla-unified_git'...
Fetching cinnabar metadata from
Receiving objects: 100% (12153616/12153616), 2.67 GiB | 41.41 MiB/s, done.
Resolving deltas: 100% (8393939/8393939), done.
Reading 172 changesets
Reading and importing 170 manifests
Reading and importing 758 revisions of 570 files
Importing 172 changesets
It is recommended that you set "remote.origin.prune" or "fetch.prune" to "true".
  git config remote.origin.prune true
or
  git config fetch.prune true
Run the following command to update tags:
  git fetch --tags hg::tags: tag "*"
Checking out files: 100% (279688/279688), done.

real    4m57.837s
user    9m57.373s
sys     0m41.106s

$ time hg clone
destination directory: mozilla-unified
applying clone bundle from
adding changesets
adding manifests
adding file changes
added 537259 changesets with 3275908 changes to 523698 files (+13 heads)
finished applying clone bundle
searching for changes
adding changesets
adding manifests
adding file changes
added 172 changesets with 758 changes to 570 files (-1 heads)
new changesets 8b3c35badb46:468e240bf668
537259 local changesets published
updating to branch default
(warning: large working directory being used without fsmonitor enabled; enable fsmonitor to improve performance; see "hg help -e fsmonitor")
279688 files updated, 0 files merged, 0 files removed, 0 files unresolved

real    21m9.662s
user    21m30.851s
sys     1m31.153s

To be fair, the Mozilla Mercurial repos also have a faster “streaming” clonebundle, which is currently only preferred automatically when the client is on AWS, because it is much larger and could take longer to download. But you can opt in with the --stream command line argument:

$ time hg clone --stream mozilla-unified_hg
destination directory: mozilla-unified_hg
applying clone bundle from
525514 files to transfer, 2.95 GB of data
transferred 2.95 GB in 51.5 seconds (58.7 MB/sec)
finished applying clone bundle
searching for changes
adding changesets
adding manifests
adding file changes
added 172 changesets with 758 changes to 570 files (-1 heads)
new changesets 8b3c35badb46:468e240bf668
updating to branch default
(warning: large working directory being used without fsmonitor enabled; enable fsmonitor to improve performance; see "hg help -e fsmonitor")
279688 files updated, 0 files merged, 0 files removed, 0 files unresolved

real    1m49.388s
user    2m52.943s
sys     0m43.779s

If you’re using Mercurial and can download 3GB in less than 20 minutes (in other words, if you can download faster than 2.5MB/s), you’re probably better off with the streaming clone.
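The 2.5MB/s break-even figure is simple arithmetic:

```python
# 3 GB over the roughly 20 minutes the non-streaming clone takes above.
size_mb = 3 * 1024        # 3 GB in MB
seconds = 20 * 60         # 20 minutes
threshold = size_mb / seconds
print(round(threshold, 2))  # MB/s needed for the streaming clone to win
```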

Bonus fact: the Git clone is smaller than the Mercurial clone

The Mercurial streaming clone bundle contains data in a form close to what Mercurial puts on disk in the .hg directory, meaning the size of .hg is close to that of the clone bundle. The Cinnabarclone bundle contains a git pack, meaning the size of .git is close to that of the bundle, plus some more for the pack index file that unbundling creates.

The amazing fact is that, to my own surprise, the git pack, containing the repository contents along with all git-cinnabar needs to recreate Mercurial changesets, manifests and files from the contents, takes less space than the Mercurial streaming clone bundle.

And that translates into the local repository size:

$ du -h -s --apparent-size mozilla-unified_hg/.hg
3.3G    mozilla-unified_hg/.hg
$ du -h -s --apparent-size mozilla-unified_git/.git
3.1G    mozilla-unified_git/.git

And because Mercurial creates so many files (essentially, two per file that ever was in the repository), there is a larger difference in block size used on disk:

$ du -h -s mozilla-unified_hg/.hg
4.7G    mozilla-unified_hg/.hg
$ du -h -s mozilla-unified_git/.git
3.1G    mozilla-unified_git/.git

It’s even more mind blowing when you consider that Mercurial happily creates delta chains of several thousand revisions, when the git pack’s longest delta chain is 250 (set arbitrarily at pack creation, by which I mean I didn’t pick a larger value because it didn’t make a significant difference). For the casual readers, Git and Mercurial try to store object revisions as a diff/delta from a previous object revision because that takes less space. You get a delta chain when that previous object revision itself is stored as a diff/delta from another object revision itself stored as a diff/delta … etc.
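For illustration, here is a toy sketch of delta-chain resolution; this is not the actual storage format of either git or Mercurial, just the shape of the idea: reading the newest revision means walking back to a full snapshot and replaying each delta, which is why chain length matters.

```python
# Toy sketch of delta chains (not git's or Mercurial's real on-disk format).
# A revision is stored either as a full snapshot or as a delta against a base.

def apply_delta(text, delta):
    """Toy delta: replace `length` characters at `offset` with `replacement`."""
    offset, length, replacement = delta
    return text[:offset] + replacement + text[offset + length:]

def reconstruct(store, rev):
    """Resolve a delta chain: the longer the chain, the more replay work."""
    chain = []
    while True:
        kind, payload, base = store[rev]
        chain.append((kind, payload))
        if kind == "full":
            break
        rev = base
    text = ""
    for kind, payload in reversed(chain):
        text = payload if kind == "full" else apply_delta(text, payload)
    return text

# rev 0 is a snapshot; revs 1 and 2 form a chain of length 2 on top of it.
store = {
    0: ("full", "hello world", None),
    1: ("delta", (6, 5, "there"), 0),   # "hello world" -> "hello there"
    2: ("delta", (0, 5, "howdy"), 1),   # "hello there" -> "howdy there"
}
print(reconstruct(store, 2))  # -> howdy there
```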

My guess is that the difference is mainly caused by the use of line-based deltas in Mercurial, but some Mercurial developer should probably take a deeper look. The fact that Mercurial cannot delta across file renames is another candidate.


Mozilla Security Blog: Fixing Antivirus Errors

Mozilla planet - Mon, 01/07/2019 - 19:07

After the release of Firefox 65 in January, we detected a significant increase in a certain type of TLS error that is often triggered by the interaction of antivirus software with the browser. Today, we are announcing the results of our work to eliminate most of these issues, and explaining how we have done so without compromising security.

On Windows, about 60% of Firefox users run antivirus software and most of them have HTTPS scanning features enabled by default. Moreover, CloudFlare publishes statistics showing that a significant portion of TLS browser traffic is intercepted. In order to inspect the contents of encrypted HTTPS connections to websites, the antivirus software intercepts the data before it reaches the browser. TLS is designed to prevent this through the use of certificates issued by trusted Certificate Authorities (CAs). Because of this, Firefox will display an error when TLS connections are intercepted unless the antivirus software anticipates this problem.

Firefox is different from a number of other browsers in that we maintain our own list of trusted CAs, called a root store. In the past we’ve explained how this improves Firefox security. Other browsers often choose to rely on the root store provided by the operating system (OS) (e.g. Windows). This means that antivirus software has to properly reconfigure Firefox in addition to the OS, and if that fails for some reason, Firefox won’t be able to connect to any websites over HTTPS, even when other browsers on the same computer can.

The interception of TLS connections has historically been referred to as a “man-in-the-middle”, or MITM. We’ve developed a mechanism to detect when a Firefox error is caused by a MITM. We also have a mechanism in place that often fixes the problems. The “enterprise roots” preference, when enabled, causes Firefox to import any root CAs that have been added to the OS by the user, an administrator, or a program that has been installed on the computer. This option is available on Windows and MacOS.

We considered adding a “Fix it” button to MITM error pages (see example below) that would allow users to easily enable the “enterprise roots” preference when the error is displayed. However, we realized that enabling this preference is something we actually want users to do, not an “override” button that lets a user bypass an error at their own risk.

Example of a MitM Error Page in Firefox

Beginning with Firefox 68, whenever a MITM error is detected, Firefox will automatically turn on the “enterprise roots” preference and retry the connection. If it fixes the problem, then the “enterprise roots” preference will remain enabled (unless the user manually sets the “security.enterprise_roots.enabled” preference to false). We’ve tested this change to ensure that it doesn’t create new problems. We are also recommending as a best practice that antivirus vendors enable this preference (by modifying prefs.js) instead of adding their root CA to the Firefox root store. We believe that these actions combined will greatly reduce the issues encountered by Firefox users.
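For reference, the preference flip described above looks like this as a prefs.js line; the preference name comes from the paragraph above, while the surrounding mechanics of how a vendor installer would ship it are out of scope for this sketch:

```js
// prefs.js -- set here (or, per the recommendation above, by an antivirus
// installer) instead of adding a root CA to Firefox's own root store
user_pref("security.enterprise_roots.enabled", true);
```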

In addition, in Firefox ESR 68, the “enterprise roots” preference will be enabled by default. Because extended support releases are often used in enterprise settings where there is a need for Firefox to recognize the organization’s own internal CA, this change will streamline the process of deploying Firefox for administrators.

Finally, we’ve added an indicator that allows the user to determine when a website is relying on an imported root CA certificate. This notification is on the site information panel accessed by clicking the lock icon in the URL bar.

It might cause some concern for Firefox to automatically trust CAs that haven’t been audited or gone through Mozilla’s rigorous inclusion process. However, any user or program that has the ability to add a CA to the OS almost certainly also has the ability to add that same CA directly to the Firefox root store. Also, because we only import CAs that are not included with the OS, Mozilla maintains our ability to set and enforce the highest standards in the industry on publicly-trusted CAs that Firefox supports by default. In short, the changes we’re making meet the goal of making Firefox easier to use without sacrificing security.

The post Fixing Antivirus Errors appeared first on Mozilla Security Blog.


Mike Hommey: Announcing git-cinnabar 0.5.2

Mozilla planet - Mon, 01/07/2019 - 07:17

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.1?
  • Updated git to 2.22.0 for the helper.
  • cinnabarclone support is now enabled by default. See details in and mercurial/
  • cinnabarclone now supports grafted repositories.
  • git cinnabar fsck now does incremental checks against last known good state.
  • Avoid git cinnabar sometimes thinking the helper is not up-to-date when it is.
  • Removing bookmarks on a Mercurial server is now working properly.

IRL (podcast): Democracy and the Internet

Mozilla planet - Mon, 01/07/2019 - 02:05

Part of celebrating democracy is questioning what influences it. In this episode of IRL, we look at how the internet influences us, our votes, and our systems of government. Is democracy in trouble? Are democratic elections and the internet incompatible?

Politico's Mark Scott takes us into Facebook's European Union election war room. Karina Gould, Canada's Minister for Democratic Institutions, explains why they passed a law governing online political ads. The ACLU's Ben Wizner says our online electoral integrity problem goes well beyond a few bad ads. The team at Stop Fake describes a massive problem that Ukraine faces in telling political news fact from fiction, as well as how they're tackling it. And NYU professor Eric Klinenberg explains how a little bit of offline conversation goes a long way to inoculate an electorate against election interference.

IRL is an original podcast from Firefox. For more on the series go to

Early on in this episode, we comment about how more privacy online means more democracy offline. Here's more on that concept from Michaela Smiley at Firefox.

Have a read through Mark Scott's Politico reporting on Facebook's European election war room.

For more from Eric Klinenberg, check out his book, Palaces for the People: How Social Infrastructure Can Help Fight Inequality, Polarization, and the Decline of Civic Life.

And, find out more about Stop Fake, its history, and its mission here.


Cameron Kaiser: And now for something completely different: NetBSD on the last G4 Mac mini (and making the kernel power failure proof)

Mozilla planet - Sun, 30/06/2019 - 08:48
(First, as a public service message, if you're running Linux on a G5 you may wish to update the kernel.)

I'm a big fan of NetBSD. I've run it since 2000 on a Mac IIci (of course it's still running it) and I ran it for several years on a Power Mac 7300 with a G3 card which was the second incarnation of the Floodgap gopher server. Today I also still run it on a MIPS-based Cobalt RaQ 2 and an HP Jornada 690. I think NetBSD is a better match for smaller or underpowered systems than current-day Linux, and is fairly easy to harden and keep secure even though none of these systems are exposed to the outside world.

Recently I had a need to set up a bridge system that would be fast enough to connect two networks and I happened to have two of the "secret" last-of-the-line 1.5GHz G4 Mac minis sitting on the shelf doing nothing. Yes, they're probably outclassed by later Raspberry Pi models, but I don't have to buy anything and I like putting old hardware to good use. So here it is, doing serious business, with the total outlay being the cost of one weekend afternoon.

NetBSD/macppc is a fairly mature port, but that doesn't mean it doesn't have bugs. And oddly there do seem to still be some in the install process, at least of the 8.1 release I used, on this last and mightiest of the PowerPC miniatures. Still, once it got set up it's been working great since, so here's a few pointers on getting the 1.5 mini (and current Power Macs generally) running as little NetBSD servers. As most of my readers are Power Mac users and likely to have multiple Power Macs that can aid this process, I will orient this guide to them with some sidebar notes for people trying to pull the mini up all by itself. This machine is configured with 1GB of RAM, the standard 2.5" PATA spinning disk and optical drive, USB 2.0, FireWire 400, Bluetooth and WiFi, using the onboard Radeon 9200 GPU as the console.

The working configuration, hit upon by Sevan Janiyan, is to have an HFS+ partition with the bootloader (called ofwboot.xcf) and the kernel, and then a separate partition for NetBSD's root volume. For some reason the mini goes berserk when trying to boot from a kernel on a NetBSD native partition, but works fine from an HFS+ one. Unfortunately, since the NetBSD installer cannot actually initialize an HFS+ volume, you'll need to do some of the partitioning work in Mac OS X, copy the files there, and then finish the rest. There's a couple ways of skinning that cat, but for many of you this means you'll need not only the NetBSD installer CD, but also a bootable copy of Mac OS X either on disc (i.e., an installer) or a bootable external drive, and some means of copying files.

And that brings us to our first pro tip: the G4 Mac minis had absolutely crap optical drives that would fail if you looked at them crossways. This drive was no exception; it would read commercially pressed CDs but only certain media types of burned CDs and wouldn't read any DVD at all. That means it wouldn't boot from my usual OS X Tiger 10.4.6 install DVD, and the last generation of G4 minis require 10.4, so none of my previous OS X CDs would work.

As it happens, the minimum requirement for the G4/1.5 minis is not actually 10.4.2, yet another Apple lie; it's actually 10.4.0 (though note that some devices like Bluetooth may not work properly). This is fortunate because 10.4.0 was available in CD format and I was able to boot Disk Utility off that Tiger CD instead. Your other option is to bring up the mini in Target Disk Mode (connect over FireWire; hold T down as you turn the mini on until you see a yellow logo on a blue background) from another Power Mac and do the formatting there. In fact, we'll be using Target Disk Mode in a minute, but here I just booted from the CD instead.

In Disk Utility (whether you're doing this on the machine from the Tiger installer or on another machine over FireWire), wipe the mini's current partition scheme and create two new partitions. The first is your HFS+ volume for booting. This particular machine will only run NetBSD, so I made it 512MB to have enough room for multiple kernels and for other files I might need, but if you want a dual-boot system you can make this larger. The second partition will be for NetBSD; I allocated everything else and created it as a UFS ("UNIX File System") partition, though we will divvy it up later. The formatting scheme should look more or less like these screenshots. Shut down the mini when you're done.

Now we boot the NetBSD installer. Bring up the machine in OpenFirmware mode -- all New World Macs use OpenFirmware 3 -- by holding down Command-Option-O-F while powering it on (I advise doing this from a directly-attached USB keyboard). This will bring up the OpenFirmware interface. When you are commanded to do so, release the keys and you will drop to the famous ok prompt. If you're lucky and the evil spirits in your optical drive have been placated by an offering of peanut M&Ms and a young maiden at midnight, you can simply insert the NetBSD install disc and type

boot cd:,\ofwboot.xcf netbsd.macppc

Note the backslash, not a forward slash! If this works properly, the screen will go black (you don't go back) and the machine will boot into the Installer proper.

If you get weird errors or OpenFirmware complains the disc is not readable, the optical drive is probably whacked. My drive wouldn't read burned Fujifilm CD-R media (that everything else did), but would read burned Maxell media. If you can't even control for that, you may be able to connect a FireWire CD/DVD reader and boot from it instead. The command would be "something like"

boot fw/node/sbp-2/disk:,\ofwboot.xcf netbsd.macppc

If this didn't work, you may need to snoop around the OpenFirmware device tree to figure out where the device is actually attached, though this should basically work for the G4 mini's single port. Alternatively, you could also try a USB CD-ROM drive, or dding the install image to a USB drive on another computer and booting the mini from that, but the boot string will vary based on which port you connect it to (use dev usb0 and ls to show everything under that port, then dev usb1, etc.). Make sure it is directly connected to the mini. Once you find a device that shows a disk, then "something like" this will work (let's say it was found under usb1):

boot usb1/disk:,\ofwboot.xcf netbsd.macppc

If even that won't work, there are some other ways like netbooting, but this is rather more complicated and I won't talk about it here. Or you could actually fix the drive, I guess ...

When the Installer starts up, choose the option to drop to the shell when it is offered. We will now finish the partitioning from the NetBSD side; we do not use the Installer's built-in partition tool as it will run out of memory. At the shell prompt, type

pdisk /dev/wd0c

When it asks you for a command, type a capital letter P and press RETURN. This will print out the current partition map, which if your mini is similar to mine, should show 4 partitions: the Apple partition map itself, followed by the HFS+ partition, and then by a tiny Apple_Boot partition that is made whenever a UFS volume appears to be the boot volume. (Silly Mac OS X.) You can remove it if you want, but this seemed like more trouble than it was worth for a measly 8.5 megabytes. After that is the space for NetBSD. On my G4 mini, this was partition number 4. Delete this partition by typing a lower-case d, press RETURN, and type 4. Be sure of this number! I will use it in the examples below.

First we will formally create the swap. This is done with the capital letter C command (as shown in the screenshot). Indicate the first block is 4p (i.e., starting at partition 4), for 4194304 blocks (2GB), type Apple_UNIX_SVR2 (don't forget the underscores!), and slice b.

Next is the actual NetBSD root: capital letter C, then saying the first block was 5p (i.e., starting at partition 5, the unallocated section), soaking up the rest of the blocks (however many you see listed under Apple_Free), type Apple_UNIX_SVR2 (don't forget the underscores!), and slice a.

If you did all this right, your screen should look more or less like this.

Verify the partition map one more time with the capital letter P command, then write it out with lower-case w, answering y(es), and then quit with lower-case q. At the shell prompt, return to the installer by typing sysinst and when asked, indicate you will "Use existing partition sizes." The installer will then install the appropriate packages and you can do the initial setup for your clock, the root password, etc. When this is all done, reboot your mini with the left mouse button held down; it will eject the CD (and fail to find a boot volume if you do not have an OS X installation). Shut down the mini.
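Condensed, the pdisk session described above looks roughly like this; the partition number 4 and the block counts assume the layout shown, so always check yours with P first:

```
pdisk /dev/wd0c
Command: P                                   (print the current map)
Command: d  ->  4                            (delete the old UFS partition)
Command: C  ->  4p, 4194304, Apple_UNIX_SVR2, slice b    (2GB of swap)
Command: C  ->  5p, <Apple_Free blocks>, Apple_UNIX_SVR2, slice a  (root)
Command: P                                   (verify the new map)
Command: w  ->  y                            (write it out)
Command: q                                   (quit, then run sysinst)
```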

Before the mini will boot NetBSD, we must copy the kernel and the bootloader to the HFS+ partition. This is where Target Disk Mode comes in handy, because you can just copy directly. Here is my iBook G4 copying a custom kernel (more in a moment).

On the iBook G4, I put in the NetBSD install CD and copied off ofwboot.xcf and netbsd-GENERIC.gz, or you can download them from here and here. They should be copied to the root of the mini's HFS+ volume for the command below to work. For good measure I also uncompressed the gzipped kernel as a failsafe and put a copy of the installation kernel there too, though this isn't necessary. Once the files are copied, eject the mini's drive on the FireWire machine, unplug the FireWire and power the mini off.

If you don't have another Mac around that can talk to the mini over FireWire, you can do this from NetBSD itself, but it's a bit more involved.

Either way, re-enter OpenFirmware with Cmd-Opt-O-F while powering it back up. It's time to boot your new NetBSD machine.

You can see from the screenshot here that the HFS+ volume is considered partition 2, as we left it in pdisk. That means your boot string is

boot hd:,\ofwboot.xcf hd:2/netbsd-GENERIC.gz

Yes, the path to ofwboot still has a backslash, but the argument to ofwboot actually needs a forward slash. NetBSD will start immediately.

There are several minor and one rather obnoxious bug with NetBSD's current support. You will notice a few strange messages on startup as part of the huge mass of text:

oea_startup: failed to allocate DEAD ZONE: error=12
pmu0: power-mgt not configured
pmu0: pmu-pwm-fans not configured
WARNING: 3 errors while detecting hardware; check system log.
bwi0: firmware_open failed on v3/ucode5.fw

I don't know what the first three are, but they appear to be harmless, and appear in many otherwise working dmesg archives (see also this report). The summary WARNING thus can also be politely ignored.

However, the last message is rather obnoxious. Per Sevan, the built-in Broadcom WiFi in the Mac mini (detected as bwi0) doesn't work right in NetBSD with more than 512MB of memory (which I refuse to downgrade to), and NetBSD doesn't come with the firmware anyway. Even if you copy it off some other system that does, you won't be able to bring the interface up in the configuration here (you'll just see weird errors about wrong firmware version, etc.).

Since this machine is a bridge and sometimes needs to connect to a test WiFi, I went with a USB WiFi dongle instead (I also use a USB dongle when bridging Ethernet to Ethernet, but pretty much any Ethernet-USB dongle will work too). The one I had on the shelf that I'd bought for something else and then forgot about was a Belkin Wireless G. They sell a number of chipsets under this name, but the model F5D7050 I have here is based on a Ralink RT2501USB chipset that NetBSD sees as rum0, and works fine with wpa_supplicant.
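For reference, a minimal NetBSD setup for such a rum(4) dongle might look like the following. The SSID and passphrase are placeholders, and your rc.conf flags may differ depending on your network:

```
# /etc/wpa_supplicant.conf -- ssid and psk are placeholders
network={
        ssid="testnet"
        psk="example-passphrase"
}

# /etc/rc.conf additions
wpa_supplicant=YES
wpa_supplicant_flags="-i rum0 -c /etc/wpa_supplicant.conf"
dhcpcd=YES
```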

Last but not least was making it "failsafe," with a solid power supply and making it autostarting. Although the G4 mini came with an 85W power supply, I stole the 110W from my 2007 Intel mini and used that so it wouldn't run anywhere near the PSU's capacity and hopefully lengthen its lifetime. As it turns out, this may not have been a problem anyway; most of the time this system is using just 21W on the Kill-A-Watt, maybe 40ish when it's booting.

To autostart NetBSD, ordinarily you would go into OpenFirmware and set boot-device to the bootloader and boot-file to the kernel, as the picture below shows.

However, you'll end up with a black screen or at minimum no console at all on an OpenFirmware 3 system if that's all you do. The magic sauce is to emit some text to the screen before loading the bootloader. Thus, the OpenFirmware settings are (each setenv command is one line):

setenv auto-boot? true
setenv boot-device hd:,\ofwboot.xcf
setenv boot-file hd:2/netbsd-GENERIC.gz (note that I used a different kernel in the screenshot: more in a second)
setenv boot-command ." hi there" cr " screen" output boot
reset-all

The boot-command spacing is especially critical: there is a space after the ." and after the opening quote mark before screen", and cr is also set off by spaces on both sides. The final reset-all tells OpenFirmware to commit those settings to NVRAM and restarts the machine. If you zap the mini's PRAM with Command-Option-P-R later, you may need to re-enter these.

In this configuration your mini will now start NetBSD automatically when it's turned on (just hold down Command-Option-O-F when starting it up to abort to OpenFirmware). However, this won't bring the machine up automatically after a power failure. While FreeBSD allows starting up after a power failure, this code apparently never made it over to NetBSD. Happily, supporting it merely requires a relatively simple kernel hack. Based on the FreeBSD pmu(4) driver, I created a patch that will automatically reboot any PMU-based NetBSD Power Mac after a power failure.

You should be comfortable with compiling your own kernels in NetBSD; not only is it just good to do for auditing purposes, but you can slim the kernel down substantially or enable other less common features. It's especially easy for NetBSD because all the tools to build it come with a standard installation. All you need to do is download the source and run the build process.

To use this patch, download the source to your home directory on the NetBSD box (you want syssrc.tgz) and download the patch and have it in your home directory as pmu.diff. If you don't have a working curl on your install yet (pkg_add curl, pkg_add mozilla-rootcerts, mozilla-rootcerts install), you may want to download it somewhere else and use scp, sftp or ftp to retrieve it. Then, adjusting as necessary for username and path,

cd /
tar zxf ~/syssrc.tgz
cd /usr/src/sys/arch/macppc/dev
patch -p0 < ~/pmu.diff

Then follow the instructions to make the kernel. I have a pre-built one of 8.1-GENERIC (I call it POWERON) on the gopher server, but you should really roll your own so that you get security fixes, since I may only maintain that kernel intermittently. That build is the one I'm using on the machine currently and on the screenshot above. With this custom kernel installed, when the power is abruptly cut while the machine is powered up it will automatically reboot when power is reapplied, just as the analogous option does in Mac OS X. Copy it to the HFS+ partition and remember to change boot-file to point to it once you've confirmed it works.

Overall, I think the G4 mini makes a fine little server. I wouldn't use it as a client except in Mac OS X itself, and I am forced to admit that even that is becoming less practical these days. But as a little machine to do important back-office tasks and do so reliably, I think NetBSD on the mini is a good choice. Once all the kinks with the installation got ironed out, so far it's been solid and performant especially considering this machine is about 13 years old (though I'm happy with its performance even on thirty-year-old machines). Rather than buying something new, if your needs are small it's probable you've got some old machine around that could do those tasks instead of oozing toxins from its circuit board into a waste dump in Rwanda. And since I had two on the shelf, it has an instant spare. I'll probably be usefully running it for as long as I've run my other NetBSD systems, and that's the highest compliment I think I can pay it.

Categorieën: Mozilla-nl planet

Firefox UX: iPad Sketching with GoodNotes 5

Mozilla planet - zo, 30/06/2019 - 00:51

I’m always making notes and sketching on my iPad and people often ask me what app I’m using. So I thought I’d make a video of how I use GoodNotes 5 in my design practice.

This video on YouTube
Templates: blank, storyboard, crazy 8s, iPhone X

Hi, I’m Michael Verdi, and I’m a product designer for Firefox. Today, I wanna talk about how I use sketching on my iPad in my design practice. So, first, sketching on an iPad, what I really like about it is that these apps on the iPad allow me to collect all my stuff in one place. So, I’ve got photos and screenshots, I’ve got handwritten notes, typed up notes, whatever, it’s all together. And I can organize it, and I can search through it and find what I need.

There’s really two basic kind of apps that you can use for this. There’s drawing and sketching apps, and then there’s note-taking apps. Personally, I prefer the note-taking apps because they usually have better search tools and organization. The thing that I like that I’m gonna talk about today is GoodNotes 5.

I’ve got all kinds of stuff in here. I’ve got handwritten notes with photographs, I’ve got some typewritten notes, screenshots, other things that I’ve saved, photographs, again. Yeah, I really like using this. I can do storyboards in here, right, or I can draw things, copy and paste them so that I can iterate quickly, make multiple variations over and over again. I can stick in screenshots and then draw on top of them or annotate them. Right, so, let me do a quick demo of using this to draw.

So, one of the things that I’ll do is actually, maybe I’ve drawn some stuff before, and I’ll save that drawing as an image in my photo library. And then I’ll come stick it in here, and I’ll draw on top of it. So, I work on search, so here’s Firefox with no search box. So, I’m gonna draw one. Let’s use some straight lines to draw one. I’m gonna draw a big search box, but I’m doing it here in the middle because I’m gonna place it a little better in a second. And we have the selection tool, and I’m gonna make the, the selection is not selecting images, right? So, I can come over here and just grab my box, and then I can move my box around on top. Okay, so, I still have this gray line. I can’t erase that because it’s an image. So, I’m gonna come over here, and I’m gonna get some white, and I’m gonna just draw over it. Right, okay. Let’s go back and get my gray color. I can zoom in when I need to, and I’m gonna copy this, and I’m gonna paste it a bunch of times. Then I can annotate this. Right, so, there we go.

Another thing that I really like about GoodNotes is the ability to search through stuff that you’ve done. So, I’m gonna search here, and I’m gonna search for spaces. So, this was a thing that we mocked up with a storyboard. This is it right here. And it recognized my, it read my handwriting, which is really cool. So, I can find this thing in a notebook of a jillion pages. But there’s also another way to find things. So, you have this view here, this is called the outline view. These are sorta like named bookmarks. There’s also a thumbnail view, right? Here’s all the pages in this notebook. But if I go to the outlines, so, here, I did some notes about a critique format, and I can jump right to them. But let’s say this new drawing, well, where did I do it? This new drawing, I wanna be able to get to this all the time, right? So, I can come up here, and I can say Add This Page to the Outline, and now I can give it a name. And I don’t know what I’m gonna call this, so I’m just callin’ it sample for right now. And so, now it is in the outline. Oh, I guess I had already done this as a demo. But there it is. And that’s how I can get to it now. That’s super, super cool.

Okay, and then one last thing I wanna show you about this is templates. So, I actually made this template to better fit my setup here. I’ve got no status bar at the top. And then these are just PDFs. And you can import your own. And I can change the template for this page, and I’ve made a storyboard template. And I can apply that here. And now I’ve got a thing so I can draw a storyboard. Or maybe I don’t wanna do a storyboard, but what else do I have? Oh, I wanna do a crazy eights exercise. So, now I’ve got one ready for a crazy eight exercise. I love these templates, they’re super handy. I’ll include those in the post with this, a link to some of these things.

So, that’s sketching on the iPad with GoodNotes 5. Thanks for watching.



Hacks.Mozilla.Org: GeckoView in 2019

Mozilla planet - do, 27/06/2019 - 18:02

Logo of GeckoView

Last September we wrote about using GeckoView to bring Firefox’s rendering engine to Android as a reusable library. By decoupling the Gecko engine from the Firefox application, we’ve created a newer, faster, and more maintainable way to create Android applications. This approach leverages Gecko’s excellent performance, privacy, and support for cutting-edge web standards.

With today’s release of our GeckoView-powered Firefox Preview, we’d like to share an update on what we’ve accomplished and where GeckoView is going in 2019.

Introducing Firefox Preview

Wordmark of Firefox Preview

We’re excited to announce today’s initial release of Firefox Preview (GitHub), an entire browser built from the ground up with GeckoView and Mozilla Android Components at its core. Though still an early preview, this is our first end-user product built completely with these new technologies.

Two screenshots of Firefox Preview showing the home screen and a page loaded with the main menu open

Firefox Preview is our platform for building, testing, and delivering unique features. We’ll use it to explore new concepts for how mobile browsers should look and feel. We encourage you to give it a try!

Other Projects using GeckoView

But that is not all — Mozilla is using GeckoView in many other products as well:

Firefox Focus

Logo of Firefox Focus

To date, Firefox Focus has been our most prominent consumer of GeckoView. Focus’s simplicity lends itself to experimentation. Currently we’re using Focus to split test between GeckoView and Android’s built-in WebView. This helps us ensure that GeckoView’s performance and stability meet or exceed expectations set by Android’s platform libraries.

While Focus is great at what it does, it is not a general purpose browser. By design, Focus does not keep track of history or bookmarks, nor does it support APIs like WebRTC. Yet we need a place to test those features to ensure that GeckoView is sufficiently robust and capable of building fully-featured browsers. That’s where Reference Browser comes in.

Reference Browser

Logo of Reference Browser

Like Firefox Preview, Reference Browser is a full browser built with GeckoView and Mozilla Android Components, but crucially, it is not an end-user product. Its intended audience is browser developers. Indeed, Reference Browser is a proving ground where we validate that GeckoView and the Components fit together and work as expected. We gain the ability to develop our core libraries without the constraints of an in-market product.

Firefox Reality

Logo of Firefox Reality

GeckoView also powers Firefox Reality, a browser designed for standalone virtual reality headsets. In addition to leveraging Gecko’s excellent support for immersive web technologies, Firefox Reality demonstrates GeckoView’s versatility. The same library that’s at the heart of “traditional” browsers like Focus and Firefox Preview can also power experiences in an entirely different medium.

Firefox for Android

Logo of Firefox

Lastly, while Firefox for Android (“Fennec”) does not use GeckoView for normal browsing, it does use it to support Progressive Web Apps and Custom Tabs. Moreover, because GeckoView and Fennec are both based on Gecko, they jointly benefit from improvements to that common infrastructure.

GeckoView is the foundation of Mozilla’s next generation of mobile products. To better support that future, we’ve halted new feature development on Focus while we concentrate on refining GeckoView and prepare for the launch of Firefox Preview. If you’re interested in supporting Focus in the future, please help by filling out this survey.


Aside from product development, the past six months have seen many improvements to GeckoView’s internals, especially around compiler-level optimizations and support for additional CPU architectures. Highlights include:

  • Profile-Guided Optimization (PGO) on Android is now enabled, which allows the compiler to generate more efficient code by considering data gathered by actually running and observing GeckoView.
  • The IonMonkey JavaScript JIT compiler is now enabled for 64-bit ARM builds of GeckoView.
  • We’re now producing builds of GeckoView for the x86_64 architecture.

In addition to meeting Google’s upcoming requirements for listing in the Play Store, supporting 64-bit architectures further improves GeckoView’s stability (fewer out-of-memory crashes) and security.

For upcoming releases, we’re working on support for web push and “add to home screen,” among other things.

Get involved

GeckoView isn’t just for Mozilla, we want it to be useful to you.

Thanks to Emily Toop for new work on the GeckoView Documentation website. It’s easier than ever to get started, either as an app developer using GeckoView or as a GeckoView contributor. If you spot something that could be better documented, pull requests are always welcome.

Logos of all GeckoView-powered projects

We’d also love to help you directly. If you need assistance with anything GeckoView-related, you can find us:

Please do reach out with any questions or comments.

The post GeckoView in 2019 appeared first on Mozilla Hacks - the Web developer blog.


Mozilla Future Releases Blog: Reinventing Firefox for Android: a Preview

Mozilla planet - do, 27/06/2019 - 17:59

At Firefox, we’re passionate about providing solutions for people who care about safety, privacy and independence. For several months, we’ve been working on a new strategy for our Android products to serve you even better. Today we’re very happy to announce a pilot of our new browser for Android devices that is available to early adopters for testing right now. We’ll have a feature-rich, polished version of this flagship application available this fall.

Firefox Preview — our new mobile pilot app for Android

Always-on, always private: a new and improved mobile Firefox

Unlike Big Tech, which only recently started to put more emphasis on privacy, about two and a half years ago we launched Firefox Focus, a mobile browser for iOS and Android that allows you to discover the web without being followed around by trackers. While continuously improving Firefox Focus over time, we realized that users demanded a full-fledged mobile browsing experience, but more private and secure than any existing app. So we decided to make Firefox more like Focus, but with all the ease and amenities of a full-featured mobile browser. The result is an early version of what we currently call Firefox Preview.

Bringing Firefox Quantum performance to mobile, with GeckoView

With Firefox Preview, we’re combining the best of what our lightweight Focus application and our current mobile browsers have to offer to create a best-in-class mobile experience. The new application is powered by Firefox’s own mobile browser engine — GeckoView — the same high-performance, feature-enabling engine that fuels our Focus app.

You might remember how we revamped the engine behind the Firefox desktop browser in 2017, enabling us to significantly improve the desktop user experience. As a result, today’s Firefox Quantum is much faster, more efficient, equipped with a modern user interface and clearly the next-gen Firefox. Quite similarly, implementing GeckoView paves the way for a complete makeover of the mobile Firefox experience. While all other major Android browsers today are based on Blink and therefore reflective of Google’s decisions about mobile, Firefox’s GeckoView engine ensures independence for us and our users. Building Firefox for Android on GeckoView also results in greater flexibility in terms of the types of privacy and security features we can offer our mobile users. With GeckoView we have the ability to develop faster, more secure and more user-friendly browsers that deliver unprecedented performance.

To speak more specifically about features, here are some new functions Firefox Preview will offer, partially enabled by GeckoView:

        • Faster than ever: Firefox Preview is up to 2x faster than previous versions of Firefox for Android.
        • Fast by design: with a minimalist start screen and bottom navigation bar, Preview helps you get more done on the go.
        • Stay organized: Make sense of the web with Collections, a new feature that helps you save, organize, and share collections of sites. Quickly save and return to tasks like your morning routine, shopping lists, travel planning and more.
        • Tracking Protection on by default: Everyone deserves freedom from invasive advertising trackers and other bad actors, so Firefox Preview blocks trackers by default. The result is faster browsing and fewer annoyances.



With Firefox Preview you’re browsing the mobile web faster, more efficiently and more privately


For more information about how we’re planning to use GeckoView in our product portfolio, check out this blog post on Mozilla Hacks.

Be among the first to test

Before we release products to the world, we run many different experiments and tests that we learn from and that help us make our products better for everyday use. For example, our Firefox Quantum desktop browser has a beta release, a separate channel aimed at developers or early tech adopters to test upcoming features before they’re released to all consumers.

Likewise, what we’re releasing today is an early version for our experimental browser for Android users based on GeckoView. Firefox Preview is a separate mobile application primarily aimed at developers and early adopters who want to help us improve Firefox on Android. The user experience of this early version will differ significantly from the final product, planned for release later this year. We’re counting on our passionate users to try it now and provide the kind of feedback (via email or on Github) that will enable us to release the best mobile Firefox possible and continuously improve GeckoView.

How our new mobile strategy affects existing products

For the rest of 2019, we’re going to direct our efforts into optimizing the entire Firefox experience on all Android devices. In order to have a strong foundation for the next generation of mobile Firefox browsers and to put all our efforts and resources into GeckoView, work on Firefox Focus is currently on hold. Don’t worry though: you can still keep using our privacy browser, Focus, as well as our current Firefox for Android.

Stay tuned for more!

We hope this update from the Firefox Mobile Team sparks excitement for the new mobile strategy we’re rolling out in 2019. We plan to take mobile browsing to a whole new level. No matter where, when or on which device, we at Firefox believe that you always deserve the best possible user experience. And we’ll do our best to bring it to your screens.

Try the preview of our new Firefox for Android, let us know what you think about this GeckoView-based mobile app and stay tuned!

The post Reinventing Firefox for Android: a Preview appeared first on Future Releases.


Chris Pearce: Firefox's Gecko Media Plugin & EME Architecture

Mozilla planet - do, 27/06/2019 - 04:31
For rendering audio and video, Firefox typically uses either the operating system's audio/video codecs or bundled software codec libraries, but for DRM video playback (Netflix, Amazon Prime Video, and the like) and WebRTC video calls using baseline H.264 video, Firefox relies on Gecko Media Plugins, or GMPs for short.

This blog post describes the architecture of the Gecko Media Plugin system in Firefox, and the major class/objects involved, as it looked in June 2019.

For DRM video Firefox relies upon Google's Widevine Content Decryption Module, a dynamic shared library downloaded at runtime. Although this plugin doesn't conform to the GMP ABI, we provide an adapter to allow it to be run through the GMP system. We use the same Widevine CDM plugin that Chrome uses.
For decode and encode of H.264 streams for WebRTC, Firefox uses OpenH264, which is provided by Cisco. This plugin implements the GMP ABI.
These two plugins are downloaded at runtime from Google's and Cisco's servers, and installed in the user's Firefox profile directory.
We also ship a ClearKey CDM, which implements the baseline decryption scheme required by the Encrypted Media Extensions specification. It mimics the interface that the Widevine CDM implements, and is used in our EME regression tests. It's bundled with the rest of Firefox, and lives in the Firefox install directory.
The objects involved in running GMPs are spread over three processes; the main (AKA parent) process, the sandboxed content process where we run JavaScript and load web pages, and the sandboxed GMP process, which only runs GMPs.

You can view a Diagram of Firefox's Gecko Media Plugin online, or download a PDF version of Firefox's Gecko Media Plugin architecture.
The main facade to the GMP system is the GeckoMediaPluginService. Clients use the GeckoMediaPluginService to instantiate IPDL actors connecting their client to the GMP process, and to configure the service. In general, most operations which involve IPC to the GMPs/CDMs should happen on the GMP thread, as the GMP related protocols are processed on that thread.
mozIGeckoMediaPluginService can be used on the main thread by JavaScript, but the main-thread accessible methods proxy their work to the GMP thread.

How GMPs are downloaded and installed

The Firefox front end code which manages GMPs is the GMPProvider. This is a JavaScript object, running in the front end code in the main process. On startup if any existing GMPs are already downloaded and installed, this calls mozIGeckoMediaPluginService.addPluginDir() with the path to the GMP's location on disk. Gecko's C++ code then knows about the GMP. The GeckoMediaPluginService then parses the metadata file in that GMP's directory, and creates and stores a GMPParent for that plugin. At this stage the GMPParent is like a template, which stores the metadata describing how to start a plugin of this type. When we come to instantiate a plugin, we'll clone the template GMPParent into a new instance, and load a child process to run the plugin using the cloned GMPParent.
Shortly after the browser starts up (usually within 60 seconds), the GMPProvider will decide whether it should check for new GMP updates. The GMPProvider will check for updates if either it has not checked in the past 24 hours, or if the browser has been updated since last time it checked. If the GMPProvider decides to check for updates, it will poll Mozilla's Addons Update Server. This will return an update.xml file which lists the current GMPs for that particular Firefox version/platform, and the URLs from which to download those plugins. The plugins are hosted by third parties (Cisco and Google), not on Mozilla's servers. Mozilla only hosts the manifest describing where to download them from.
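The check-or-skip decision described above reduces to a small predicate: poll AUS if the browser version changed since the last check, or if the last check was 24 or more hours ago. This Python sketch is purely illustrative; the names are made up, and the real logic lives in the GMPProvider JavaScript module:

```python
from datetime import datetime, timedelta

def should_check_for_updates(now, last_check, last_checked_version, current_version):
    """Illustrative model of the GMPProvider's decision: check for GMP
    updates if the browser was updated since the last check, or if the
    last check happened 24 or more hours ago."""
    if current_version != last_checked_version:
        return True
    return now - last_check >= timedelta(hours=24)
```

Under this model, a routine restart an hour after the last check does nothing, while a version bump triggers an immediate check.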
If the GMPs in the update.xml file are different to what is installed, Firefox will update its GMPs to match the update.xml file from AUS. Firefox will download and verify the new GMP, uninstall the old GMP, install the new GMP, and then add the new GMP's path to the mozIGeckoMediaPluginService. The objects that do this are the GMPDownloader and the GMPInstallManager, which are JavaScript modules in the front end code as well.
Note that Firefox will take action to ensure its installed GMPs match whatever is specified in the update.xml file. So if a version of a GMP which is older than what is installed is specified in the update.xml file, Firefox will uninstall the newer version, and download and install the older version. This is to allow a GMP update to be rolled back if a problem is detected with the newer GMP version.
If the AUS server can't be contacted, and no GMPs are installed, Firefox has the URLs of GMPs baked in, and will use those URLs to download the GMPs.
On startup, the GMPProvider also calls mozIGeckoMediaPluginService.addPluginDir() for the ClearKey CDM, passing in its path in the Firefox install directory.

How EME plugins are started in Firefox

The lifecycle for the Widevine and ClearKey CDMs begins in the content process with content JavaScript calling Navigator.requestMediaKeySystemAccess(). Script passes in a set of MediaKeySystemConfigurations, and these are passed forward to the MediaKeySystemAccessManager. The MediaKeySystemAccessManager figures out a supported configuration, and if it finds one, returns a MediaKeySystemAccess from which content JavaScript can instantiate a MediaKeys object.
Once script calls MediaKeySystemAccess.createMediaKeys(), we begin the process of instantiating the plugin. We create a MediaKeys object and a ChromiumCDMProxy object, and call Init() on the proxy. The initialization is asynchronous, so we return a promise to content JavaScript and on success we'll resolve the promise with the MediaKeys instance which can talk to the CDM in the GMP process.
To create a new CDM, ChromiumCDMProxy::Init() calls GeckoMediaPluginService::GetCDM(). This runs in the content process, but since the content process is sandboxed, we can't create a new child process to run the CDM there and then. As we're in the content process, the GeckoMediaPluginService instance we're talking to is a GeckoMediaPluginServiceChild. This calls over to the parent process to retrieve a GMPContentParent bridge. GMPContentParent acts like the GMPParent in the content process. GeckoMediaPluginServiceChild::GetContentParent() retrieves the bridge, and sends a LaunchGMPForNodeId() message to instantiate the plugin in the parent process.
In the non-multi-process Firefox case, we still call GeckoMediaPluginService::GetContentParent(), but we end up running GeckoMediaPluginServiceParent::GetContentParent(), which can just instantiate the plugin directly.
When the parent process receives a LaunchGMPForNodeId() message, the GMPServiceParent runs through its list of GMPParents to see if there's one matching the parameters passed over. We check to see if there's an instance from the same NodeId, and if so use that. The NodeId is a hash of the origin requesting the plugin, combined with the top level browsing origin, plus salt. This ensures GMPs from different origins always end up running in different processes, and GMPs running in the same origin run in the same process.
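The NodeId derivation described above can be modeled roughly as follows. The exact hash and encoding Gecko uses are internal details, so treat this as an illustrative sketch of the idea (same inputs map to the same process, different origins to different ones), not the actual implementation:

```python
import hashlib

def node_id(origin, top_level_origin, salt):
    """Illustrative NodeId: a hash over the requesting origin, the
    top-level browsing origin, and salt. Not Gecko's actual code."""
    h = hashlib.sha256()
    for part in (origin, top_level_origin, salt):
        h.update(part.encode("utf-8"))
        h.update(b"\x00")  # separator so concatenation is unambiguous
    return h.hexdigest()
```

With this scheme, two frames from the same origin under the same top-level page share a NodeId (and thus a GMP process), while a different origin or a fresh salt yields a different one.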
If we don't find an active GMPParent running the requested NodeId, we'll make a copy of a GMPParent matching the parameters, and call LoadProcess() on the new instance. This creates a GMPProcessParent object, which in turn uses GeckoChildProcessHost to run a command line to start the child GMP process. The command line passed to the newly spawned child process causes the GMPProcessChild to run, which creates and initializes the GMPChild, setting up the IPC connection between GMP and Main processes.
The GMPChild delegates most of the business of loading the GMP to the GMPLoader. The GMPLoader opens the plugin library from disk, and starts the Sandbox using the SandboxStarter, which has a different implementation for every platform. Once the sandbox is started, the GMPLoader uses a GMPAdapter parameter to adapt whatever binary interface the plugin exports (the Widevine C API for example) to the match the GMP API. We use the adapter to call into the plugin to instantiate an instance of the CDM. For OpenH264 we simply use a PassThroughAdapter, since the plugin implements the GMP API.
If all that succeeded, we'll send a message reporting success to the parent process, which in turn reports success to the content process, which resolves the JavaScript promise returned by MediaKeySystemAccess.createMediaKeys() with the MediaKeys object, which is now setup to talk to a CDM instance.
Once content JavaScript has a MediaKeys object, it can set it on an HTMLMediaElement using HTMLMediaElement.setMediaKeys().
The MediaKeys object encapsulates the ChromiumCDMProxy, which proxies commands sent to the CDM into calls to ChromiumCDMParent on the GMP thread.

How EME playback works

There are two main cases that we care about here: encrypted content being encountered before a MediaKeys is set on the HTMLMediaElement, or after. Note that the CDM is only usable to the media pipeline once it's been associated with a media element by script calling HTMLMediaElement.setMediaKeys().
If we detect encrypted media streams in the MediaFormatReader's pipeline, and we don't have a CDMProxy, the pipeline will move into a "waiting for keys" state, and not resume playback until content JS has set a MediaKeys on the HTMLMediaElement. Setting a MediaKeys on the HTMLMediaElement causes the encapsulated ChromiumCDMProxy to bubble down past MediaDecoder, through the layers until it ends up on the MediaFormatReader, and the EMEDecoderModule.
Once we've got a CDMProxy pushed down to the MediaFormatReader level, we can use the PDMFactory to create a decoder which can process encrypted samples. The PDMFactory will use the EMEDecoderModule to create the EME MediaDataDecoders, which process the encrypted samples.
The EME MediaDataDecoders talk directly to the ChromiumCDMParent, which they get from the ChromiumCDMProxy on initialization. The ChromiumCDMParent is the IPDL parent actor for communicating with CDMs.
All calls to the ChromiumCDMParent should be made on the GMP thread. Indeed, one of the primary jobs of the ChromiumCDMProxy is to proxy calls made by the MediaKeys on the main thread to the GMP thread so that commands can be sent to the CDM via off main thread IPC.
Any callbacks from the CDM in the GMP process are made onto the ChromiumCDMChild object, and they're sent via PChromiumCDM IPC over to ChromiumCDMParent in the content process. If they're bound for the main thread (i.e. the MediaKeys or MediaKeySession objects), the ChromiumCDMCallbackProxy ensures they're proxied to the main thread.
Before the EME MediaDataDecoders submit samples to the CDM, they first ensure that the samples have a key with which to decrypt the samples. This is achieved by a SamplesWaitingForKey object. We keep a copy in the content process of which keyIds the CDM has reported are usable in the CDMCaps object. The information stored in the CDMCaps about which keys are usable is mirrored in the JavaScript-exposed MediaKeyStatusMap object.
The MediaDataDecoder's decode operation is asynchronous, and the SamplesWaitingForKey object delays decode operations until the CDM has reported that the keys that the sample requires for decryption are usable. Before sending a sample to the CDM, the EME MediaDataDecoders check with the SamplesWaitingForKey, which looks up in the CDMCaps whether the CDM has reported that the sample's keyId is usable. If not, the SamplesWaitingForKey registers with the CDMCaps for a callback once the key becomes usable. This stalls the decode pipeline until content JavaScript has negotiated a license for the media.
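The gating behaviour can be modelled in a few lines. This is an illustrative JavaScript sketch of the idea only, not Gecko's actual C++ SamplesWaitingForKey:

```javascript
// Toy model of the wait-for-key gate: decode operations are parked
// until the CDM reports the sample's keyId as usable.
class SamplesWaitingForKey {
  constructor() {
    this.usable = new Set();   // keyIds the CDM reported usable
    this.waiters = new Map();  // keyId -> pending resolve callbacks
  }

  // Resolves immediately if the key is usable; otherwise stalls the
  // decode pipeline until onKeyUsable() fires for that keyId.
  waitIfKeyNotUsable(keyId) {
    if (this.usable.has(keyId)) return Promise.resolve();
    return new Promise((resolve) => {
      const list = this.waiters.get(keyId) || [];
      list.push(resolve);
      this.waiters.set(keyId, list);
    });
  }

  // Called when the CDM's key-statuses-changed callback arrives.
  onKeyUsable(keyId) {
    this.usable.add(keyId);
    for (const resolve of this.waiters.get(keyId) || []) resolve();
    this.waiters.delete(keyId);
  }
}
```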
Content JavaScript negotiates licenses by receiving messages from the CDM on the MediaKeySession object, and forwarding those messages on to the license server, and forwarding the response from the license server back to the CDM via the MediaKeySession.update() function. These messages are in turn proxied by the ChromiumCDMProxy to the GMP thread, and result in a call to ChromiumCDMParent and thus an IPC message to the GMP process, and a function call into the CDM there. If the license server sends a valid license, the CDM will report the keyId as usable via a key statuses changed callback.
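That message/update round trip is the standard EME pattern in content JavaScript. A hedged sketch follows; LICENSE_URL is a placeholder, not a real endpoint:

```javascript
// Placeholder license server URL for illustration only.
const LICENSE_URL = "https://license.example.invalid/get";

async function negotiateLicense(mediaKeys, initDataType, initData) {
  const session = mediaKeys.createSession();
  session.addEventListener("message", async (event) => {
    // Forward the CDM's challenge (which travelled up from the GMP
    // process via ChromiumCDMParent) to the license server...
    const response = await fetch(LICENSE_URL, {
      method: "POST",
      body: event.message,
    });
    const license = await response.arrayBuffer();
    // ...and hand the response back; a valid license makes the CDM
    // report the keys as usable.
    await session.update(license);
  });
  await session.generateRequest(initDataType, initData);
  return session;
}
```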
Once the key becomes usable, the SamplesWaitingForKey gets a callback, and the EME MediaDataDecoder will submit the sample for processing by the CDM and the pipeline unblocks.

EME on Android

EME on Android is similar in terms of the EME DOM binding and integration with the MediaFormatReader and friends, but it uses a MediaDrmCDMProxy instead of a ChromiumCDMProxy. The MediaDrmCDMProxy doesn't talk to the GMP subsystem, and instead uses the Android platform's inbuilt Widevine APIs to process encrypted samples.

How WebRTC uses OpenH264

WebRTC uses OpenH264 for encode and decode of baseline H.264 streams. It doesn't need all the DRM stuff, so it talks to the OpenH264 GMP via the PGMPVideoDecoder and PGMPVideoEncoder protocols.
The child actors GMPVideoDecoderChild and GMPVideoEncoderChild talk to OpenH264, which conforms to the GMP API.
OpenH264 is not used by Firefox for playback of H.264 content inside regular <video>, though there is still a GMPVideoDecoder MediaDataDecoder in the tree should this ever be desired.

How GMP shutdown works

Shutdown is confusing, because there are three processes involved. When the destructor of the MediaKeys object in the content process is run (possibly because it's been cycle or garbage collected), it calls CDMProxy::Shutdown(), which calls through to ChromiumCDMParent::Shutdown(), which cancels pending decrypt/decode operations, and sends a Destroy message to the ChromiumCDMChild.
In the GMP process, ChromiumCDMChild::RecvDestroy() shuts down and deletes the CDM instance, and sends a __delete__ message back to the ChromiumCDMParent in the content process.
In the content process, ChromiumCDMParent::Recv__delete__() calls GMPContentParent::ChromiumCDMDestroyed(), which calls CloseIfUnused(). The GMPContentParent tracks the living protocol actors for this plugin instance in this content process, and CloseIfUnused() checks if they're all shutdown. If so, we unlink the GMPContentParent from the GeckoMediaPluginServiceChild (which is PGMPContent protocol's manager), and close the GMPContentParent instance. This shuts down the bridge between the content and GMP processes.
This causes the GMPContentChild in the GMP process to be removed from the GMPChild in GMPChild::GMPContentChildActorDestroy(). This sends a GMPContentChildDestroyed message to GMPParent in the main process.
In the main process, GMPParent::RecvPGMPContentChildDestroyed() checks if all actors on its side are destroyed (i.e. if all content processes' bridges to this GMP process are shutdown), and will shutdown the child process if so. Otherwise we'll check again the next time one of the GMPContentParents shuts down. 
Note there are a few places where we use GMPContentParent::CloseBlocker. This stops us from shutting down the child process when there are no active actors but we still need the process alive. This is useful for keeping the child alive in the time between operations, for example after we've retrieved the GMPContentParent, but before we've created the ChromiumCDM (or some other) protocol actor.

How crash reporting works for EME CDMs

Crash handling for EME CDMs is confusing for the same reason as shutdown: there are three processes involved. It's tricky because the crash is first reported in the parent process, but we need state from the content process in order to identify which tabs need to show the crash reporter notification box.
We receive a GMPParent::ActorDestroy() callback in the main process with aWhy==AbnormalShutdown. We get the crash dump ID, and dispatch a task to run GMPNotifyObservers() on the main thread. This collects some details, including the pluginID, and dispatches an observer service notification "gmp-plugin-crash".  A JavaScript module ContentCrashHandlers.jsm observes this notification, and rebroadcasts it to the content processes.
JavaScript in every content process observes the rebroadcast, and calls mozIGeckoMediaPluginService::RunPluginCrashCallbacks(), passing in the plugin ID. Each content process' GeckoMediaPluginService then goes through its list of GMPCrashHelpers, and finds those which match the pluginID. We then dispatch a PluginCrashed event at the window that the GMPCrashHelper reports as the current window owning the plugin. This is then handled by PluginChild.jsm, which sends a message to cause the crash reporter notification bar to show.

GMP crash reporting for WebRTC

Unfortunately, the code paths for WebRTC crash handling are slightly different, because its window is owned by PeerConnection. WebRTC doesn't use GMPCrashHelpers; instead, PeerConnection helps find the target window to dispatch PluginCrashed to.
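The notify-and-rebroadcast fan-out described above is an instance of the observer pattern. A toy model of the idea (not Gecko's real nsIObserverService or ContentCrashHandlers.jsm):

```javascript
// Minimal observer service: the main process notifies a topic such
// as "gmp-plugin-crash", and every registered listener receives the
// crash details.
class ObserverService {
  constructor() {
    this.topics = new Map(); // topic -> array of listener callbacks
  }

  addObserver(topic, fn) {
    const list = this.topics.get(topic) || [];
    list.push(fn);
    this.topics.set(topic, list);
  }

  notifyObservers(topic, data) {
    for (const fn of this.topics.get(topic) || []) fn(data);
  }
}
```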
Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: How accessibility trees inform assistive tech

Mozilla planet - wo, 26/06/2019 - 15:09

The web is accessible by default. It was designed with features to make accessibility possible, and these have been part of the platform pretty much from the beginning. In recent times, inspectable accessibility trees have made it easier to see how things work in practice. In this post we’ll look at how “good” client-side code (HTML, CSS and JavaScript) improves the experience of users of assistive technologies, and how we can use accessibility trees to help verify our work on the user experience.

People browse differently

Assistive Technology (AT) is the umbrella term for tools that help people operate a computer in the way that suits them. Braille displays, for instance, let blind users understand what’s on their screen by conveying that information in braille format in real time. VoiceOver, a utility for Mac and iOS, converts text into speech, so that people can listen to an interface. Dragon NaturallySpeaking is a tool that lets people operate an interface by talking into a microphone.

A refreshable Braille display (Photo: Sebastien.delorme)

The idea that people can use the web in the way that works best for them is a fundamental design principle of the platform. When the web was invented to let scientists exchange documents, those scientists already had a wide variety of systems. Now, in 2019, systems vary even more.  We use browsers on everything from watches to phones, tablets to TVs. There is a perennial need for web pages that are resilient and allow for user choice. These values of resilience and flexibility have always been core to our work.

AT draws on these fundamentals. Most assistive technologies need to know what happens on a user’s screen. They all must understand the user interface, so that they can convey it to the user in a way that makes sense. Many years ago, assistive technologies relied on OCR (optical character recognition) techniques to figure out what was on the screen. Later they consumed markup directly from the browser. On modern operating systems the software is more advanced: accessibility APIs that are built into the platform provide guidance.

How front-end code helps

Platform-specific Accessibility APIs are slightly different depending on the platform. Generally, they know about the things that are platform-specific: the Start Menu in Windows, the Dock on the Mac, the Favorites menu in Firefox… even the address bar in Firefox. But when we use the address bar to access a website, the screen displays information that it probably has never displayed before, let alone for AT users. How can Accessibility APIs tell AT about information on websites? Well, this is where the right client-side HTML, CSS and JavaScript can help.

Whether we write plain HTML, JSX or Jinja, when someone accesses our site, the browser ultimately receives markup as the start for any interface. It turns that markup into an internal representation, called the DOM tree. The DOM tree contains objects for everything we had in our markup. In some cases, browsers also create an accessibility tree, based on the DOM tree, as a tool to better understand the needs and experiences of assistive technology users. The accessibility tree informs platform-specific Accessibility APIs, which then inform Assistive Technologies. So ultimately, our client-side code impacts the experience of assistive technology users.

A flow chart: your markup results in a DOM tree, which impacts the accessibility tree, which informs the Platform APIs, which ultimately impact AT users.


With HTML, we can be specific about what things are in the page. We can define what’s what, or, in technical terms, provide semantics. For example, we can define something as a:

  • checkbox or a radio button
  • table of structured data
  • list, ordered or unordered, or a list of definitions
  • navigation or a footer area

Stylesheets can also impact the accessibility tree: layout and visibility of elements are sometimes taken into account. Elements that are set to display: none or visibility: hidden are taken out of the accessibility tree completely. Setting display to table/table-cell can also impact semantics, as Adrian Roselli explains in Tables, CSS display properties and ARIA.

If your site dynamically changes generated content in CSS (::before and ::after), this can also appear or disappear in accessibility trees.

And then, there are properties that can make visual layout differ from DOM order, for example order in grid and flex items, and grid-auto-flow: dense in Grid Layout. When visual order is different from DOM order, it is likely also going to be different from accessibility tree order. This may confuse AT users. The Flexbox spec is quite clear: the CSS order property is “for visual, not logical reordering”.


JavaScript lets us change the state of our components. This is often relevant for accessibility, for instance, we can determine:

  • Is the menu expanded or collapsed?
  • Was the checkbox checked or not?
  • Is the email address field valid or invalid?

Note that accessibility tree implementations can vary, creating discrepancies between browsers.  For instance, missing values are computed to null in some browsers, '' (empty string) in others. Differing implementations are one of many reasons plans to develop a standard are in the works.

What’s in an accessibility tree?

Accessibility trees contain accessibility-related meta information for most of our HTML elements. The elements involved determine what that means, so we’ll look at some examples.

Generally, there are four things in an accessibility tree object:

  • name: how can we refer to this thing? For instance, a link with the text ‘Read more’ will have ‘Read more’ as its name (more on how names are computed in the Accessible Name and Description Computation spec)
  • description: how do we describe this element, if we want to add anything to the name? The description of a table could explain what kind of info that table offers.
  • role: what kind of thing is it? For example, is it a button, a nav bar or a list of items?
  • state: if any, does it have state? Think checked/unchecked for checkboxes, or collapsed/expanded for the `<summary>` element

Additionally, the accessibility tree often contains information on what can be done with an element: a link can be followed, a text input can be typed into, that kind of thing.

Inspecting the accessibility tree in Firefox

All major browsers provide ways to inspect the accessibility tree, so that we can figure out what an element’s name has computed to, or what role it has, according to the browser. For some context on how this works in Firefox, see Introducing the Accessibility Inspector in the Firefox Developer Tools by Marco Zehe.

Here’s how it works in Firefox:

  • In Settings, under Default Developer Tools, ensure that the checkbox “Accessibility” is checked
  • You should now see the Accessibility tab
  • In the Accessibility tab, you’ll find the accessibility tree with all its objects

Animated screenshot: open Settings, check “Accessibility”, and the Accessibility tab appears.
Other browsers

In Chrome, the Accessibility Tree information lives together with the DOM inspector and can be found under the ‘Accessibility’ tab. In Safari, it is in the Node tab in the panel next to the DOM tree, together with DOM properties.

An example

Let’s say we have a form where people can pick their favourite fruit:

<form action=""> <fieldset> <legend>Pick a fruit </legend> <label><input type="radio" name="fruit"> Apple</label> <label><input type="radio" name="fruit"> Orange</label> <label><input type="radio" name="fruit"> Banana</label> </fieldset> </form>

Screenshot: three radio buttons, all unchecked: Apple, Orange and Banana.

In Firefox, this creates a number of objects, including:

    • An object with a role of grouping, named Pick a fruit
    • Three objects with roles of label, named Apple, Orange and Banana, with action Click, and these states: selectable text, opaque, enabled, sensitive
    • An object with role of radiobutton, named Apple, with action of Select and these states: focusable, checkable, opaque, enabled, sensitive

And so on. When we select ‘Apple’, checked is added to its list of states.

Note that each thing expressed in the markup gets reflected in a useful way. Because we added a legend to the group of radio buttons, it is exposed with a name of ‘Pick a fruit’.  Because we used inputs with a type of radio, they are exposed as such and have relevant states.

As mentioned earlier, we don’t just influence this through markup. CSS and JavaScript can also affect it.

With the following CSS, we would effectively take the name out of the accessibility tree, leaving the fieldset unnamed:

legend { display: none; /* removes item from accessibility tree */ }

This is true in at least some browsers. When I tried it in Firefox, its heuristics still managed to compute the name to ‘Pick a fruit’; in Chrome and Safari it was left out completely. What this means, in terms of real humans: they would have no information as to what to do with Apple, Orange and Banana.

As mentioned earlier, we can also influence the accessibility tree with JavaScript. Here is one example:

const inputApple = document.querySelector('input[type="radio"]'); inputApple.checked = true; // alters state of this input, also in accessibility tree

Anything you do to manipulate DOM elements, directly through DOM scripting or with your framework of choice, will update the accessibility tree.


To provide a great experience to users, assistive technologies present our web pages differently from how we may have intended them. Yet, what they present is based directly on the content and semantic structure that we provide. As designers and developers, we can ensure that assistive technologies understand our pages well by writing good HTML, CSS and JavaScript. Inspectable accessibility trees help us verify directly in the browser whether our names, roles and states make sense.

The post How accessibility trees inform assistive tech appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Hey advertisers, track THIS

Mozilla planet - wo, 26/06/2019 - 02:01

If it feels like the ads chasing you across the internet know you a little too well, it’s because they do (unless you’re an avid user of ad blockers, in … Read more

The post Hey advertisers, track THIS appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 292

Mozilla planet - di, 25/06/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is winit, a pure-rust cross-platform window initialization library. Thanks to Osspial for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

172 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Africa, Asia Pacific, Europe, North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

why doesn't 'static, the largest lifetime, not simply eat all the others

@mountain_ghosts on twitter

@mountain_ghosts 'static is biggest but actually,, weakest of lifetimes, becuase it is subtype of every lifetime

'static is big soft friend

pls love and protect it

@gankro on twitter

Thanks to Christopher Durham for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Categorieën: Mozilla-nl planet

Emily Dunham: More on Mentorship

Mozilla planet - ma, 24/06/2019 - 09:00
More on Mentorship

Last year, I wrote about some of the aspirations which motivated my move from Mozilla Research to the CloudOps team. At the recent Mozilla All Hands in Whistler, I had the “how’s the new team going?” conversation with many old and new friends, and that repetition helped me reify some ideas about what I really meant by “I’d like better mentorship”.

To generalize about how mentors’ careers affect what they can mentor me on, I’ve sketched up a quick figure in order to name some possible situations that people can be in relative to one another:


The first couple cases of mentorship are easy to describe, because I’ve experienced and thought about them for many years already:

Mentorship across industries

Mentors from outside my own industry are valuable for high level perspectives, and for advice on general life and human topics that aren’t specialized to a single field. Additionally, specialists in other industries often represent the consumers of my own industry’s products. Wise and thoughtful people who share little to none of my domain knowledge can provide constructive feedback on why my industry’s work gets particular reactions from the people it affects – just as someone who’s never read a particular book before is likely to catch more spelling errors than its own author, who’s been poring over the same manuscript for many hours a day for several years.

However, for more concrete problems within my particular career (“this program is running slower than expected”, or even “how should I describe that role on my resume?”), observers from outside of it can rarely offer a well tested recommendation of a path forward.

Mentorship across companies within an industry

Similarly, mentors from other companies within my own industry are my go-to source of insight on general trends and technologies. A colleague in a distant corner of my field can tell me about the frustrations they encountered when using a piece of technology that I’m considering, and I can use that advice to make better-informed choices in my daily work.

But advice on a particular company’s peculiarities rarely translates well across organizations. A certain frequency of reorganization might be perfectly ordinary at my company, but a re-org might indicate major problems at another. This type of education, while difficult to get from someone at a different company, is perfectly feasible to pick up from anyone on another team within one’s own organization.

Mentorship across teams within a company

When I switched roles, I had trial-and-errored my way into the observation that there’s a large class of problems with which mentors from different teams within the same company cannot effectively help. I’d tentatively call these “junior engineer problems”, as having overcome their general cases seems to correlate strongly with seniority. In my own experience, code-adjacent skills such as the intuition for which problems should be solvable from the docs versus when and whom to ask for help, how deeply to explore a prospective course of action before committing to it, and when to write off an experiment as “effectively impossible”, are all honed through experience and by observing expert peers rather than by simply asking them with words.

Mentorship across projects or specialties within a team

I had assumed that simply being on the same team as people capable of imparting that highly specialized variant of common sense would suffice to expose me to it. However, my first few projects on my new team have clearly shown, in both the positive and the negative cases, that working on the same project as an expert is far more useful to my own growth than simply chancing to be bureaucracied into the same group.

The negative case was my first pair of projects: The migration of 2 small, simple services from my team’s AWS infrastructure to GCP. Although I was on the same team as experts in this process, the particular projects were essentially mine alone, and it was up to me to determine how far to proceed on each problem by myself before escalating it to interrupt a busy senior engineer. My heuristics for that process weren’t great, and I knew that at the outset, but my bias toward asking for help later than was optimal slowed the process of improving my ability to draw that line – how can one enhance one’s discrimination between “too soon”, “just right”, and “too late” when all the data points one gathers are in the same one of those categories?

Mentorship within a project

Finally, however, I’m in the midst of a project that demonstrates a positive case for the type of mentorship I switched teams to seek. I’m in the case labeled A on the diagram up above – I’m working with a more-experienced teammate on a project which also includes close collaboration with members of another team within our organization. In examining why this is working so much better for me than my prior tasks, I’ve noticed some differences: First, I’m getting constant feedback on my own expectations for my work. This is neither a serious nor a bureaucratic process, but simply a series of tiny interactions – expressions of surprise when I complete a task effectively, or recommendations to move on to a different approach when something seems to take too long. Similarly, code review from someone immersed in the same problem that I’m working on is indescribably more constructive than review from someone who’s less familiar with the nuances of whatever objective my code is trying to achieve.

Another reason that I suspect I’m improving more quickly than before in this particular task is the opportunity to observe my teammate modeling the skills that I’m learning in his interactions with our colleagues from another team (those in position C on that chart). There’s always a particular trick to asking a question in a way that elicits the category of answer one actually wanted, and watching this trick done frequently in circumstances where I’m up to date on all the nuances and details is a great way to learn.

The FOSS loophole

I suspect I may have been slower to notice these differences than I otherwise might have been, because the start of my career included a lot of fantastic, same-project mentorship from individuals on other teams, at other companies, and even in other industries. This is because my earliest work was on free and open source software and infrastructure. In FOSS, anyone who can pay with their time and computer usage buys access to a cross-company, often cross-industry web of professionals and can derive all the benefits of working directly with mentors on a single project. I was particularly fortunate to draw a wage from the OSU Open Source Lab while doing that work, because the opportunity cost of hours spent on FOSS by a student who also needs to spend those hours on work is far from free.

Categorieën: Mozilla-nl planet

Daniel Stenberg: openssl engine code injection in curl

Mozilla planet - ma, 24/06/2019 - 07:46

This flaw is known as CVE-2019-5443.

If you downloaded and installed a curl executable for Windows from the curl project before June 21st 2019, go get an updated one. Now.

On Windows, using OpenSSL

The official curl builds for Windows – that the curl project offers – are built cross-compiled on Linux. They’re made to use OpenSSL by default as the TLS backend, by far the most popular TLS backend among curl users.

The curl project has provided official curl builds for Windows on and off through history, but most recently this has been going on since August 2018.

OpenSSL engines

These builds use OpenSSL. OpenSSL has a feature called “engines”. Described by the project itself like this:

“a component to support alternative cryptography implementations, most commonly for interfacing with external crypto devices (eg. accelerator cards). This component is called ENGINE”

More simply put, an “engine” is a plugin for OpenSSL that can be loaded and run dynamically. The particular engine is activated either built-in or by loading a config file that specifies what to do.

curl and OpenSSL engines

When using curl built with OpenSSL, you can specify an “engine” to use, which in turn allows users to use their dedicated hardware when doing TLS related communications with curl.

By default, the curl tool allows OpenSSL to load a config file and figure out what engines to load at run-time but it also provides a build option to make it possible to build curl/libcurl without the ability to load that config file at run time – which some users want, primarily for security reasons.

The mistakes

The primary mistake in the curl build for Windows that we offered, was that the disabling of the config file loading had a typo which actually made it not disable it (because the commit message had it wrong). The feature was therefore still present and would load the config file if present when curl was invoked, contrary to the intention.

The second mistake comes a little more from the OpenSSL side: if you build OpenSSL cross-compiled like we do, the default path where it looks for the above-mentioned config file is under the c:\usr\local tree. It is in fact complicated, even impossible, to fix this path in the build without a patch.

What the mistakes enable

These mistakes allowed a non-privileged user or program (the attacker) with access to the host to put a config file in the directory where curl would look for a config file (creating the directory first, as it probably didn't already exist), along with the suitable associated engine code.

Then, when a privileged user subsequently executes curl, it will run with more power and execute the code, the engine, that the attacker had put there. An engine is a piece of compiled code; it can do virtually anything on the machine.

The fix

Already three days ago, on June 21st, a fixed version of the curl executable for Windows was uploaded to the curl web site (“curl 7.65.1_2”). All older versions that had been provided in the past were removed to reduce the risk of someone still using an old lingering download link.

The fix now makes the curl build switch off the loading of the config file, as was already intended. But also, the OpenSSL build that is used for the build is now modified to only load the config file from a privileged path that isn’t world writable (C:/Windows/System32/OpenSSL/).

Widespread mistake

This problem is very widespread among projects on Windows that use OpenSSL. The curl project coordinated this publication with the postgres project and has worked with OpenSSL to make them improve their default paths. We have also found a few other OpenSSL-using projects that have already fixed their builds for this flaw (like stunnel), but I think we have reason to suspect that there are more vulnerable projects out there still not fixed.

If you know of a project that uses OpenSSL and ships binaries for Windows, give them a closer look and make sure they’re not vulnerable to this.

The cat is already out of the bag

When we got this problem reported, we soon realized it had already been publicly discussed and published for other projects even before we got to know about it. Due to this, we took it to publication as quick as possible to minimize user impact as much as we can.

Only on Windows and only with OpenSSL

This flaw only exists on curl for Windows and only if curl was built to use OpenSSL with this bad path and behavior.

Microsoft ships curl as part of Windows 10, but it does not use OpenSSL and is not vulnerable.


This flaw was reported to us by Rich Mirch.

The build was fixed by Viktor Szakats.

The image on the blog post comes from pixabay.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR15b1 available

Mozilla planet - za, 22/06/2019 - 22:28
TenFourFox Feature Parity Release 15 beta 1 is now available (downloads, hashes, release notes).

In honour of New Coke's temporary return to the market (by the way, I say it tastes like Pepsi and my father says it tastes like RC), I failed again with this release to get some sort of async/await support off the ground, and we are still plagued by issue 533. The second should be possible to fix, but I don't know exactly what's wrong. The first is not possible to fix without major changes because it reaches up into the browser event loop, but should be still able to get parsing and thus enable at least partial functionality from the sites that depend on it. That part didn't work either. A smaller hack, though, did make it into this release with test changes. Its semantics aren't quite right, but they're good enough for what requires it and does fix some parts of Github and other sites.

However, there are some other feature improvements, including expanded blocking of cryptominers when basic adblock is enabled (from the same list Mozilla uses for enhanced privacy in mainstream Firefox), and updated internationalization support with upgraded timezones and locales such as the new Japanese Reiwa era (for fun, look at Is it Reiwa yet? in FPR14.1 before you download FPR15b1). The usual maintenance and security fixes are (will be) also included (in final). In the meantime, I'm going to take a different pass at the async/await problem for FPR16. If even that doesn't work, we'll have to see where we're at then for parity purposes, since while the majority of websites still work well in TenFourFox's heavily patched-up engine there are an increasing number of major ones that don't. It's hard to maintain a browser engine on your own. :(
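For context on why partial async/await support matters, in a pre-ES2017 engine the `async` and `await` keywords are a parse error, so a single function like the generic illustration below (not TenFourFox code) takes down the entire script file it appears in, not just that function — which is why merely being able to parse the syntax already restores partial functionality on sites that use it:

```javascript
// ES2017 async/await is sugar over promises. An engine that cannot
// parse these keywords rejects the whole file at load time.
async function doubleLater(x) {
  const value = await Promise.resolve(x); // suspends until resolved
  return value * 2;
}

doubleLater(21).then((v) => console.log(v)); // logs 42
```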

Meanwhile, if you'd like the next generation of PowerPC but couldn't afford a Talos II, maybe you can afford a Blackbird. Here's what I thought of it. (See also the followup.)


Karl Dubost: Quick notes for Mozilla Whistler All Hands 2019

Mozilla planet - za, 22/06/2019 - 08:12

Whistler 2019 Quick Notes

(taken as it comes, without a specific logic, just thoughts here and there. Emotions. To take with a pinch of salt.)

  • Plane trip without a hitch from Japan.
  • Back in Vancouver after 5 years; from the bus windows, I noticed the new high-rise condos and I wonder who can afford them when there are so many of them. People living on credit and loans?
  • All the Vietnamese restaurants just make me want to stop to have a Bun Bo Hue.
  • Bus didn’t get a flat tire
  • Two very chatty persons beside me during the full bus trip never stopped talking. A flow of words very difficult to cope with when you are tired with jet lag.
  • Noisy Welcome reception.
  • Happy to see new people, happy to see old friends.
  • Beautiful view, I just want to hop in shoes and hike the trails.
  • Huge North American hotel room with cold Air con and all lights on is a waste.
  • Cafe latte. Wonderful.
  • Uneasy with the Native American dance. Culture out of context.
  • I like Roxy Wen for her direct talk about things.
  • Stan Leong very positive vibe for Mozilla and Taipei office.
  • Fewer people who seemed to read a script at the Plenary. This is a good thing.
  • Overall good impression of the Plenary on Tuesday.
  • Does Pocket surface blogs written by ordinary people? What’s happening in there? The promoted content seems to come from mainstream publishers.
  • Noisy environments do not help to have soft, relaxed discussions.
  • Finding a bug and being in admiration of the explanation by Boris Zbarsky
  • The wonderfully intoxicating smell of cypress in the mornings
  • Early morning and refreshing cold makes me happy.
  • Thanks Brianna for the cafe latte station at the breakfast area.
  • I guess I do not have a very good relationship with marketing. I need to dive into that. Plenary Wednesday.
  • Our perception of privacy is not equally distributed. People have different expectations and habits. People working at Mozilla are privileged compared to the rest of the population.
  • That said, there were comments during the panel by Lindsey Shepard, VP Product Marketing which resonated with me. So maybe, I need to break down my own silos.
  • Performance Workshop. We, the developers and techies, are a bourgeoisie (by/through our devices), which makes us blind to the performance reality of ordinary users. This ties into the Plenary this morning about knowing the normal people who use services online.
  • Congratulations to people who made possible to have a dot release during the All Hands.
  • Little discussions here and there which help you to unpack a lot of unknown contexts, specifically when you are working remotely. Invaluable.
  • Working. Together.
  • Released a long-overdue version of the code for the webcompat metrics dashboard. Found more bugs. Fixed more bugs. Filed new issues.
  • The demos session made me discover cool projects that I had no idea about. This is useful and cool.
  • Chatting about movies from childhood to now with friends we do not get to see often enough.
  • Laptop… shutting off automatically when the battery reaches 50%, keys 2 and m repeating from time to time, and the shift key not working 20% of the time. That last one is probably the most frustrating. After 2 years, this MacBook Pro is not showing good signs of health.
  • Spotted two bears from the gondola on our way to the top of the mountain.
  • Very good feeling about the webcompat metrics discussions after the talk by Mike Taylor. Closer work in between Web Platform Tests and Web Compat sounds like a very good thing. We need to explore and define the small loosely joined hooks that will make it really cool.
  • Firefox Devtools team, you are a bunch of awesome people.
  • Plenaries, for this Whistler All Hands, felt more sincere, more in touch with people with clearer goals for Mozilla (than the last 6 years since I started at Mozilla). So that was cool.
  • Loved the cross-cultural/cross-team vibes.
  • Thanks to the people who are contributing to the projects and give one week of their precious time with their family to work on the projects they care about.
  • Whistler is a very expensive place.
  • Slept through all the ride back from Whistler to Vancouver, avoiding being motion sick.
  • Staying in Vancouver for a couple of days
  • Then heading back to Japan on Wednesday.

