I posted an update to the Firefox Roadmap.
Firefox will deliver a rock-solid browsing experience with world-beating customization and a first-of-its-kind recommendation engine that gets you the content you want, when you want it, whether at home or on the go.
Project SensorWeb is an experiment from the Connected Devices group at Mozilla in open publishing of environmental data. I am excited about this experiment because we’ve had some serious air quality discoveries in Portland recently – our air is possibly the worst in the USA, and bad enough that mega-activists like Erin Brockovich are getting involved.
A couple of weeks ago, Eddie and Evan from Project SensorWeb helped me put together a NodeMCU board and a PM2.5 sensor so I could set up an air quality sensor in Portland to report to their network. They’re still setting up the project so I haven’t gotten the configuration info from them yet…
But you don’t need the SensorWeb server to get your sensor up and running and pushing data to your own server! I want a copy of the data for myself anyway, to be able to do my own visualizations and notifications. I can then forward the data on to SensorWeb.
So I started by flashing the current version of the SensorWeb code to the device, which is a NodeMCU 0.9 board with an ESP8266 wifi chip, and a PM2.5 sensor attached to it.
I used Kumar Rishav’s excellent step-by-step post to get through the process.
Some things I learned along the way:
- On Mac OS X you need a serial port driver in order for the Arduino IDE to detect the board.
- After much gnashing of teeth, I discovered that you can’t have the PM2.5 sensor plugged into the board when you flash it.
After getting the regular version flashed correctly, I tested with Kumar’s API key and device id, and confirmed it was reporting the data correctly to the SensorWeb server.
Now for the changes.
- I set up the Maker channel on IFTTT, which allows me to post data to an HTTP endpoint to get it into IFTTT’s system.
- I then created a new IFTTT recipe that accepts the data from the device and pushes it into a Google spreadsheet.
- I forked the SensorWeb code and modified it to post to the Maker channel instead of the SensorWeb server.
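The firmware change boils down to pointing the device’s HTTP POST at the Maker channel trigger URL instead of the SensorWeb server. As a rough sketch (in Python rather than the board’s Arduino code, and with a placeholder event name and key), the same request looks like this:

```python
import json
import urllib.request

def maker_url(event, key):
    """Build the IFTTT Maker channel trigger URL for a given event and key."""
    return "https://maker.ifttt.com/trigger/%s/with/key/%s" % (event, key)

def post_reading(pm25, event="pm25_reading", key="YOUR_MAKER_KEY"):
    """POST a PM2.5 reading as value1; the IFTTT recipe can then append it
    to the Google spreadsheet. The event name and key are placeholders."""
    body = json.dumps({"value1": pm25}).encode("utf-8")
    req = urllib.request.Request(
        maker_url(event, key),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

On the device itself, the equivalent change is just swapping the host and path in the firmware’s HTTP request.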
I flashed the device and voilà, it is publishing data to my spreadsheet.
And now once SensorWeb is ready to take new devices, I can set up a new IFTTT recipe to forward the posts to them, allowing me to own my own data and also publish to the project!
This post should find its way to the Planet Mozilla twitter feed; if it does, new posts to Planet will be reflected there again.
(Pic cropped out of the “Halo: ODST” trailer from when it was still called “Recon”. Reinitialize…)
I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues
In the past 3 weeks, 1278 listed add-ons were reviewed:
- 1194 (93%) were reviewed in fewer than 5 days.
- 62 (5%) were reviewed between 5 and 10 days.
- 22 (2%) were reviewed after more than 10 days.
There are 74 listed add-ons awaiting review.
You can read about the recent improvements in the review queues here.
If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers are critical for our success, and can earn cool gear for their work. Visit our wiki page for more information.

Compatibility
As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing
The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to remove the signing override preference in Firefox 48. The unbranded builds, which were the final piece, are available here. We will update the Extension Signing doc with more information about how to obtain them.

Recognition
We would like to thank these people for their recent contributions to the add-ons world: Sylwia Ornatowska, dw-dev, Lavish Aggarwal, Baris Derin, Martin Giger, Viswaprasath, gorf4673, and Atique Ahmed Ziad.
You can read more about their contributions in our recognition page.
mconley livehacks on real Firefox bugs while thinking aloud.
This is the SUMO weekly call.
Last month I spent a week working on IoT projects with a group of 40 researchers, designers and coders… in Anstruther, a small fishing village in Scotland. Not a high-tech hub, but that was the point. We immersed ourselves in a small community with limited connectivity and interesting weather (and fantastic F&C) in order to explore how they use technology and how ubiquitous physical computing might be woven into their lives.
The ideamonsters behind this event were Michelle Thorne and Jon Rogers, who are putting on a series of these exploratory events around the world this year as part of the Mozilla Foundation’s Open IoT Studio. The two previous editions of this event were a train caravan in India and a fablab sprint in Berlin (which I also attended, and will write up as well. I SWEAR.).
Michelle and Jon will be writing a proper summary of the week as a whole, so I’m going to focus on the project my group built: Bubble.

Context
From research conducted with local fishing folk, farmers from a bit inland, and a group of teenagers from the local school, we figured out a few things:
- Mobile connectivity is sparse and unreliable throughout the whole region. In this particular town, only about half the town had any service.
- The important information, places and things aren’t immediately obvious unless you know a local.
- Just like I was, growing up in a small town: The kids are just looking for something to do.
Initially we focused on the teens… fun things like virtual secret messaging at the red telephone boxes. Imagine you connect to the wifi at the phone box, and the captive portal is a web UI for leaving and receiving secret messages. Perhaps they’re only read once before dying, like a hyperlocal Snapchat. Perhaps the system is user-less, mediated only by secret combinations of emojis as keys. The street corner becomes the hangout, physically and digitally.
We meandered to public messaging from there, thinking about how there’s so much to learn and share about the physical space. What’s the story behind the messages to fairies that are being left in that phone box? I can see the island off the coast from here – what’s it called and what’s out there? Who the hell is Urquardt and what’s a “wynd”? Maybe we make a public message board as well – disconnected from the internet but connected to anything within view.
We kept going back to the physical space. We talked about a virtual graffiti wall, and then started exploring AR and ways of marking up the surroundings – the people, the history, the local pro-tip on which fish and chips shop is the best. But all of this available only to people in close physical proximity.

Implementation
Given the context and the constraints, as well as watching the direction some of the other groups were going in, we started designing a general approach to bringing digital interactivity to disconnected spaces.
The first cut is Bubble: A wi-fi access point with a captive portal that opens a web page that displays an augmented reality view of your immediate surroundings, with messages overlaid on what you’re seeing:
A few implementation notes:
- We used a Raspberry Pi 3, running as a wi-fi access point.
- It ran a node.js script that served up the captive portal web UI.
- The web UI used getUserMedia to access the device camera, awe.js for the AR bits, and A-Frame for a VR backup view on iOS.
- We designed a logo and descriptive text and then lasercut some plaques to put up where hotspots are.
Designs, board, battery and boxes:
Connected to the front-end:

In the final box:
Some limitations we ran into:

- Captive portals are hobbled web pages. You can’t do things like use getUserMedia to get access to the camera.
- iOS doesn’t have *any* way to let web pages access the camera.
- Power can be hard. We talked about solar and other ways of powering these.
- Gotta hope they don’t get nicked.
I’m happy to announce that we now have an official set of curl stickers that you can get. Sorry, that came out wrong. That you should get! The first official curl stickers ever and they’re all based on our new and shiny logo.
These stickers are designed and sold by the great folks over at unixstickers.com, and for every purchase you make, a small percentage adds up to stickers for me, so that I can hand them out to peeps I meet.
Firefox has its own built-in update system. The update system supports two types of updates: complete and incremental. Completes can be applied to any older version, unless there are incompatible changes in the MAR format. Incremental updates can be applied only to the release they were generated for.
Usually for the beta and release channels we generate incremental updates against 3-4 versions. This way we try to minimize bandwidth consumption for our end users and increase the number of users on the latest version. For Nightly and Developer Edition builds we generate 5 incremental updates using funsize.
Both methods assume that we know ahead of time what versions should be used for incremental updates. For releases and betas we use ADI stats to be as precise as possible. However, these methods are static and don't use real-time data.
The idea to generate incremental updates on demand has been around for ages. Some of the challenges are:
- Acquiring real-time (or close to real-time) data for making decisions on incremental update versions
- Size of the incremental updates. If the size is very close to the size of the corresponding complete, there is no reason to serve the incremental update. One reason is that the updater tries the incremental update first, and then falls back to the complete if something goes wrong; in that case the updater downloads both the incremental and the complete.
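To make the size trade-off concrete, here is a toy sketch of that decision; the 80% cutoff is an illustrative assumption, not an actual Balrog or funsize setting:

```python
def worth_serving_partial(partial_size, complete_size, cutoff=0.8):
    """Serve an incremental (partial) update only if it is meaningfully
    smaller than the complete MAR. If the partial fails to apply, the
    updater falls back to the complete, so a near-complete-sized partial
    risks nearly doubling the total download."""
    return partial_size < cutoff * complete_size
```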
Ben and I talked about this today and to recap some of the ideas we had, I'll put them here.
- We still want to "pre-seed" the most likely incremental updates before we publish any updates
- Whenever Balrog serves a complete-only update, it should generate a structured log entry and/or an event to be consumed by some service, which should contain all information required to generate an incremental update.
- The "new system" should be able to decide whether to discard incremental update generation, based on the size. These decisions should be stored, so we don't try to generate the incremental update again next time. This information may be stored in Balrog to prevent further events/logs.
- Before publishing the incremental updates, we should test that they can be applied without issues, similar to the update verify tests we run for releases, but without hitting Balrog. After they pass this test, we can publish them to Balrog and check that Balrog returns the expected XML with partial info in it.
- Minimize the number of served completes if we plan to generate incremental updates. One of the ideas was to modify the client to support responses like "Come in 5 minutes, I may have something for you!"
The only remaining thing is to implement all these changes. :)
In London (Mozlondon) we had a session on creating a SUMO release report to publish a few weeks after major updates to Mozilla products. This post is the first, and it includes testimonials and submissions from users in the community to make it unique to SUMO. It is intended to highlight all of the work the community accomplishes together: user testimonials and feedback; the copious issues found, brought to attention, and solved; the knowledge base articles created and collaborated on; article translations into many languages; and organized social media. This report shows how much we need your help; core community members and new ones are equally important. We have highlighted the issues that were, and are still actively being, tracked down to improve Firefox and other Mozilla products.
We have lots of ways to contribute, from Support to Social to PR; the ways you can help shape our communications program and tell the world about Mozilla are endless. For more information: https://goo.gl/NwxLJF

Customer Kudos to the SUMO community
Users said “thank you” 1,104 times across the 7,300 answers posted during this time.
We cannot include all of the thank-yous that were received, but these are some of the community members who received thank-yous from Firefox users. Shout-outs to Fred, cor-el, Seburo, philipp, Matt, Zenos, Scribe, jscher2000, James, Wayne Mary, Chris Ilias, Christ1, the-edmeister, Tonnes, and Toad-Hall. They all received direct thank-yous from users for their solved issues.

Feedback and community highlights
- Great support from John 99, jscher2000, and FredMcD
- Great support from Seburo
Desktop (June 7 – June 30)

- Allow Firefox to load multiple tabs in the background (71-76%, 5340): “Why do you think it is a good idea to confuse existing users with taking away the options they once had? Why change the options that they choose to set? I am mildly upset”
- Pages appear tiny when I print or view them in Print Preview (51-62%, 3281):
  - “Still having an issue.”
  - “My print_paper_height and _width settings appear in millimeters even though the paper being used is set to 8.5 x 11 inches. Margin settings still appear in inches”
  - “Prints half size in width. Followed instructions exactly”
  - “page goes from very small when printing to very large font when using email”
  - “actually my log in page is about the size of a dollar bill…..I cant see it because it is so small I can go to Internet Explorer and I have NO PROBLEM but firefox another story”
- Firefox support has ended for OS X 10.6, 10.7 and 10.8 (57-83%, 3222): none
- Watch DRM content on Firefox (66-70%, 209049):
  - “never had a problem watching videos on amazon prime till you people came up with this explanation that to non tech people is just jibberish”
  - “all of a sudden not work to stream toytube or netfilx”
  - “Has no mention of whether Linux will have Widevine support in the future. This seems odd given that Google Chrome already has that support built-in.”

Android (June 7 – June 30)

- Turn off web fonts in Firefox for Android (100%, 340): none
- What’s new in Firefox for Android (no rating, 8): none
- Firefox Marketplace Apps Stop Working on Firefox 47+ for Android (71-95%, 26612): none

iOS

- What’s new in Firefox for iOS (version 4.0) (60-87%, 1,618,561): none
- Add Firefox to the Today view on your iOS device (75-85%, 18,071): none
- Certificate warnings in Firefox for iOS (72-76%, 2,751): none
No articles were linked from major publications (via Google Analytics), but if you see any in your region, please mention them.

Localization

Desktop (June 7 – June 30), Top 10 / Top 20 locale coverage:

- Allow Firefox to load multiple tabs in the background: 100% / 66.6%
- Pages appear tiny when I print or view them in Print Preview: 100% / 66.6%
- Firefox support has ended for OS X 10.6, 10.7 and 10.8: 40% / 23.8%
- Watch DRM content on Firefox: 100% / 80%

Android (June 7 – June 30), Top 10 / Top 20 locale coverage:

- Turn off web fonts in Firefox for Android: 100% / 66.6%
- What’s new in Firefox for Android: 60% / 33%
- Firefox Marketplace Apps Stop Working on Firefox 47+ for Android: 100% / 57%

Support Forum Threads
One of the major issues during the first three weeks of the release was an increase in reports of fake updates and of malware from those updates. Many of them were reported and are still being investigated.
- I received an “urgent Firefox Patch” notification: is this legitimate?
- Is this a patch from firefox? Firefox-patch.exe
- I got a Urgent Firefox update from https://phaitxiaoshoubang.org/8301092144808/e8f07d9270bdade361fab48a9d15e67e.html Should I manually install ?
- I got a red screen with the Firefox logo that says “Urgent Firefox update” asking to save the file firefox-patch.exe. Is this real?
- More on this in this thread post: here
Not solved top viewed threads – GA
- [Noah] This typo has embarrassingly been in release on the Argentina locale since July 2015: the word “nombre” (name) was misspelled “mombre”.
- https://bugzil.la/1282867 – [es-AR] Translation update proposed for browser/chrome/browser/preferences/sync.dtd:changeSyncDevice
- “version 47.0 browser is not responding when i want to see print version”
- A significant crash caused by Flash was investigated, still hunting down the link…they uplifted it for beta 4
- There were not many this time; if you recall any for next time we would love to hear about them.

What was searched:
Brought to you by Sprinklr
Total contributors in program
In this time we had a total of 14 of you log in and participate in the Firefox 47 release.
New users added in period of the report
Welcome Magno, Daniella, Luis, and TheoC to the team! You were very active these past three weeks, and thank you for supporting Mozilla Open Source users on Facebook and Twitter through the Sprinklr tool.
Top 5 Contributors

- Andrew Truong: 56 replies
- Noah Y: 24 replies
- Jhonatas Rodrigues Machado: 12 replies
- Alex_Mayorga: 6 replies
- Magno Reis: 4 replies
Number of Replies: 111

Trending issues in Sprinklr

Outbound top engagement:
Each Facebook outbound post reached one person for support; the two major engagements overlapped with Code Emoji and the plane.

Top Twitter Posts
This version we removed the tag summary and are currently working on items that translate to more specific categories. “Not working” will be removed and more will be added. However, taking a deep dive into the top categories for outbound messages to Mozilla Open Source product users, this is what we found.
- “Not working” is associated with these suggested troubleshooting steps and categories: antivirus was mentioned 3 times, malware once, trying a Firefox Refresh once, hanging once, trying Safe Mode once, Kaspersky once, changing a Firefox setting once, and one more complicated website issue was taken to the forums for troubleshooting. These are some of the most common troubleshooting steps for Firefox for Desktop, described here: http://mzl.la/16zLrEU
- “Crashes” is associated with outbound troubleshooting that points to http://mzl.la/14A6XM2 or to the support forum, about 50/50.
- “Video” tags were the third top category; basic troubleshooting was to clear the cache, ask whether the website is the issue, and suggest HTML5.
Topical Data Post: Pokemon Go
I live in Canada which means we hear a lot about things that are United States-only. The latest (and the largest, outstripping in volume and velocity even the iPhone (which I may misremember being the last must-have-thing back in 2007)) is the hit augmented-reality mobile game Pokémon GO.
One gameplay mechanic of Pokémon GO (I am told) revolves around hatching eggs. These eggs hatch not after a certain period of time, but after you have walked a certain distance with the application open on your phone.
The kicker is that the distance is measured in kilometres, a unit whose use the United States and United Kingdom have evaded (yes, despite the latter’s metrication since 1965). People in the United States are being confronted with unfamiliar distance units of 2km, 5km, and 10km.
This, via some Twitter jabs, led me to Google Trends and a prediction: what if Pokémon GO’s release date in a region that still uses miles as a unit of distance could be detected simply through the rise in search volume for the term “5km”?
So far, the data for the United States is consistent:
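The prediction is easy to check mechanically against a weekly search-volume series. A toy heuristic (the threshold factor is my assumption, not anything Google Trends provides): flag the first week whose volume jumps well above the median of the preceding weeks.

```python
import statistics

def detect_launch_week(volumes, factor=3.0):
    """Return the index of the first week whose search volume is at least
    `factor` times the median of all preceding weeks, or None if no such
    jump occurs. `volumes` is a list of weekly search-volume numbers."""
    for i in range(1, len(volumes)):
        baseline = statistics.median(volumes[:i])
        if baseline > 0 and volumes[i] >= factor * baseline:
            return i
    return None
```

Applied to the US series for “5km”, the flagged week should line up with the Pokémon GO release.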
I await the UK launch date to follow up.
Weekly project updates from the Mozilla Connected Devices team.
Q3 planning is in full swing. Chief priorities continue to be the migration to TaskCluster (TC) from buildbot, and release process improvements.
Aki finished porting configman to python3 (merged!). https://github.com/mozilla/configman/pull/163
Windows try builds were enabled on TC Windows 2012 worker types in staging (allizom) (win32/win64, opt/debug). If all goes well, this will propagate to production in the coming days. This is the first set of non-Linux tasks we’ve had running reliably in TC, which is obviously a huge step in our migration away from buildbot.
Improve Release Pipeline:
We implemented a short cache for Balrog rules, which greatly reduced load on the database. https://bugzilla.mozilla.org/show_bug.cgi?id=1111032
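A cache like that can be tiny. A minimal sketch of a short time-based cache (an illustration only, not the actual Balrog implementation):

```python
import time

class TTLCache:
    """Entries expire after `ttl` seconds, so repeated rule lookups within
    that window skip the expensive computation (for Balrog, the database
    query). Illustration only, not the actual Balrog code."""

    def __init__(self, ttl=60, clock=time.time):
        self.ttl = ttl
        self.clock = clock  # injectable for testing
        self._store = {}

    def get(self, key, compute):
        now = self.clock()
        hit = self._store.get(key)
        if hit is not None and now - hit[0] < self.ttl:
            return hit[1]  # still fresh: no database round-trip
        value = compute()  # expired or missing: recompute and remember
        self._store[key] = (now, value)
        return value
```

Even a TTL of a minute collapses a burst of identical rule lookups into a single database query.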
Kim stood up beta builds with addon signing preferences disabled. This allows addon developers to test their addons prior to signing on release-equivalent builds. https://bugzilla.mozilla.org/show_bug.cgi?id=1135781
Improve CI Pipeline:
Francis disabled valgrind buildbot builds. Turning stuff off in buildbot #feelsgoodman. https://bugzilla.mozilla.org/show_bug.cgi?id=1278611
Kim enabled Android x86 builds on trunk running in TC. https://bugzilla.mozilla.org/show_bug.cgi?id=1174206
See you next week!
tl;dr: We’ll be shutting down the Firefox mirrors on Bitbucket.
A long time ago we started an experiment to see if there was any support for developing Mozilla products on social coding sites. Well, the community-at-large has spoken, with the results many predicted:
There was so much interest from GitHub users that the site has been a clear win from the very start. There are currently several efforts underway to make it easier for contributors on GitHub to contribute directly to Firefox (which remains hosted on our mercurial server).
However, there hasn’t been any similar interest on Bitbucket. Only one person ever forked even one of the repositories. In addition, the Firefox repos have grown to exceed the 2GiB maximum size that is supported by Bitbucket. (And has long been over the 1GiB maximum free hosting size, which means community members would need to pay to have a copy there.)
As we replace the legacy vcs-sync system with modern vcs-sync, we will stop updating the repositories on Bitbucket. As we stop updates, we will remove the repositories to avoid confusion about their status.
Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.
- Announcing Rust 1.10.
- Refining Rust's RFCs.
- Translating C to Rust using Corrode (and how you can help).
- Rust and Rest. Lessons Learned from talking to Sentry's HTTP API from Rust.
- Pairing cryptography in Rust.
- Shave some time from your Travis builds.
- Overview of open source game engines in Rust.
- Rust & Docker in production @ Coursera.
- Integer 32, a Rust consultancy startup by Carol Nichols and Jake Goulding.
- Dyon 0.8 is released.
- Corrode. Automatic semantics-preserving translation from C to Rust.
- Rustls. A new, modern TLS library written in Rust.
- rulinalg. A linear algebra library in Rust designed for machine learning, extracted from rusty-machine.
- task-hookrs. A Rust library for writing taskwarrior hooks and general interfacing with taskwarrior.
- jsf. A simple JSON file store.
- This week in Rust docs 12.
No crate was selected for CotW.
Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
- [easy] cargo: Warn on duplicate entry points for libs and bins.
- [easy] cargo: Can't specify precise crate version if there are multiple versions.
- [easy] cargo: Add --dry-run to cargo publish.
- [easy] rust: E0502 not rendered correctly.
- [easy] rust: Move some tests into run-pass-valgrind.
- [moderate] rust: Convert compiler-rt builtins to a Rust crate.
- [moderate] rust: Teach rustc to print CPU, etc. features.
- [easy] rustfmt: Overlong function signatures.
- [easy] rustfmt: Overlong impl signatures.
- [easy] rust-by-example: Add a Mutex chapter.
- [easy] rust-by-example: Add an Arc chapter.
- [easy] imag: Make imag forward --debug and --verbose to subcommands.
- [moderate] imag: Add Iterator-shortcut for iter.fold(Ok(()), ...).
If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core
100 pull requests were merged in the last two weeks.
- Implement workspaces in Cargo.
- Drive trans from the output of the translation item collector.
- std: Stabilize APIs for the 1.11.0 release.
- Update jemalloc to include a fix for startup issues on OSX 10.12.
- Cargo: Add support for RUSTDOCFLAGS.
- Add x86 intrinsics for bit manipulation (BMI 1.0, BMI 2.0, and TBM).
- Added a pretty printer for &mut slices.
- Use lazy iterator in vec/slice gdb pretty printers.
- Introducing TokenStreams and TokenSlices for procedural macros.
New contributors:

- Hariharan R
- Ivan Nejgebauer
- Jared Manning
- Kaivo Anastetiks
- Mike Hommey
- Phlogistic Fugu
- Sam Payson
- Ximin Luo
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
- RFC 1327: Dropck Eyepatch. Refine the unguarded-escape-hatch from RFC 1238 (nonparametric dropck).
- Default and expanded errors for rustc.
- Dedicated strike team to resolve unsafe code guidelines.
- RFC process for formatting style and Rustfmt defaults.
- Introduce more conventions around documenting Rust projects.
- Allow all literals in attributes.
- Add global_asm! for module-level inline assembly.
- Exclude macros from importing with #[macro_use(not(...))].
- Add space-friendly arguments. Add -C link-arg and -C llvm-arg which allow you to pass along argument with spaces.
- Add support for 128-bit integers.
- Add a used attribute to prevent symbols from being discarded.
- Add language support for bitfields.
- Add an unwrap! macro.
- Semantic "private in public" enforcement. Enforce that public APIs do not expose private definitions at the semantic level, while allowing the use of private aliases and blanket implementations for convenience and automation.
- Disjointness based on associated types. During coherence checking, when determining if the receivers of two impls are disjoint, treat bounds with disjoint associated types as mutually exclusive bounds.
- 7/13. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
- 7/13. Rust Boulder/Denver - Hello, Rust!.
- 7/14. Rust release triage at #rust-triage on irc.mozilla.org.
- 7/14. Columbus Rust Society: Monthly Meeting.
- 7/18. Rust Paris Meetup #30.
- 7/20. Rust Community Team Meeting at #rust-community on irc.mozilla.org.
Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week
No quote was selected for QotW.
This piece is about too few names for too many things, as well as a kind of origin story for a web standard. For the past year or so, I’ve been contributing to a Mozilla project broadly named Marionette — a set of tools for automating and testing Gecko-based browsers like Firefox. Marionette is part of a larger browser automation universe that I’ve managed to mostly ignore so far, but the time has finally come to make sense of it.
The main challenge for me has been nailing down imprecise terms that have changed over time. From my perspective, “Marionette” may refer to any combination of two to four things, and it’s related to equally vague names like “Selenium” and “WebDriver”… and then there are things like “FirefoxDriver” and “geckodriver”. Blargh. Untangling needed.
Aside: integrating a new team member (like, say, a volunteer contributor or an intern) is the best! They ask big questions and you get to teach them things, which leads to filling in your own knowledge. Everyone wins.

The W3C WebDriver Specification
Okay, so let’s work our way backwards, starting from the future. (“The future is now.”) We want to remotely control browsers so that we can do things like write automated tests for the content they run or tests for the browser UI itself. It sucks to have to write the same test in a different way for each browser or each platform, so let’s have a common interface for testing all browsers on all platforms. (Yay, open web standards!) To this end, a group of people from several organizations is working on the WebDriver Specification.
The server side of the protocol, which might be implemented as a browser add-on or might be built into the browser itself, listens for commands and sends responses. The client side, such as a Python library for automating browsers, sends commands and processes the responses.
This broad idea is already implemented and in use: an open source project for browser automation, Selenium WebDriver, became widely adopted and is now the basis for an open web standard. Awesome! (On the other hand, oh no! The overlapping names begin!)

Selenium WebDriver
Where does this WebDriver concept come from? You may have noticed that lots of web apps are tested across different browsers with Selenium — that’s precisely what it was built for back in 2004-2009.[2] One of its components today is Selenium WebDriver.
Selenium WebDriver provides APIs so that you can write code in your favourite language to simulate user actions like this:

```python
client.get("https://www.mozilla.org/")
link = client.find_element_by_id("participate")
link.click()
```
Underneath that API, commands are transmitted via JSON over HTTP, as described in the previous section. A fair name for the protocol currently implemented in Selenium is Selenium JSON Wire Protocol. We’ll come back to this distinction later.
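Underneath a call like `find_element_by_id`, the client just issues HTTP requests with small JSON bodies. A simplified sketch of how those wire commands are shaped (paths and keys follow the Selenium JSON Wire Protocol; error handling and most capabilities omitted):

```python
import json

def new_session_command(capabilities=None):
    """Create a session: the first command a WebDriver client sends.
    Returns (method, path, body) for the underlying HTTP request."""
    body = {"desiredCapabilities": capabilities or {}}
    return ("POST", "/session", json.dumps(body))

def find_element_command(session_id, using, value):
    """find_element_by_id("participate") becomes a wire command like this,
    sent to the session created above."""
    body = {"using": using, "value": value}
    return ("POST", "/session/%s/element" % session_id, json.dumps(body))
```

The server replies with JSON too, e.g. an element reference that later commands (like click) are addressed to.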
As mentioned before, we need a server side that understands incoming commands and makes the browser do the right thing in response. The Selenium project provides this part too. For example, they wrote FirefoxDriver which is a Firefox add-on that takes care of interpreting WebDriver commands. There’s also InternetExplorerDriver, AndroidDriver and more. I imagine it takes a lot of effort to keep these browser-specific “drivers” up-to-date.

Then something cool happened
A while after Selenium 2 was released, browser vendors started implementing the Selenium JSON Wire Protocol themselves! Yay! This makes a lot of sense: they’re in the best position to maintain the server side and they can build the necessary behaviour directly into the browser.
Selenium WebDriver (a.k.a. Selenium 2, WebDriver) provides a common API, protocol and browser-specific “drivers” to enable browser automation. Browser vendors started implementing the Selenium JSON Wire Protocol themselves, thus gradually replacing some of Selenium’s browser-specific drivers. Since WebDriver is already being implemented by all major browser vendors to some degree, it’s being turned into a rigorous web standard, and some day all browsers will implement it in a perfectly compatible way and we’ll all live happily ever after.
Is the Selenium JSON Wire Protocol the same as the W3C WebDriver protocol? Technically, no. The W3C spec is describing the future of WebDriver,[5] but it’s based on what Selenium WebDriver and browser vendors are already doing. The goal of the spec is to coordinate the browser automation effort and make sure we’re all implementing the same interface; each command in the protocol should mean the same thing across all browsers.

A Fresh Look at the Marionette Family
Now that I understand the context, my view of Marionette’s components is much clearer.
- Marionette Server together with geckodriver make up Mozilla’s implementation of the W3C WebDriver protocol.
- Marionette Server is built directly into Firefox (into the Gecko rendering engine) and it speaks a slightly different protocol. To make Marionette truly WebDriver-compatible, we need to translate between Marionette’s custom protocol and the WebDriver protocol, which is exactly what geckodriver does. The Selenium client can talk to geckodriver, which in turn talks to Marionette Server.
- As I mentioned earlier, the plan for Selenium 3 is to have geckodriver replace Selenium’s FirefoxDriver. This is an important change: since FirefoxDriver is a Firefox add-on, it has limitations and is going to stop working altogether with future releases.
- Marionette Client is Mozilla’s official Python library for remote control of Gecko, but it’s not covered by the W3C WebDriver spec and it’s not compatible with WebDriver in general. Think of it as an alternative to Selenium’s Python client with Gecko-specific features. Selenium + geckodriver should eventually replace Marionette Client, including the Gecko-specific features.
- The Marionette project also includes tools for integrating with Mozilla’s intricate test infrastructure: Marionette Test Runner, a.k.a. the Marionette test harness. This part of the project has nothing to do with WebDriver, really, except that it knows how to run tests that depend on Marionette Client. The runner collects the tests you ask for, takes care of starting a Marionette session with the right browser instance, runs the tests and reports the results.
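To make the translation that geckodriver performs a little more concrete, here’s a hypothetical Python sketch. Marionette speaks a message-based protocol of its own; the four-element `[type, id, name, params]` shape below is modeled on Marionette’s protocol, but treat the specific command names and message layout as assumptions for illustration.

```python
import json

# Hypothetical sketch of the kind of repackaging geckodriver does:
# an incoming WebDriver HTTP command becomes a Marionette-style message.
# The [type, message_id, name, params] shape is modeled on Marionette's
# protocol; the command names are illustrative assumptions.
def webdriver_to_marionette(path, params, message_id):
    """Map a WebDriver endpoint to a Marionette command message."""
    command_map = {
        "/url": "WebDriver:Navigate",  # assumed Marionette command name
        "/back": "WebDriver:Back",     # assumed Marionette command name
    }
    endpoint = "/" + path.rsplit("/", 1)[-1]
    name = command_map[endpoint]
    # Message type 0 marks this as a command (as opposed to a response).
    return json.dumps([0, message_id, name, params])

msg = webdriver_to_marionette("/session/abc123/url",
                              {"url": "https://example.org/"}, 1)
print(msg)
```

The real geckodriver does considerably more (session management, capability negotiation), but the core job is this kind of protocol translation in both directions.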
As you can see, “Marionette” may refer to many different things. I think this ambiguity will always make me a little nervous… Words are hard, especially as a loose collection of projects evolves and becomes unified. In a few years, the terms will firm up. For now, let’s be extra careful and specify which piece we’re talking about.

Acknowledgements
Thanks to David Burns for patiently answering my half-baked questions last week, and to James Graham and Andreas Tolfsen for providing detailed and delightful feedback on a draft of this article. Bonus high-five to Anjana Vakil for contributions to Marionette Test Runner this year and for inspiring me to write this post in the first place.
I give a range of years because Selenium WebDriver is a merger of two projects that started at different times. ↩
Abbreviated Selenium history and roadmap: Selenium 1 used an old API and mechanism called SeleniumRC, Selenium 2 favours the WebDriver API and JSON Wire Protocol, Selenium 3 will officially designate SeleniumRC as deprecated (“LegRC”, harhar), and Selenium 4 will implement the authoritative W3C WebDriver spec. ↩
For example, until recently Selenium WebDriver only included commands that are common to all browsers, with no way to use features that are specific to one. In contrast, the W3C WebDriver spec allows the possibility of extension commands. Extension commands are being implemented in Selenium clients right now! The future is now! ↩
Fun fact: Marionette is not only used for “Marionette Tests” at Mozilla. The client/server are also used to instrument Firefox for other test automation like mochitests and Web Platform Tests. ↩
In the lead-up to the London all hands we had a Town Hall where Mark Mayo and Nick Nguyen previewed the three year strategy for Firefox. That talk mostly covered an emerging area of focus and investment we’re calling the Context Graph.
This last week, Nick posted a vision for the Context Graph over at Medium. If you haven’t, I encourage you to go read it at medium.com/@osunick
So what is the Context Graph? The Context Graph is an understanding of how pages on the web are connected to each other and to a user’s current context. With the Context Graph, we’re going to build a recommendation engine for the Web and features that help people discover relevant content outside of the popular search and social silos.
What does that look like in practice? Well, if you’re learning about how to do something new, like bike repair, our recommender features should help you learn bike repair based on others who have already taken the same journey on the Web. If you’re on YouTube watching a music video, Firefox should help you find the top lyrics or commentary sites that embed or link to that YouTube video. Or, if you’re walking into a WalMart, our mobile apps should automatically show you WalMart’s website or perhaps a WalMart deals and coupons site.
Building a recommendation engine for the Web is a large project that will take time and effort, but we believe the payoff for users and for the health of the Open Web will be well worth it.
The Monday Project Meeting
Many people have now heard of the EFF-backed free certificate authority Let's Encrypt. Not only is it free of charge, it has also introduced a fully automated mechanism for certificate renewals, eliminating a tedious chore that has burdened busy sysadmins everywhere for many years.
These two benefits - elimination of cost and elimination of annual maintenance effort - imply that server operators can now deploy certificates for far more services than they would have previously.
For example, somebody hosting basic Drupal or Wordpress sites for family, friends and small community organizations can now offer them all full HTTPS encryption, WebRTC, SIP and XMPP without having to explain annual renewal fees or worry about losing time in their evenings and weekends renewing certificates manually.
Even people who were willing to pay for a single certificate for their main web site may have balked at the expense and ongoing effort of maintaining certificates for their SMTP mail server, IMAP server, VPN gateway, SIP proxy, XMPP server, WebSocket and TURN servers too. Now they can all have certificates.

Early efforts at SIP were doomed without encryption
In the early days, SIP messages were transported across the public Internet in UDP datagrams without any encryption. SIP itself wasn't originally designed for NAT, and a variety of home routers were built with "NAT helper" algorithms that would detect and modify SIP packets to try to work through NAT. Sadly, in many cases these attempts to help actually clashed with each other and led to further instability. Meanwhile, rogue ISPs could easily detect and punish VoIP users by blocking their calls or even cutting their DSL lines. Operating SIP over TLS, usually on the HTTPS port (TCP port 443), has been an effective way to quash all of these different issues.
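Part of why middleboxes could meddle so easily is that an unencrypted SIP request is plain text with the addresses sitting right in its headers. Here's a minimal illustrative sketch (the addresses are made up, and the message is simplified rather than fully RFC-complete) showing what a "NAT helper" would see and rewrite:

```python
# A SIP request is plain text; sent unencrypted over UDP, any middlebox
# can spot it and rewrite the addresses in its headers, which is exactly
# what "NAT helpers" did. Illustrative sketch with made-up addresses.
def build_register(user, domain, private_ip):
    """Build a minimal (simplified, illustrative) SIP REGISTER request."""
    return "\r\n".join([
        "REGISTER sip:{} SIP/2.0".format(domain),
        "Via: SIP/2.0/UDP {}:5060".format(private_ip),        # private address leaks here
        "From: <sip:{}@{}>".format(user, domain),
        "To: <sip:{}@{}>".format(user, domain),
        "Contact: <sip:{}@{}:5060>".format(user, private_ip),  # and here
        "Content-Length: 0",
        "",
        "",
    ])

print(build_register("alice", "example.org", "192.168.1.10"))
```

Wrap the same bytes in TLS and the middlebox sees only ciphertext: nothing to detect, nothing to rewrite, nothing to block.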
While the example of SIP is one of the most extreme, it helps demonstrate the benefits of making encryption universal to ensure stability and cut out the "man-in-the-middle", regardless of whether he is trying to help or hinder the end user.

Is one certificate enough?
Modern SIP, XMPP and WebRTC require additional services, such as TURN servers and WebSocket servers. If they are all operated on port 443, then it is necessary to use a different hostname for each of them (e.g. turn.example.org and ws.example.org). Each hostname requires its own certificate. Let's Encrypt can provide those additional certificates too, without additional cost or effort.

The future with Let's Encrypt
The initial version of the Let's Encrypt client, certbot, fully automates the workflow for people using popular web servers such as Apache and nginx. The manual or certonly modes can be used for other services but hopefully certbot will evolve to integrate with many other popular applications too.
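As a sketch of what the certonly workflow looks like in practice (the webroot path and the hostname are examples, not taken from any real deployment), obtaining a certificate for a TURN server's hostname might look like this:

```shell
# Illustrative sketch: obtain a certificate for an RTC hostname with
# certbot's certonly mode. Paths and domain names are examples only.

# If a web server already serves this domain, answer the challenge
# through its document root:
certbot certonly --webroot -w /var/www/html -d turn.example.org

# If no web server is running, let certbot bind to port 80 itself
# for the duration of the challenge:
certbot certonly --standalone -d turn.example.org
```

Either way, the resulting certificate can then be pointed at by the TURN or WebSocket server's own TLS configuration.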
Currently, Let's Encrypt's certbot tool issues certificates to servers running on TCP port 443 or 80. These are privileged ports, whereas any port above 1023, including the default ports used by applications such as SIP (5061), XMPP (5222, 5269) and TURN (5349), is not. As long as certbot maintains this policy, it is generally necessary to either run a web server for the domain associated with each certificate or run the services themselves on port 443. (There are other mechanisms for domain validation, and various other clients support different subsets of them.) Running the services themselves on port 443 turns out to be a good idea anyway, as it ensures that RTC services can be reached through HTTP proxy servers that refuse to let the HTTP CONNECT method access any other ports.
Many configuration tasks are already scripted during the installation of packages on a GNU/Linux distribution (such as Debian or Fedora) or when setting up services using cloud images (for example, in Docker or OpenStack). Due to the heavily standardized nature of Let's Encrypt and the widespread availability of the tools, many of these package installation scripts can be easily adapted to find or create Let's Encrypt certificates on the target system, ensuring every service is running with TLS protection from the minute it goes live.
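As a small sketch of the glue such provisioning scripts need, here's how one might locate an existing certificate for a domain. The `live/<domain>/fullchain.pem` and `privkey.pem` layout is certbot's usual default, but treat the paths as an assumption of this example:

```python
import os

# Sketch: find an existing Let's Encrypt certificate/key pair for a
# domain under certbot's usual live/ directory layout. The layout
# (live/<domain>/fullchain.pem and privkey.pem) is certbot's default,
# but treat it as an assumption of this example.
def find_certificate(domain, base="/etc/letsencrypt/live"):
    """Return (fullchain, privkey) paths if both exist, else None."""
    cert_dir = os.path.join(base, domain)
    fullchain = os.path.join(cert_dir, "fullchain.pem")
    privkey = os.path.join(cert_dir, "privkey.pem")
    if os.path.isfile(fullchain) and os.path.isfile(privkey):
        return (fullchain, privkey)
    return None
```

A package's post-install script could call this and, if it returns None, fall back to requesting a new certificate before starting the service, so the daemon never comes up without TLS.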
If you have questions about Let's Encrypt for RTC or want to share your experiences, please come and discuss it on the Free-RTC mailing list.