
Hacks.Mozilla.Org: MDN Changelog for March 2018

Mozilla planet - vr, 06/04/2018 - 17:01

Editor’s note: A changelog is “a log or record of all notable changes made to a project. [It] usually includes records of changes such as bug fixes, new features, etc.”

Publishing a changelog is kind of a tradition in open source, and a long-time practice on the web. We thought readers of Hacks and folks who use and contribute to MDN Web Docs would be interested in learning more about the work of the MDN engineering team, and the impact they have in a given month. We’ll also introduce code contribution opportunities, interesting projects, and new ways to participate.

Done in March

Here’s what happened in March to the code, data, and tools that support MDN Web Docs:

Here’s the plan for April:

Improved compatibility data at the Hack on MDN event

In March, the MDN team focused on the Browser Compatibility Data project, meeting in Paris with dozens of collaborators for the “Hack on MDN” event. The work resulted in 221 Pull Requests in the BCD repo, as well as additional work in other repos. See Jean-Yves Perrier’s Hack on MDN article for details about the tools created and data updated during the week.

We reviewed and merged over 250 PRs in March. The compatibility data conversion jumped from 57% to 63% complete. Jeremie Patonnier led the effort to convert SVG data (BCD PR 1371). API data was also a focus, both converting the existing MDN tables and using other data sources to fill out missing APIs.

There are now 264 open PRs in the BCD repo, about a month’s worth, and the contributions keep coming in. BCD is one of the most active GitHub repositories at Mozilla this year by pull requests (880) and by authors (95), second only to rust (1268 PRs, 267 authors). The rate of contributions continues to increase, so BCD may be #1 for Q2 2018.

Graph that shows weekly commits to the browser-compat-data project growing from about 25 commits per week to over 60 per week in March.

Experimented with brotli compression

Brotli is a Google-developed compression algorithm designed for serving web assets, which can outperform the widely used gzip algorithm. By the end of 2017, all major browsers supported the br content-encoding method, and Florian requested a test of brotli on MDN in bug 1412891. Maton Anthony wrote a Python 2-compatible brotli middleware in PR 4686, with the default compression level of 11. This went live on March 7th.

Brotli does compress our content better than gzip. The homepage goes from 36k uncompressed to 9.5k with gzip to 7k with brotli. The results are even better on wiki pages like the CSS display page, which goes from 144k uncompressed to 20k with gzip and 15k with brotli.

However, brotli was a net negative for MDN performance. Average response time measured at the server was slower, going from 75 ms to 120 ms, due to the increased load of the middleware. Google Analytics showed a 6% improvement in page download time (1 ms better), but a 23% decline in average server response time (100 ms worse). We saw no benefit for static assets, because CloudFront handles gzip compression and ignores brotli.

Anthony adjusted the middleware to compression level 5 (Kuma PR 4712), which a State of Brotli article found to be comparable to gzip level 9 in compression time while still producing smaller assets. When this went live on March 26th, we saw similar results, with our response time returning to pre-brotli levels, but with a slight improvement in HTML size when br was used.

Graph of average server response time from New Relic, showing a doubling from about 50 ms to 100 ms around March 7, then back to 50 ms after March 21.
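
To make the quality-level trade-off concrete, here is a minimal sketch using Node’s built-in zlib bindings. This is purely illustrative (the actual MDN middleware is Python), and the exact sizes depend on the input:

const zlib = require("zlib");

// a stand-in for a large HTML page such as the CSS display article
const html = Buffer.from("<html><body>example markup</body></html>".repeat(2000));

// gzip at its maximum level, the usual baseline
const gzipped = zlib.gzipSync(html, { level: 9 });

// brotli at quality 11 (maximum, CPU-heavy) and quality 5 (much faster)
const br11 = zlib.brotliCompressSync(html, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 11 },
});
const br5 = zlib.brotliCompressSync(html, {
  params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 5 },
});

console.log({ raw: html.length, gzip: gzipped.length, br11: br11.length, br5: br5.length });

A quick local run of something like this makes it easy to compare the speed and size trade-off yourself.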

When we ship CloudFront in April, the CDN will take care of compression, and brotli will go away again. It looks like a promising technology, but requires a CDN that supports it, and works best with a “pre-compress” workflow rather than a “when requested” middleware workflow, which means it won’t be a good fit for MDN for a while.

Shipped tweaks and fixes

There were 370 PRs merged in March:

137 of these were from first-time contributors:

Other significant PRs:

Planned for April

We’ll be busy in April with the output of the Hack on MDN event, reviewing PRs and turning prototypes into production code. We’ll continue working on our other projects as well.

Get HTML interactive examples ready for production

The interactive examples are transitioning from rapid experimentation to a shipping feature. Will Bamberg published Bringing interactive examples to MDN in March, which details how this project went from idea to implementation.

The interactive examples team will firm up the code and processes, to build a better foundation for contributors and for future support. At the same time, they will finalize the design for HTML examples, which often require a mix of CSS and HTML to illustrate usage.

Ship the CDN and Django 1.11

Ryan Johnson finished up caching headers for the 60 or so MDN endpoints, and he and Dave Parfitt added CloudFront in front of the staging site. We’ll spend some time with automated and manual testing, and then reconfigure production to also be behind a CDN.

We’ve missed the April 1 deadline for the Django 1.11 update. We plan to focus on the minimum tasks needed to update in April, and will defer other tasks, like updating out-of-date but compatible 3rd-party libraries, until later.

Improve the front-end developer experience

Schalk Neethling has had some time to get familiar with the front-end code of MDN, using tools like Sonarwhal and Chrome devtools’ performance and network panels to find performance issues faced by Chrome visitors to MDN. He’s put together a list of changes that he thinks will improve development and performance.

Analyzing the performance of developer.mozilla.org with Chrome developer tools, showing how long it takes to download assets, run javascript, etc. JS starts running around 1 second, and the page is ready around 3 seconds.

One quick win is to replace FontAwesome’s custom font face with SVG icons. Instead of loading all the icons in a single file, only the icons used by a page will be loaded. The SVG will be included in the HTML, avoiding additional HTTP requests. SVG icons are automatically scaled, so they will look good on all devices. This should also improve the development experience. It’s easy to make mistakes when using character encodings like "\f108\0020\eae6", and much clearer to use names like "icon-ie icon-mobile".
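
As a rough sketch of what the named-icon approach can look like (the icon names here are hypothetical, not MDN’s actual markup), an inline SVG symbol can be referenced by a readable identifier:

// build an icon from an SVG <symbol> already inlined in the page,
// referencing it by a readable name instead of a codepoint like "\f108"
function icon(name) {
  const SVG_NS = "http://www.w3.org/2000/svg";
  const svg = document.createElementNS(SVG_NS, "svg");
  const use = document.createElementNS(SVG_NS, "use");
  use.setAttribute("href", `#icon-${name}`); // e.g. <symbol id="icon-mobile">
  svg.appendChild(use);
  svg.classList.add("icon");
  return svg;
}

document.body.appendChild(icon("mobile"));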

Schalk is thinking about other challenges to front-end development, and how to bring in current tools and techniques. He’s cleaning up code to make it more consistent, and formalizing the rules in MDN Fiori, a front-end style guide and pattern library. This may be the first step to switching to a UI component structure, such as React.

A bigger improvement would be updating the front-end build pipeline. MDN’s build system was developed years ago (by Python developers) when JavaScript was less mature, and the system hasn’t kept up. Webpack is widely used to bundle code and assets for deployment, and a new pipeline could allow developers to use a broader ecosystem of tools like Babel to write modern JavaScript.
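
For illustration, a minimal webpack configuration of the kind such a pipeline might start from could look like this (the entry point and output paths are hypothetical, not MDN’s actual layout):

const path = require("path");

module.exports = {
  entry: "./js/main.js", // hypothetical entry point
  output: {
    filename: "bundle.js",
    path: path.resolve(__dirname, "build"),
  },
  module: {
    rules: [
      // run modern JavaScript through Babel so older browsers stay supported
      { test: /\.js$/, exclude: /node_modules/, use: "babel-loader" },
    ],
  },
};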

Finally, we continue to look for the right way to test JavaScript. We’re currently using Selenium to automate testing in a browser environment. We’ve had issues getting a stable set of tools for consistent Selenium testing, and it has proven to be too heavy for unit testing of JavaScript. Schalk has had good experiences with Jest, and wants to start testing MDN JavaScript with Jest in April.
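
For a sense of why Jest is attractive here, a unit test is just a plain Node script, with no browser or Selenium session involved (a minimal sketch with a hypothetical helper module, not actual MDN code):

// sum.js: a hypothetical helper
function sum(a, b) {
  return a + b;
}
module.exports = sum;

// sum.test.js: Jest picks up *.test.js files automatically
const sum = require("./sum");

test("adds two numbers", () => {
  expect(sum(1, 2)).toBe(3);
});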


Support.Mozilla.Org: Proposal: Knowledge Base Spring Cleaning at SUMO – June 2018

Mozilla planet - vr, 06/04/2018 - 16:00
Hi everyone,

People say that spring is a good time to try new things and refresh one’s body and mind. While our site does not really have a body (unless you count the HTML tags…) and its collective mind is made up of all of us (including you, reading these words), it does need refreshing every now and then, mostly due to the nature of the open, living web we are all part of.

That said, let’s get to the details, some of which may sound like Rosana’s post from about 4 years ago.

What’s the proposal about?

The localization coordinators across Mozilla want to consolidate Mozillians and our resources around active locales. In the context of SUMO, this means taking a closer look at the Knowledge Base, which at the moment is available in 76 locales (at least “on paper”).

The proposal is to redirect the mostly inactive locales of the SUMO Knowledge Base to English (or another best equivalent locale). You can find the proposed (draft) list of all the locales here.

  • 23 locales will remain active, with localizers providing good coverage both via Pontoon and SUMO’s Knowledge Base – and no action will be required for them.
  • In 30 cases, the existing (and historically active) localizers will be contacted to decide on reviving the localization effort for SUMO content. If there is enough interest, they will remain active (with a plan to update content). If there is not enough interest, they will be redirected at the end of June.
  • In 23 cases, the locales will be redirected at the end of June due to little or no localization activity over an extended period of time. These can also be reactivated at a later time, if need be.

It is important to note that no content created so far would be deleted.

Why do we want to do this?

There are several reasons behind this proposal:

  1. Fewer locales mean more administrator focus on actually active locales – more time for joint projects or experiments, more attention to the needs of localizers putting a lot of time and effort into making SUMO global on a continuous basis.
  2. Firefox and the main Mozilla pages have a higher priority than SUMO, so for many localizers it’s better to focus on these projects, rather than getting involved with Knowledge Base localization.
  3. The “long tail” locales on SUMO are accessed by a very small number of users each month, so there is little need for serving them at this point.
  4. Revisiting this initiative in 6 months will help us see progress made by local communities in building up their active localizer numbers.

What are the next steps?

The 23 locales counted as “no change” will keep functioning as usual, with more locally driven activities coming this year – check the last section of this L10n blog post for just one of the many possibilities.

During April and May, I will reach out to all the contributors listed in SUMO and Pontoon for the 30 locales that are listed as candidates for the clean up – using Private Messages on SUMO or emails listed in Pontoon. Depending on the answers received, we may be keeping some of these locales online, and coming up with a realistic (but ambitious) localization progress timeline for each of them.

At the end of June (after the All Hands), all the locales that are not active will be redirected to English (or another best equivalent locale).

After that, localization for the redirected locales will be focused on Firefox and other Mozilla properties. If there is interest in reactivating a locale, it will happen according to a “re/launch plan” – including creating or participating in a SUMO Knowledge Base sprint event aimed at localizing at least the 20 most popular articles in the Knowledge Base as the minimum requirement, and additional sprints to localize an additional 80 most popular articles.

Is anything else being cleaned up at this stage?

No, the Knowledge Base is a big enough project for now.

Still, this is just the start of this year’s clean up – we will also look into reviewing and reorganizing our contributor documentation, English Knowledge Base, and other properties containing content relevant to SUMO (for example our MozWiki pages).

Please let us know what you think about this in the comments or on our forums: SUMO / Discourse.


The Rust Programming Language Blog: The Rust Team All Hands in Berlin: a Recap

Mozilla planet - vr, 06/04/2018 - 02:00

Last week we held an “All Hands” event in Berlin, which drew more than 50 people involved in 15 different Rust Teams or Working Groups, with a majority being volunteer contributors. This was the first such event, and its location reflects the current concentration of team members in Europe. The week was a smashing success which we plan to repeat on at least an annual basis.

The impetus for this get-together was, in part, our ambitious plans to ship Rust, 2018 edition, later this year. A week of work-focused facetime was a great way to kick off these efforts!

We’ve also asked attendees to blog and tweet about their experiences using the #RustAllHands hashtag; the Content Team will be gathering up and summarizing this content as well.

Highlights by team

Below we’ll go through the biggest items addressed last week. Note that, as always, reaching consensus in a meeting does not mean any final decision has been made. All major decisions will go through the usual RFC process.

Language design
  • Macros: put together a proposal for the 2018 edition.
    • Stabilize a forward-compatible subset of procedural macros that explicitly opts out of hygiene (by asking all names to be interpreted at the call site).
    • Work out a revised API surface for procedural macros based on what we’ve learned so far.
    • Stabilize importing macros through normal use statements.
    • Alex Crichton will drive the stabilization process.
  • Extern types: worked through remaining issues for stabilization.
  • Improvements to derive: a proposal to make derive more ergonomic in Rust 2018.
  • Best practices: set up a cross-cutting guidelines effort, encompassing the API guidelines but also including style, lints, and Lang Team recommendations.
Libraries
  • SIMD: talked through final steps of stabilization; we hope to stabilize in 1.27.
  • Allocators: developed a conservative proposal for stabilizing global allocators; Simon Sapin has set up a new tracking issue.
Compiler
  • Tool integration: extensive discussion and planning about the needs of compiler clients, composable plugins, and the compiler’s new query architecture.
  • Query compilation: a plan for end-to-end query compilation, i.e. fully-incremental compilation.
  • libsyntax: a long-run plan for a revamped libsyntax, shared amongst a variety of tools.
  • Contribution: improved contribution experience for the compiler.
Community

Documentation
  • Edition planning: determined resources needed for the 2018 edition, what that means for the Rust Bookshelf, and who will be responsible for each.
  • Team blog: “This week in Rust docs” is going to move to a new docs team blog!
  • Doxidize (aka rustdoc2): made initial public release; it’s like https://docusaurus.io/ but for Rust.
  • Intermediate-level docs: contributed to idea generation.
Tools
  • Edition planning, especially for the rustfix tool.
  • Clippy lint audit: developed plan for reaching 1.0 on Clippy this year, based on categorizing lints into Correctness, Performance, Style, Complexity, Pedantry, and Nursery categories.
  • Custom test frameworks: reached consensus on most of the details for the RFC.
  • IDEs: discussed improvements to code completion, stability improvements, and new features like refactoring and auto-import support.
Cargo
  • Xargo integration: making a few more platforms tier 1 addresses the immediate need for embedded; otherwise, plan to go the “std-aware Cargo” route late in 2018. Key insight: will entirely obsolete the concept of “targets” in rustup.
  • Rustup integration: with Xargo integration we can simplify rustup; plan to expose new interface via Cargo late in 2018.
  • Custom registries: ready to stabilize.
  • Profiles: the v2 design is clear, and Aleksey Kladov plans to implement.
  • Public dependencies: significantly revised plan to have more conservative ecosystem impact. Aaron Turon will write a blog post.
  • Build system integration: determined two pieces of functionality to implement to decouple the RLS from Cargo.
  • Project templates: developed a minimal design; withoutboats will write an RFC.
  • Custom workflows: designed workflow customization, which is useful for frameworks; Aaron Turon has written a blog post.
Infrastructure
  • bors queue: hatched and resourced lots of ideas to reduce the PR queue for Rust 2018.
  • crater: pietroalbini is testing a bot for running crater!
  • Travis log bot: TimNN has written a bot to extract errors from Travis logs.
WG: CLI apps
  • WG overview slides.
  • Survey and strategy: dove into survey data and developed strategy from it; posts forthcoming.
  • Distribution: “distribution-friendly” badge on crates.io.
  • Extensible Cargo install: wrote an RFC on-site!
WG: network services
  • WG overview slides.
  • Launching the WG: determined goals for the WG, including async/await, documentation, mid-level HTTP libraries, and the Tower ecosystem. Kickoff announcement coming soon!
  • Async/await: finalized design and stabilization approach for RFCs (blog post and links to RFCs here).
WG: embedded devices
  • WG overview slides.
  • Embedded Rust on stable: addressed all known blockers and several mid-priority issues as well.
  • The Embedded Rust book: defined audience and basic outline.
WG: WebAssembly
  • WG overview slides.
  • 2018 edition planning, including scoping the toolchain and book efforts for the release.
  • JS integration: dove into integrating JS callbacks vs Rust functions in wasm-bindgen.
  • Ecosystem integration: wasm-bindgen now works with CommonJS!
  • Code bloat: reduced the footprint of panicking from 44k to 350 bytes.
Unsafe code guidelines
  • Restarted the WG: dug back into two competing approaches (“validity” and “access”-based), substantially iterated on both. Niko Matsakis and Ralf Jung will be writing blog posts about these ideas.
  • Pressing issues: tackled a few near-term decisions that need to be made, including union semantics, Pin semantics, thread::abort and more.
Web site
  • Domain WG sketching: over a couple of sessions, the four domain-centered working groups (listed above) developed some initial sketches of landing pages for each domain.
Rust reach

New working groups

In addition to the work by existing teams, we had critical mass to form two new working groups:

  • Verification: bringing together folks interested in testing, static analysis, and formal verification.
  • Codegen: work to improve the quality of code rustc generates, both in terms of size and runtime performance.

The Verification WG has been formally announced, and the Codegen WG should be announced soon!

General reflections and notes of appreciation

The combination of having a clear goal – Rust 2018 – and a solid majority of team members present made the All Hands extremely fun and productive. We strive to keep the Rust community open and inclusive through asynchronous online communication, but it’s important to occasionally come together in person. The mix of ambition and kindness at play last week really captured the spirit of the Rust Community.

Of course, an event like this is only possible with a lot of help, and I’d like to thank the co-organizers and Mozilla Berlin office staff:

  • Johann Hofmann
  • Jan-Erik Rediger
  • Florian Gilcher
  • Ashley Williams
  • Martyna Sobczak

as well as all the team leads who organized and led sessions!


The Firefox Frontier: uBlock Origin is Back-to-Back March Addonness Champion

Mozilla planet - do, 05/04/2018 - 23:01

It’s been three weeks and we’ve almost run out of sports metaphors. We’re happy to announce that after three rounds and thousands of votes you have crowned uBlock Origin March … Read more

The post uBlock Origin is Back-to-Back March Addonness Champion appeared first on The Firefox Frontier.


Mozilla VR Blog: Progressive WebXR

Mozilla planet - do, 05/04/2018 - 18:21

Imagine you wanted to have your store’s web page work in 2D, and also take advantage of the full range of AR and VR devices. WebXR will provide the foundation you need to create pages that work everywhere, and let you focus on compelling User Experiences on each of the devices.

In a recent blog post, we touched on one aspect of progressive WebXR, showcasing a version of A-Painter that was adapted to handheld AR and immersive VR. In this post, we will dive a bit deeper into the idea of progressive WebXR apps that are accessible across a much wider range of XR-supported devices.

The WebXR Device API expands on the WebVR API to include a broader range of mixed reality devices (i.e., AR/VR, immersive/handheld). By supporting all mixed reality devices in one API, the Immersive Web community hopes to make it easier for web apps to respond to the capabilities of a user’s chosen device, and present an appropriate UI for AR, VR, or traditional 2D displays.

At Mozilla, this move aligns with experiments we started last fall, when we created a draft WebXR API proposal, built a WebXR polyfill based on it, and published our WebXR Viewer experimental web browser application to the iOS App Store. The WebXR Viewer lets us (and others) experiment with WebXR on iOS, and it is one of the target platforms for the XR Store demo that is the focus of this article. This demo shows how future sites can support the WebXR API across many different devices.

Before introducing the example store we’ve created, I’ll give an overview of the spectrum of devices that a UX strategy for this kind of WebXR-compatible site needs to support.

The spectrum of WebXR displays/realities


Broadly speaking, there are three categories of displays that need to be supported by a responsive WebXR application:

  • current non-WebXR “flat displays” on desktop and handheld devices,
  • “portal displays” where these same screens present the illusion of a portal into a 3D world by leveraging device motion and 3D sensing, and
  • “immersive displays” such as head-worn displays that encompass the user’s senses in the 3D world.
Non-WebXR Displays

Current flat displays, such as desktop monitors, phones and tablets, may not have access to VR/AR capabilities via WebXR, although some will be able to simulate WebXR using a WebXR polyfill. Such desktop and mobile displays will remain the most common ways to consume web content for the foreseeable future.

Mobile devices with 3DoF orientation sensors (that use accelerometers, gyroscopes, and magnetometers to give 3 Degrees of Freedom for the device pose) can simulate monoscopic 3D VR (and AR, if they use getUserMedia to access the video camera on the device), by leveraging the deviceorientation or Orientation Sensor APIs to access the device orientation.
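
For example, a page can implement a basic look-around view with nothing more than the long-standing deviceorientation event (a minimal sketch; wiring the angles into an actual 3D camera is left out):

// read 3DoF orientation on a non-WebXR phone
window.addEventListener("deviceorientation", (event) => {
  // alpha, beta, gamma are rotations in degrees around the z, x, and y axes
  const { alpha, beta, gamma } = event;
  // feeding these into a renderer's camera simulates a monoscopic VR view
  console.log(`alpha=${alpha} beta=${beta} gamma=${gamma}`);
});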

Apps written for "Cardboard" display holders for these mobile devices (i.e., cases that use the phone's screen as their display, such as a Google Cardboard) use the same 3DoF sensors, but render stereoscopic VR on the phone display.

XR Displays

XR displays come in two sorts: AR and VR. The most common XR displays right now are Handheld or "Magic Window" AR, made possible by Apple’s ARKit for iOS (used by our WebXR Viewer) or Google’s ARCore for Android (used by the WebARonARCore experimental browser). These give the user the illusion that the phone is a portal, or magic window, into an AR scene surrounding the user.

While currently less common, optically see-through headsets such as Microsoft’s Hololens provide an immersive 3D experience where virtual content can more convincingly surround the user. Other displays are in development or limited release, such as those from Magic Leap and Meta. We’ve shown the Meta 2 display working with WebVR in Firefox, and hope most upcoming AR displays will support WebXR eventually.

Thanks to WebVR, web-based VR is possible now on a range of immersive VR displays. The most common VR displays are 3DoF Headsets, such as Google Daydream or Samsung Gear VR. 6DoF Headsets (that provide position and orientation, giving 6 degrees of freedom), such as the HTC Vive, Oculus Rift, and Windows Mixed Reality-compatible headsets, can deliver fully immersive VR experiences where the user can move around.

The second category of VR displays is what we call Magic Window VR. These displays use flat 2D displays, but leverage 3D position and orientation tracking provided by the WebXR API to determine the pose of the camera and simulate a 3D scene around the user. Similar to Magic Window AR, these are possible now by leveraging the tracking capabilities of ARKit and ARCore, but not showing video of the world around the user.

In the table below we have listed the common OS, browsers, and devices for each category.

[Table image: the common OSes, browsers, and devices for each display category]

A Case Study: XR Store

Creating responsive web apps that adapt to the full range of non-XR and WebXR devices will allow web developers to reach the widest audience. The challenge is to create a site that provides an appropriate interface for each of the modalities, rather than designing for one modality and simulating it on the others. To demonstrate this, we created a realistic XR Store demo, implementing a common scenario for WebXR: a product sheet on a simulated e-commerce site that allows the visitor to see a product in 3D on a traditional display, in a virtual scene in VR, or in the world around them in AR.


Applying a Progressive Enhancement XR Design Strategy

We selected a representative set of current XR modes to let us experiment with a wide range of user experiences: Desktop / Mobile on Flat Displays, AR Handheld on Portal Displays, and 3DoF / 6DoF Headsets on Immersive Displays.

The image below shows some initial design sketches of the interfaces and interactions for the XR Store application, focusing on the different user experiences for each platform.

[Image: initial design sketches of the XR Store interfaces and interactions for each platform]

Selecting the Best User Interface for Each Platform

In the XR Store demo we used four types of User Interfaces (UIs), borrowing from terminology commonly used in 3D UIs (including in video games):

  • Diegetic: UI elements exist within the 3D world.
  • Spatial: UI elements placed in a fixed position in the 3D world.
  • Non-Diegetic: UI elements in 2D over the 3D scene. These are sometimes known as HUD (Heads-Up Display) UI elements.
  • Page: UI elements in 2D in the DOM of the website.

We also considered using these two types as well:

  • Anchored (VR exclusive): A mixed version of spatial and diegetic where the UI elements are anchored within the user’s gaze, or to the interaction controllers’ positions.
  • Direct manipulation (AR exclusive): UIs to directly manipulate objects using the touch screen, as opposed to manipulating the scene and camera.

For Non-XR / Flat Displays we are using exclusively Page UI controls, and a Non-Diegetic element for a button to enter Fullscreen mode (a common pattern in UIs in 2D applications). We opted not to mix the page elements with diegetic controls embedded in the 3D scene, as page-based metaphors are what current web users would expect to find on a product detail page.


For XR / AR, we start with a very similar interface to the one used on Flat Displays (page controls, a diegetic button to switch to AR mode), but once in AR use Non-Diegetic UI elements over the 3D scene to make it easier to change the product properties. We could also have used a Direct Manipulation UI to scale, rotate, or move the product in the projected reality (but decided not to, in this example, for simplicity).


For XR / VR we are using Diegetic UI for interaction controllers and a Spatial UI to give the user a representation of the selectable part of the product sheet. We could have used an Anchored UI to make it easier to find this panel, as we did in the VR painting interface for A-Painter. We ended up using the same UI for 3DoF and 6DoF VR, but could have changed the UI slightly for these two different cases, such as moving the 2D panel to stay behind-and-to-the-right of the object as a 6DoF user walks around. Another option would have been to have the 2D panel slide into view whenever it goes offscreen, a common metaphor used in applications for Microsoft’s HoloLens. Each of these choices has its pros and cons; the important part is enabling designers to explore different options and use the one that is more appropriate for their particular use case.


Technical Details

The source code for the XR Store demo currently uses a custom version of A-Frame combined with our aframe-xr WebXR binding for A-Frame. The aframe-xr bindings are based on our experimental webxr-polyfill and our three-xr WebXR bindings for three.js.

To try it out, visit XR Store on the appropriate device and browser. To try the 3DoF VR mode, you can use Google Chrome on an Android phone supporting Daydream, or the Samsung Internet/Oculus Browser on a phone supporting Gear VR. The demo supports any 6DoF VR device that works with Firefox, such as the HTC Vive or Oculus Rift, or Windows Mixed Reality headsets with Edge.

If you want to try the application in AR mode, you can use any iOS device that supports ARKit with our experimental WebXR Viewer app, or any Android device that supports ARCore with Google’s experimental WebARonARCore (which has some limitations around entering and leaving AR mode).

Future Challenges

This demo is an exploration of how the web could be accessed in the near future, where people will be able to connect from multiple devices with very diverse capabilities. Beyond the platforms implemented here, we will soon face an explosion of AR see-through head-worn displays that will offer new ways of interacting with our content. Such displays will likely support voice commands and body gestures as input, rather than 3D controllers like their immersive VR counterparts. One day, people may have multiple devices they use simultaneously: instead of visiting a web page on their phone and then doing AR or VR on that phone, they may visit on their phone and then send the page to their AR or VR headset, and expect the two devices to coordinate with each other.

One interface we intentionally didn't explore here is the idea of presenting the full 2D web page in 3D (AR or VR) and having the product "pop" out of the 2D page into the 3D world. As web browsers evolve to display full HTML pages in 3D, such approaches might be possible, and desirable in some cases. One might imagine extensions to HTML that mark parts of the page as "3D capable" and provide 3D models that can be rendered instead of 2D content.

Regardless of how the technology evolves, designers should continue to focus on offering visitors the best options for each modality, as they do with 2D pages, rather than offering all the possible UI options in every possible modality. Designers should still focus on their users’ experience, and on offering the best support for each of the possible displays that users might have, building on the Progressive Enhancement design strategy popular on today’s 2D web.

Today, the Web is a 2D platform accessible by all, and with the WebXR API, we will soon be using it to connect with one another in VR and AR. As we move toward this future, supporting the widest range of devices will continue to be a critical aspect of designing experiences for the web.


The Firefox Frontier: Facebook Container extension now includes Instagram and Facebook Messenger

Mozilla planet - do, 05/04/2018 - 18:15

To help you control the amount of data Facebook can gather about you, we have updated the Facebook Container extension to include Instagram and Facebook Messenger. This way, users of … Read more

The post Facebook Container extension now includes Instagram and Facebook Messenger appeared first on The Firefox Frontier.


Hacks.Mozilla.Org: What Makes a Great Extension?

Mozilla planet - do, 05/04/2018 - 16:40

We’re in the middle of our Firefox Quantum Extensions Challenge and we’ve been asking ourselves: What makes a great extension?

Great extensions add functionality and fun to Firefox, but there’s more to it than that. They’re easy to use, easy to understand, and easy to find. If you’re building one, here are some simple steps to help it shine.

Make It Dynamic

Firefox 57 added dynamic themes. What does that mean? They’re just like standard themes that change the look and feel of Firefox, but they can change over time. Create new themes for daytime, nighttime, private browsing, and all your favorite things.

Mozilla developer advocate Potch created a wonderful video explaining how dynamic themes work in Firefox.
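
As a minimal sketch of the idea (assuming the "theme" permission in manifest.json, and using the Firefox 57-era color keys), an extension can re-theme the browser based on the time of day:

function applyTheme() {
  const hour = new Date().getHours();
  const night = hour >= 20 || hour < 6;
  browser.theme.update({
    colors: {
      accentcolor: night ? "#0c0c0d" : "#ffffff", // frame color
      textcolor: night ? "#ffffff" : "#0c0c0d", // text drawn on the frame
    },
  });
}

applyTheme();
setInterval(applyTheme, 10 * 60 * 1000); // re-evaluate every 10 minutes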

Make It Fun

Browsing the web is fun, but it can be downright hilarious with an awesome extension. Firefox extensions support JavaScript, which means you can create and integrate full-featured games into the browser. Tab Invaders is a fun example. This remake of the arcade classic Space Invaders lets users blast open tabs into oblivion. It’s a cathartic way to clear your browsing history and start anew.

Tab Invaders is an extension game that harks back to Space Invaders, and lets you blast inactive open tabs.

But you don’t have to build a full-fledged game to have fun. Tabby Cat adds an interactive cartoon cat to every new tab. The cats nap, meow, and even let you pet them. Oh, and the cats can wear hats.

An image from Tabby Cat, a playful extension that puts a kitty cat on every tab.

Make It Functional

A fantastic extension helps users do everyday tasks faster and more easily. RememBear, from the makers of TunnelBear, remembers usernames and passwords (securely) and can generate new strong passwords. Tree Style Tab lets users order tabs in a collapsible tree structure instead of the traditional tab structure. The Grammarly extension integrates the entire Grammarly suite of writing and editing tools in any browser window. Excellent extensions deliver functionality. Think about ways to make browsing the web faster, easier, and more secure when you’re building your extension.

RememBear is an add-on companion to the RememBear app.

Make It Firefox

The Firefox UI is built on the Photon Design System. A good extension will fit seamlessly into the UI design language and seem to be a native part of the browser. Guidelines for typography, color, layout, and iconography are available to help you integrate your extension with the Firefox browser. Try to keep edgy or unique design elements apart from the main Firefox UI elements and stick to the Photon system when possible.

Make It Clear

When you upload an extension to addons.mozilla.org (the Firefox add-ons site), pay close attention to its listing information. A clear, easy-to-read description and well-designed screenshots are key. The Notebook Web Clipper extension is a good example of an easy-to-read page with detailed descriptions and clear screenshots. Users know exactly what the extension does and how to use it. Make it easy for users to get started with your extension.

Make It Fresh

Firefox 60, now available in Firefox Beta, includes a host of brand-new APIs that let you do even more with your extensions. We’ve cracked open a cask of theme properties that let you control more parts of the Firefox browser than ever before, including tab color, toolbar icon color, frame color, and button colors.

The tabs API now supports a tabs.captureTab method that can be passed a tabId to capture the visible area of the specified tab. There are also new or improved APIs for proxies, network extensions, keyboard shortcuts, and messages.
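
For instance, a tab manager could use tabs.captureTab to build thumbnails (a minimal sketch; the method needs the "<all_urls>" permission):

// capture the visible area of a given tab as a data: URL
async function thumbnailTab(tabId) {
  const dataUrl = await browser.tabs.captureTab(tabId);
  const img = document.createElement("img");
  img.src = dataUrl;
  document.body.appendChild(img); // e.g. into the extension's own page
}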

For a full breakdown of all the new improvements to extension APIs in Firefox 60, check out Firefox engineer Mike Conca’s excellent post on the Mozilla Add-ons Blog.

Submit Your Extension Today

The Quantum Extensions Challenge is running until April 15, 2018. Visit the Challenge homepage for rules, requirements, tips, tricks, and more. Prizes will be awarded to the top extensions in three categories: Games & Entertainment, Dynamic Themes, and Tab Manager/Organizer. Winners will be awarded an Apple iPad Pro 10.5” Wifi 256GB and be featured on addons.mozilla.org. Runners up in each category will receive a $250 USD Amazon gift card. Enter today and keep making awesome extensions!


Eric Shepherd: Results of the MDN “Internal Link Optimization” SEO experiment

Mozilla planet - do, 05/04/2018 - 16:08

Our fourth and final SEO experiment for MDN, to optimize internal links within the open web documentation, is now finished. Optimizing internal links involves ensuring that each page, in particular the ones whose search engine results page (SERP) positions we want to improve, is easy to find.

This is done by ensuring that each page is linked to from as many topically relevant pages as possible. In addition, it should in turn link outward to other relevant pages. The more quality links we have among related pages, the better our position is likely to be. The objective, from a user’s perspective, is to ensure that even if the first page they find doesn’t answer their question, it will link to a page that does (or at least might help them find the right one).

Creating links on MDN is technically pretty easy. There are several ways to do it, including:

  • Selecting the text to turn into a link and using the toolbar’s “add link” button
  • Using the “add link” hotkey (Ctrl-K or Cmd-K)
  • Any one of a large number of macros that generate properly-formatted links automatically, such as the domxref macro, which creates a link to a page within the MDN API reference; for example: {{domxref(“RTCPeerConnection.createOffer()”)}} creates a link to https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/createOffer, which looks like this: RTCPeerConnection.createOffer(). Many of the macros offer customization options, but the default is usually acceptable and is almost always better than trying to hand-make the links.

Our style guide talks about where links should be used. We even have a guide to creating links on MDN that covers the most common ways to do so. Start with these guidelines.

The content updates

10 pages were selected for the internal link optimization experiment.

Changes made to the selected pages

In general, the changes made were only to add links to pages; sometimes content had to be added to accomplish this, but even then the changes were kept relatively small.

  • Existing text that should have been a link but was not, such as mentions of terms that need definition or concepts for which relevant pages exist, was turned into links.
  • The uses of API names, element names, attributes, CSS properties, SVG element names, and so forth were turned into links when either first used in a section or if a number of paragraphs had elapsed since they were last a link. While repeated links to the same page don’t count, this is good practice for usability purposes.
  • Any phrase for which a more in-depth explanation is available was turned into a link.
  • Links to related concepts or topics were added where appropriate; for example, on the article about HTMLFormElement.elements, a note is provided with a link to the related Document.forms property.
  • Links to related functions, HTML elements, and similar items were added.
  • The “See also” section was reviewed and updated to include appropriate related content.
Changes made to targeted pages

Targeted pages—pages to which links were added—in some cases got smaller changes made, such as the addition of a link back to the original page, and in some cases new links were added to other relevant content if the pages were particularly in need of help.

Pages to be updated

The pages selected to be updated for this experiment:

The results

The initial data was taken during the four weeks from December 29, 2017 through January 25th, 2018. The “after” data was taken just over a month after the work was completed, covering the period of March 6 through April 2, 2018.

The results from this experiment were fascinating. Of all of the SEO experiments we’ve done, the results of this one were the most consistently positive.

Each landing page below is followed by its change in Unique Pageviews / Organic Searches / Entrances / Hits:

  • /en-us/docs/web/api/document_object_model/locating_dom_elements_using_selectors: +46.38% / +23.68% / +29.33% / +59.35%
  • /en-us/docs/web/api/htmlcollection/item: +52.89% / +35.71% / +53.60% / +38.56%
  • /en-us/docs/web/api/htmlformelement/elements: +69.14% / +57.83% / +69.30% / +70.74%
  • /en-us/docs/web/api/mediadevices/enumeratedevices: +23.55% / +14.53% / +16.71% / +15.67%
  • /en-us/docs/web/api/xmlhttprequest/html_in_xmlhttprequest: +49.93% / -3.50% / +24.67% / +59.63%
  • /en-us/docs/web/api/xmlserializer: +36.24% / +46.94% / +31.50% / +37.66%
  • /en-us/docs/web/css/all: +46.15% / +16.52% / +23.51% / +48.28%
  • /en-us/docs/web/css/inherit: +22.55% / +27.16% / +20.58% / +17.12%
  • /en-us/docs/web/css/object-position: +102.78% / +119.01% / +105.56% / +405.52%
  • /en-us/docs/web/css/unset: +24.60% / +18.45% / +19.20% / +35.01%

These results are amazing. Every single value is up, with the sole exception of a small decline in organic search views (that is, appearances in Google search result lists) for the article “HTML in XMLHttpRequest.” Most values are up substantially, with many being impressively improved.

Note: The data in the table above was updated on April 12, 2018 after realizing the “before” data set was inadvertently one day shorter than the “after” set. This reduced the improvements marginally, but did not affect the overall results.

Uncertainties

Due to the implementation of the experiment and certain timing issues, there are uncertainties surrounding these results. Those include:

  • Ideally, much more time would have elapsed between completing the changes and collecting final data.
  • The experiment began during the winter holiday season, when overall site usage is at a low point.
  • There was overall site growth of MDN traffic over the time this experiment was underway.
Decisions

Certain conclusions can be reached:

  1. The degree to which internal link improvements benefited traffic to these pages can’t be ignored, even after factoring in the uncertainties. This is easily the most benefit we got from any experiment, and on top of that, the amount of work required was often much lower. This should be a high priority portion of our SEO plans.
  2. The MDN meta-documentation will be further improved to enhance recommendations around linking among pages on MDN.
  3. We should consider enhancements to macros used to build links to make them easier to use, especially in cases where we commonly have to override default behavior to get the desired output. Simplifying the use of macros to create links will make linking faster and easier and therefore more common.
  4. We’ll re-evaluate the data after more time has passed to ensure the results are correct.
Discussion?

If you’d like to comment on this, or ask questions about the results or the work involved in making these changes, please feel free to follow up or comment on the thread I’ve created on our Discourse forum.


Francesco Lodolo: Why Fluent Matters for Localization

Mozilla planet - do, 05/04/2018 - 10:29

In case you don’t know what Fluent is, it’s a localization system designed and developed by Mozilla to overcome the limitations of the existing localization technologies. If you have been around Mozilla Localization for a while, and you’re wondering what happened to L20n, you can read this explanation about the relation between these two projects.

With Firefox 58 we started moving Firefox Preferences to Fluent, and today we’re migrating the last pane (Firefox Account – Sync) in Firefox Nightly (61). The work is not done yet, there are still edge cases to migrate in the existing panes, and subdialogs, but we’re on track. If you’re interested in the details, you can read the full journey in two blog posts from Zibi (2017 and 2018), covering not only Fluent, but also the huge amount of work done on the Gecko platform to improve multilingual support.

At this point, you might be wondering: do we really need another localization system? What’s wrong with what we have?

The truth is that there is a lot wrong with the current systems. In Gecko alone, we support 4 different file formats to localize content: .dtd, .properties, .inc, and .ini. And since none of them support plural forms, we built hacks on top of .properties to support pluralization.

Here are a few practical examples of why Fluent is a huge improvement over existing technologies, and will allow us to improve the quality of the localizations we ship.

DTDs and Concatenations

You want to localize this simple fragment of XUL code without using JavaScript.

Please sign in to reconnect

This turns into 2 separate strings in a DTD file, and a long localization comment:

<!ENTITY signedInLoginFailure.beforename.label "Please sign in to reconnect">
<!ENTITY signedInLoginFailure.aftername.label "">

Why the empty string at the end? Because, while English doesn’t need it, other languages might need to change the structure of the sentence, adding content after the email address. On top of that, some localization tools don’t support empty strings correctly, not allowing localizers to mark an empty translation as a “translated” string.

In Fluent, this is simply:

sync-signedin-login-failure = Please sign in to reconnect { $email }

One single string, full visibility on the context, flexibility to move around the email address.

Plural Forms

Plural forms are supported in Gecko only for .properties files. Fluent supports plural forms natively, and with a lot of additional flexibility.

First of all, if you’re not familiar with the complexity of plurals across languages (limiting the discussion to cardinal integer numbers):

  • English, like many other European languages, only has 2 plural forms: n=1 uses one form (“1 page”), all other numbers (n!=1) use a different form (“2 pages”). Sadly, this makes a lot of people think about plural in terms of “1 vs many”, while that’s not really the case for most languages.
  • French still has 2 plural forms, but uses the same form for both 0 and 1.
  • Other languages can only have one form (e.g. Chinese), or have up to 6 different plural forms (e.g. Arabic). Fluent uses the CLDR categories (zero, one, two, few, many, other) to match a number to the correct plural form. For example, in Russian 1 and 21 will use the form “one”, but 11 will use “many” (see the snippet after this list).
  • The behavior might change if the actual number is present or not. For example, Turkic languages don’t need to pluralize a noun after a number (“1 page”), but need plural forms in sentences referring to one or more elements (“this” vs “them”).
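
These CLDR categories are exposed to JavaScript through Intl.PluralRules, which offers a quick way to check which form a given number maps to in a locale:

const ru = new Intl.PluralRules("ru");
console.log(ru.select(1)); // "one"
console.log(ru.select(21)); // "one"
console.log(ru.select(3)); // "few"
console.log(ru.select(11)); // "many"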

Consider for example this use case: in Firefox, the button to set the home page changes from “Use Current Page” to “Use Current Pages”, depending on the number of open tabs.

If you want to use a proper plural form, you need to add the number of tabs to the string. In .properties, it would look like this (plural forms are separated by a semicolon):

use-current-pages = Use Current Page;Use Current #1 Pages

This will force languages to create all plural forms for their locales, even if they might not be needed. If your language has 6 forms, you need to provide all 6 forms, even if they’re all identical. Fun, isn’t it? Note that this is not just a limitation of the plural system used in .properties, the same happens in GetText (.po files).

Here’s how Fluent improves things: first, you don’t need to add all plural forms, you can rely on the fallback to the default value (indicated by *), without raising any error:

use-current-pages =
    .label = { $tabCount ->
       *[other] Использовать текущие страницы
    }

More important, you can match a specific number (1 in this case):

use-current-pages =
    .label = { $tabCount ->
        [1] Использовать текущую страницу
       *[other] Использовать текущие страницы
    }

In Russian, the “[one]” form would also be used for 21 tabs, while here it’s only used for 1 tab.

Let’s assume the English message looks like this:

use-current-pages =
    .label = { $tabCount ->
        [1] Use Current Page
       *[other] Use Current Pages
    }

Do you need to expose the number in your message and treat it like a standard plural form? You can:

use-current-pages =
    .label = { $tabCount ->
        [one] Use { $tabCount } Page
       *[other] Use { $tabCount } Pages
    }

Do you only need one form? Again, you can simplify it into:

use-current-pages =
    .label = Use { $tabCount } Pages

or even:

use-current-pages =
    .label = Use Current Pages

Variants

This is one of the most exciting changes introduced to the localization paradigm.

Consider this example: “Firefox Account” is a special brand within Firefox. While “Firefox” itself should not be localized or declined, “account” can be localized and moved. In Italian it’s “Account Firefox”, in Spanish “Cuenta de Firefox”.

A special entity is defined in order to be reused in other strings:

<!ENTITY syncBrand.fxAccount.label "Firefox Account">

For example:

<!ENTITY signedOut.accountBox.title "Connect with a &syncBrand.fxAccount.label;">

In Italian this results in “Connetti &syncBrand.fxAccount.label;”. It’s not natural, and it looks wrong, because we don’t capitalize nouns in the middle of a sentence.

My only option to improve the translation, and make it sound more natural, would have been to drop the entity and just add the translated name. That defeats the entire concept of having a central definition for the brand.

Here’s what I can do in Fluent. The brand is defined as a term, a special type of message that can only be referenced from other strings (not code), and can have additional attributes.

-fxaccount-brand-name = Firefox Account
sync-signedout-account-title = Connect with a { -fxaccount-brand-name }

And now in Italian I can do:

-fxaccount-brand-name =
    {
        [lowercase] account Firefox
       *[uppercase] Account Firefox
    }
sync-signedout-account-title = Connetti il tuo { -fxaccount-brand-name[lowercase] }

While uppercase vs lowercase is a trivial example, variants can have a much deeper impact on localization quality for complex languages that use declensions, where the word “account” changes based on its role within the sentence (nominative, accusative, etc.).

This is only the tip of the iceberg, there’s more you can do with Fluent, and the new localization API will allow us to drastically improve the experience for non English users in Firefox. Here are some additional links if you want to learn more about Fluent:


Daniel Stenberg: curl another host

Mozilla planet - do, 05/04/2018 - 10:26

Sometimes you want to issue a curl command against a server, but you don't really want curl to resolve the host name in the given URL and use that; you want to tell it to go elsewhere. To the "wrong" host, which in this case of course happens to be the right host. Because you know better.

Don't worry. curl covers this as well, in several different ways...

Fake the host header

The classic and easy-to-understand way to send a request to the wrong HTTP host is to simply send a different Host: header so that the server will provide a response for that given server.

If you run your "example.com" HTTP test site on localhost and want to verify that it works:

curl --header "Host: example.com" http://127.0.0.1/

curl will also make cookies work for example.com in this case, but it will fail miserably if the page redirects to another host and you enable redirect-following (--location) since curl will send the fake Host: header in all further requests too.

The --header option cleverly cancels the built-in provided Host: header when a custom one is provided so only the one passed in from the user gets sent in the request.

Fake the host header better

We're using HTTPS everywhere these days and just faking the Host: header is not enough then. An HTTPS server also needs to get the server name provided already in the TLS handshake so that it knows which cert etc to use. The name is provided in the SNI field. curl also needs to know the correct host name to verify the server certificate against (server certificates are rarely registered for an IP address). curl extracts the name to use in both those case from the provided URL.

As we can't just put the IP address in the URL for this to work, we reverse the approach and instead give curl the proper URL but with a custom IP address to use for the host name we set. The --resolve command line option is our friend:

curl --resolve example.com:443:127.0.0.1 https://example.com/

Under the hood this option populates curl's DNS cache with a custom entry for "example.com" port 443 with the address 127.0.0.1, so when curl wants to connect to this host name, it finds your crafted address and connects to that instead of the IP address a "real" name resolve would otherwise return.

This method also works perfectly when following redirects since any further use of the same host name will still resolve to the same IP address and redirecting to another host name will then resolve properly. You can even use this option multiple times on the command line to add custom addresses for several names. You can also add multiple IP addresses for each name if you want to.

Connect to another host by name

As shown above, --resolve is awesome if you want to point curl to a specific known IP address. But sometimes that's not exactly what you want either.

Imagine you have a host name that resolves to a number of different host names, possibly a number of front end servers for the same site/service. Not completely unheard of. Now imagine you want to issue your curl command to one specific server out of the front end servers. It's a server that serves "example.com" but the individual server is called "host-47.example.com".

You could resolve the host name in a first step before curl is used and use --resolve as shown above.

Or you can use --connect-to, which instead works on a host name basis. Using this, you can make curl replace a specific host name + port number pair with another host name + port number pair before the name is resolved!

curl --connect-to example.com:443:host-47.example.com:443 https://example.com/

Crazy combos

Most options in curl are individually controlled which means that there's rarely logic that prevents you from using them in the awesome combinations that you can think of.

--resolve, --connect-to and --header can all be used in the same command line!

Connect to a HTTPS host running on localhost, use the correct name for SNI and certificate verification, but then still ask for a separate host in the Host: header? Sure, no problem:

curl --resolve example.com:443:127.0.0.1 https://example.com/ --header "Host: diff.example.com"

All the above with libcurl?

When you're done playing with the curl options as described above and want to convert your command lines to libcurl code instead, your best friend is called --libcurl.

Just append --libcurl example.c to your command line, and curl will generate the C code template for you in that given file name. Based on that template, making use of that code correctly is usually straightforward, and you'll get all the options to read up on in a convenient way.

Good luck!

Update: thanks to @Manawyrm, I fixed the ndash issues this post originally had.


Mozilla Addons Blog: April’s Featured Extensions

Mozilla planet - do, 05/04/2018 - 00:23


Pick of the Month: Passbolt

by Passbolt
A password manager made for secure team collaboration. Great for safekeeping wifi and admin passwords, or login credentials for your company’s social media accounts.

“The interface is user-friendly, installation is explained step by step.”

Featured: FoxClocks

by Andy McDonald
Display times from locations all over the world in a tidy spot at the bottom of your browser.

“This is one of the best extensions for people who work with globally distributed teams.”

Featured: Video DownloadHelper

by mig
A simple but powerful tool for downloading video. Works well with the web’s most popular video sharing sites, like YouTube, Vimeo, Facebook, and others.

“Without a doubt the best completely trouble-free add-on which does exactly what it claims to do without fuss, complicated settings, or preferences. It just works.”

Nominate your favorite add-ons

Featured extensions are selected by a community board made up of developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post April’s Featured Extensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Air Mozilla: Bugzilla Project Meeting, 04 Apr 2018

Mozilla planet - wo, 04/04/2018 - 22:00

Bugzilla Project Meeting: the Bugzilla Project developers meeting.

Categorieën: Mozilla-nl planet

Air Mozilla: Weekly SUMO Community Meeting, 04 Apr 2018

Mozilla planet - wo, 04/04/2018 - 18:00

Weekly SUMO Community Meeting: this is the SUMO weekly call.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Socorro Smooth Mega-Migration 2018

Mozilla planet - wo, 04/04/2018 - 18:00
Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user whether they'd like to send a crash report. If the user answers "yes!", the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits it as an HTTP POST to Socorro. Socorro collects and saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

Over the last year and a half, we've been working on a new infrastructure for Socorro and migrating the project to it. It was a massive undertaking that involved rewriting a lot of code, changing some of the architecture, and then redoing all the infrastructure scripts and deploy pipelines.

On Thursday, March 28th, we pushed the button and switched to the new infrastructure. The transition was super smooth. Now we're on new infra!

This blog post talks a little about the old and new infrastructures and the work we did to migrate.

Read more… (13 mins to read)

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: JavaScript to Rust and Back Again: A wasm-bindgen Tale

Mozilla planet - wo, 04/04/2018 - 16:58

Recently we’ve seen how WebAssembly is incredibly fast to compile, speeding up JS libraries, and generating even smaller binaries. We’ve even got a high-level plan for better interoperability between the Rust and JavaScript communities, as well as other web programming languages. As alluded to in that previous post, I’d like to dive into more detail about a specific component, wasm-bindgen.

Today the WebAssembly specification only defines four types: two integer types and two floating-point types. Most of the time, however, JS and Rust developers are working with much richer types! For example, JS developers often interact with document to add or modify HTML nodes, while Rust developers work with types like Result for error handling, and almost all programmers work with strings.

Being limited only to the types that WebAssembly provides today would be far too restrictive, and this is where wasm-bindgen comes into the picture. The goal of wasm-bindgen is to provide a bridge between the types of JS and Rust. It allows JS to call a Rust API with a string, or a Rust function to catch a JS exception. wasm-bindgen erases the impedance mismatch between WebAssembly and JavaScript, ensuring that JavaScript can invoke WebAssembly functions efficiently and without boilerplate, and that WebAssembly can do the same with JavaScript functions.

The wasm-bindgen project has some more description in its README. To get started here, let’s dive into an example of using wasm-bindgen and then explore what else it has to offer.

Hello, World!

Always a classic, one of the best ways to learn a new tool is to explore its equivalent of printing “Hello, World!” In this case we’ll explore an example that does just that — alert “Hello, World!” to the page.

The goal here is simple: we'd like to define a Rust function that, given a name, will create a dialog on the page saying Hello, ${name}!. In JavaScript we might define this function as:

export function greet(name) {
    alert(`Hello, ${name}!`);
}

The caveat for this example, though, is that we'd like to write it in Rust. Already there are a number of pieces happening here that we'll have to work with:

  • JavaScript is going to call into a WebAssembly module, namely the greet export.
  • The Rust function will take a string as an input, the name that we’re greeting.
  • Internally Rust will create a new string with the name interpolated inside.
  • And finally Rust will invoke the JavaScript alert function with the string that it has created.

To get started, we create a fresh new Rust project:

$ cargo new wasm-greet --lib

This initializes a new wasm-greet folder, which we work inside of. Next up we modify our Cargo.toml (the equivalent of package.json for Rust) with the following information:

[lib]
crate-type = ["cdylib"]

[dependencies]
wasm-bindgen = "0.2"

We'll ignore the [lib] business for now; the next part declares a dependency on the wasm-bindgen crate, which contains all the support we need to use wasm-bindgen in Rust.

Next up, it’s time to write some code! We replace the auto-created src/lib.rs with these contents:

#![feature(proc_macro, wasm_custom_section, wasm_import_module)]

extern crate wasm_bindgen;

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern {
    fn alert(s: &str);
}

#[wasm_bindgen]
pub fn greet(name: &str) {
    alert(&format!("Hello, {}!", name));
}

If you’re unfamiliar with Rust this may seem a bit wordy, but fear not! The wasm-bindgen project is continually improving over time, and it’s certain that all this won’t always be necessary. The most important piece to notice is the #[wasm_bindgen] attribute, an annotation in Rust code which here means “please process this with a wrapper as necessary”. Both our import of the alert function and export of the greet function are annotated with this attribute. In a moment, we’ll see what’s happening under the hood.

But first, we cut to the chase and open this up in a browser! Let’s compile our wasm code:

$ rustup target add wasm32-unknown-unknown --toolchain nightly # only needed once
$ cargo +nightly build --target wasm32-unknown-unknown

This gives us a wasm file at target/wasm32-unknown-unknown/debug/wasm_greet.wasm. If we look inside this wasm file with tools like wasm2wat, it might be a bit scary. It turns out this wasm file isn't actually ready to be consumed by JS just yet! Instead, we need one more step to make it usable:

$ cargo install wasm-bindgen-cli # only needed once
$ wasm-bindgen target/wasm32-unknown-unknown/debug/wasm_greet.wasm --out-dir .

This step is where a lot of the magic happens: the wasm-bindgen CLI tool postprocesses the input wasm file, making it suitable to use. We'll see a bit later what "suitable" means; for now it suffices to say that if we import the freshly created wasm_greet.js file (created by the wasm-bindgen tool), we've got the greet function that we defined in Rust.

Finally, all we’ve got to do is package it up with a bundler, and create an HTML page to run our code. At the time of this writing, only Webpack’s 4.0 release has enough WebAssembly support to work out-of-the box (although it also has a Chrome caveat for the time being). In time, more bundlers will surely follow. I’ll skip the details here, but you can follow the example configuration in the Github repo. If we look inside though, our JS for the page looks like:

const rust = import("./wasm_greet");
rust.then(m => m.greet("World!"));

…and that’s it! Opening up our webpage should now show a nice “Hello, World!” dialog, driven by Rust.

How wasm-bindgen works

Phew, that was a bit of a hefty “Hello, World!” Let’s dive into a bit more detail as to what’s happening under the hood and how the tooling works.

One of the most important aspects of wasm-bindgen is that its integration is fundamentally built on the concept that a wasm module is just another kind of ES module. For example, above we wanted an ES module with a signature (in TypeScript) that looks like:

export function greet(s: string);

WebAssembly has no way to natively do this (remember that it only supports numbers today), so we’re relying on wasm-bindgen to fill in the gaps. In our final step above, when we ran the wasm-bindgen tool you’ll notice that a wasm_greet.js file was emitted alongside a wasm_greet_bg.wasm file. The former is the actual JS interface we’d like, performing any necessary glue to call Rust. The *_bg.wasm file contains the actual implementation and all our compiled code.

When we import the ./wasm_greet module, we get the interface that the Rust code wanted to expose but couldn't quite express natively today. Now that we've seen how the integration works, let's follow the execution of our script to see what happens. First up, our example ran:

const rust = import("./wasm_greet");
rust.then(m => m.greet("World!"));

Here we’re asynchronously importing the interface we wanted, waiting for it to be resolved (by downloading and compiling the wasm). Then we call the greet function on the module.

Note: The asynchronous loading here is required today with Webpack, but this will likely not always be the case and may not be a requirement for other bundlers.

If we take a look inside the wasm_greet.js file that the wasm-bindgen tool generated we’ll see something that looks like:

import * as wasm from './wasm_greet_bg';

// ...

export function greet(arg0) {
    const [ptr0, len0] = passStringToWasm(arg0);
    try {
        const ret = wasm.greet(ptr0, len0);
        return ret;
    } finally {
        wasm.__wbindgen_free(ptr0, len0);
    }
}

export function __wbg_f_alert_alert_n(ptr0, len0) {
    // ...
}

Note: Keep in mind this is unoptimized and generated code, it may not be pretty or small! When compiled with LTO (Link Time Optimization) and a release build in Rust, and after going through JS bundler pipelines (minification) this should be much smaller.

Here we can see how the greet function is being generated for us by wasm-bindgen. Under the hood it still calls the wasm greet function, but it is called with a pointer and a length rather than a string. For more detail about passStringToWasm, see Lin Clark's previous post. This is all boilerplate you'd otherwise have to write yourself, except that the wasm-bindgen tool takes care of it for us! We'll get to the __wbg_f_alert_alert_n function in a moment.

Moving a level deeper, the next item of interest is the greet function in WebAssembly. To look at that, let's see what code the Rust compiler sees. Note that, like the JS wrapper generated above, you're not writing the greet exported symbol here; rather, the #[wasm_bindgen] attribute generates a shim that does the translation for you, namely:

pub fn greet(name: &str) {
    alert(&format!("Hello, {}!", name));
}

#[export_name = "greet"]
pub extern fn __wasm_bindgen_generated_greet(arg0_ptr: *mut u8, arg0_len: usize) {
    let arg0 = unsafe {
        ::std::slice::from_raw_parts(arg0_ptr as *const u8, arg0_len)
    };
    let arg0 = unsafe { ::std::str::from_utf8_unchecked(arg0) };
    greet(arg0);
}

Here we can see our original greet function, but the #[wasm_bindgen] attribute inserted this funny-looking function __wasm_bindgen_generated_greet. This is an exported function (specified with #[export_name] and the extern keyword) that takes the pointer/length pair the JS passed in. Internally it then converts the pointer/length to a &str (Rust's borrowed string slice type) and forwards it to the greet function we defined.

Put a different way, the #[wasm_bindgen] attribute generates two wrappers: one in JavaScript that converts JS types to wasm types, and one in Rust that receives wasm types and converts them to Rust types.

Ok let’s look at one last set of wrappers, the alert function. The greet function in Rust uses the standard format! macro to create a new string and then passes it to alert. Recall though that when we declared the alert function we declared it with #[wasm_bindgen], so let’s see what rustc sees for this function:

fn alert(s: &str) {
    #[wasm_import_module = "__wbindgen_placeholder__"]
    extern {
        fn __wbg_f_alert_alert_n(s_ptr: *const u8, s_len: usize);
    }
    unsafe {
        let s_ptr = s.as_ptr();
        let s_len = s.len();
        __wbg_f_alert_alert_n(s_ptr, s_len);
    }
}

Now that’s not quite what we wrote, but we can sort of see how this is coming together. The alert function is actually a thin wrapper which takes the Rust &str and then converts it to wasm types (numbers). This calls our __wbg_f_alert_alert_n funny-looking function we saw above, but a curious aspect of this is the #[wasm_import_module] attribute.

All function imports in WebAssembly have a module that they come from, and since wasm-bindgen is built on ES modules, this too will be interpreted as an ES module import! The __wbindgen_placeholder__ module doesn't actually exist; it indicates that this import will be rewritten by the wasm-bindgen tool to import from the JS file we generated.

And finally, for the last piece of the puzzle, we’ve got our generated JS file which contains:

export function __wbg_f_alert_alert_n(ptr0, len0) {
    let arg0 = getStringFromWasm(ptr0, len0);
    alert(arg0);
}

Wow! It turns out that quite a bit is happening under the hood, and we've got a relatively long chain from greet in JS to alert in the browser. Fear not, though: the key aspect of wasm-bindgen is that all this infrastructure is hidden! You only need to write Rust code with a few #[wasm_bindgen] annotations here and there, and then your JS can use Rust as if it were just another JS package or module.

What else can wasm-bindgen do?

The wasm-bindgen project is ambitious in scope and we don’t quite have the time to go into all the details here. A great way to explore the functionality in wasm-bindgen is to explore the examples directory which range from Hello, World! like we saw above to manipulating DOM nodes entirely in Rust.

At a high-level the features of wasm-bindgen are:

  • Importing JS classes, functions, objects, etc., to call from wasm. You can call JS methods and access properties on imported objects, giving a bit of a “native” feel to the Rust you write once the #[wasm_bindgen] annotations are all hooked up.

  • Exporting Rust structures and functions to JS. Instead of having JS work only with numbers, you can export a Rust struct that turns into a class in JS. Then you can pass structs around instead of only integers. The smorgasboard example gives a good taste of the interop supported.

  • Allowing other miscellaneous features, such as importing from the global scope (e.g. the alert function), catching JS exceptions with a Result in Rust, and a generic way to store JS values in a Rust program. A minimal sketch of a few of these features follows this list.
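
To give a flavor of these features, here is a minimal, hypothetical Rust sketch (the Logger, might_throw, and Counter names are invented for illustration, and the exact attribute syntax may evolve along with the tool):

#![feature(proc_macro, wasm_custom_section, wasm_import_module)]

extern crate wasm_bindgen;

use wasm_bindgen::prelude::*;

#[wasm_bindgen]
extern {
    // An imported JS type and a method we can call on it.
    type Logger;
    #[wasm_bindgen(method)]
    fn log(this: &Logger, msg: &str);

    // A JS function whose exceptions are caught as a Result in Rust.
    #[wasm_bindgen(catch)]
    fn might_throw() -> Result<(), JsValue>;
}

// An exported Rust struct, exposed to JS as a class.
#[wasm_bindgen]
pub struct Counter {
    count: u32,
}

#[wasm_bindgen]
impl Counter {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Counter {
        Counter { count: 0 }
    }

    pub fn increment(&mut self) -> u32 {
        self.count += 1;
        self.count
    }
}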

If you’re curious to see more functionality, stay tuned especially with the issue tracker!

What’s next for wasm-bindgen?

Before we close out I’d like to take a moment and describe the future vision for wasm-bindgen because I think it’s one of the most exciting aspects of the project today.

Supporting more than just Rust

From day 1, the wasm-bindgen CLI tool was designed with multiple language support in mind. While Rust is the only supported language today, the tool is designed to plug in C or C++ as well. The #[wasm_bindgen] attribute creates a custom section of the output *.wasm file which the wasm-bindgen tool parses and later removes. This section describes what JS bindings to generate and what their interface is. There’s nothing Rust-specific about this description, so a C++ compiler plugin could just as easily create the section and get processed by the wasm-bindgen tool.

I find this aspect especially exciting because I believe it enables tooling like wasm-bindgen to become a standard practice for integrating WebAssembly and JS. Hopefully it can benefit all languages compiling to WebAssembly and be recognized automatically by bundlers, avoiding almost all of the configuration and build tooling shown above.

Automatically binding the JS ecosystem

One of the downsides today when importing functionality with the #[wasm_bindgen] macro is that you have to write everything out and make sure you haven’t made any mistakes. This can sometimes be a tedious (and error-prone) process that is ripe for automation.

All web APIs are specified with WebIDL, and it should be quite feasible to generate #[wasm_bindgen] annotations from WebIDL. This would mean you wouldn't need to define the alert function like we did above; instead, you'd just write something like:

#[wasm_bindgen]
pub fn greet(s: &str) {
    webapi::alert(&format!("Hello, {}!", s));
}

In this case, the webapi crate could automatically be generated entirely from the WebIDL descriptions of web APIs, guaranteeing no errors.

We can even take this a step further and leverage the awesome work of the TypeScript community and generate #[wasm_bindgen] from TypeScript as well. This would allow automatic binding of any package with TypeScript available on npm for free!

Faster-than-JS DOM performance

And last, but not least, on the wasm-bindgen horizon: ultra-fast DOM manipulation — the holy grail of many JS frameworks. Today calls into DOM functions have to go through costly shims as they transition from JavaScript to C++ engine implementations. With WebAssembly, however, these shims will not be necessary. WebAssembly is known to be well typed… and, well, has types!

The wasm-bindgen code generation has been designed with the future host bindings proposal in mind from day 1. As soon as that feature is available in WebAssembly, we'll be able to directly invoke imported functions without any of wasm-bindgen's JS shims. Furthermore, this will allow JS engines to aggressively optimize WebAssembly manipulating the DOM, as invocations are well-typed and no longer need the argument validation checks that calls from JS need. At that point wasm-bindgen will not only make it easy to work with richer types like strings, but it will also provide best-in-class DOM manipulation performance.

Wrapping up

I’ve personally found that WebAssembly is incredibly exciting to work with not only because of the community but also the leaps and bounds of progress being made so quickly. The wasm-bindgen tool has a bright future ahead. It’s making interoperability between JS and languages like Rust a first-class experience while also providing long-term benefits as WebAssembly continues to evolve.

Try giving wasm-bindgen a spin, open an issue for feature requests, and otherwise stay involved with Rust and WebAssembly!

Categorieën: Mozilla-nl planet
