
The Dutch Mozilla community (Mozilla Nederland)

Planet Mozilla - http://planet.mozilla.org/
Updated: 4 months, 4 days ago

Mozilla Localization (L10N): L10n Report: October Edition

Wed, 11/10/2017 - 21:00

Please note that some of the information provided in this report may be subject to change, as we sometimes share information about projects that are still in early stages and not yet final.

Welcome!

New localizers

  • Mika just started getting involved in the Finnish localization team. Welcome Mika!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Firefox 57 is now in Beta. The deadline to get your localization updates into the release is November 1st. Please make sure to catch up on missing strings (if needed) and get as many eyes as possible on the version of Firefox localized in your language.

On a positive note, now that cross-channel is ready and running, you only have to translate strings once for both Nightly and Beta.

Going into localization details: we’ll soon start localizing the Form Autofill system add-on, while we’re studying how to expose strings for about:studies.

Nepali (ne-NP), after a bit of a pause, is trying to ride the release train with 58: if you speak the language, download the Nightly build and help them find and fix issues.

Do you speak Assamese, Interlingua, Lao, Latgalian, Maithili, Malayalam, or Tagalog and want to help? Read this blog post.

What’s new or coming up in mobile

Focus iOS and Focus Android are not only following a bi-weekly release cadence now – the releases are finally synced! Check out the Focus iOS release schedule, as well as the Focus Android one, for more specific details. Note that we no longer put the l10n deadlines in Pontoon, since we would have to change them every two weeks.

Firefox iOS strings for v10 should be coming shortly. There will be some great improvements in v10, so stay tuned on the dev-l10n mailing list to know when that happens (the l10n deadline will also be entered in Pontoon)!

On a related note: have you noticed that Firefox iOS v9 now has Tracking Protection turned on by default for Private Browsing mode – and that it can easily be enabled for normal mode? It was one of the most requested features, and was therefore implemented. Firefox iOS 9.0 also provides improved bookmark synchronization across devices, making the sync functionality – updating passwords, history, and bookmarks between mobile and desktop – better and more seamless.

On the Focus Android side, there are now multiple tabs! More information about the new Firefox iOS and Focus Android features can be found here. You can also join the public beta channel for Focus on Android on the Google Play Store now. Set up auto-update in the Google Play Store to automatically get new beta versions as soon as they are available. More details here.

And last but not least, there’s a new mobile project targeting Indonesia, called Firefox Rocket. The soft launch was Monday, and it was released as a public beta. It is an experimental, fast, and lightweight browser made for the needs of emerging markets. The actual launch is targeted for some time in November. If you are not in Indonesia, you can try out the app via the APK here.

As you can see, many of our projects are not only growing and improving – more and more new ones are created as well! As part of this ongoing growth, we’d also like to expand the number of localizations that we ship on mobile. As a first step in this direction, we’d like to invite anyone speaking Swahili or Amharic to come join the localization effort. Feel free to contact Delphine about this if you’re interested!

What’s new or coming up in web projects

[Mozilla.org]: Thanks to all the communities for working hard to clear the web project dashboard. The anticipated new and updated pages will be coming your way on a weekly basis in the next few weeks. Keep an eye on the mailing list and always refer to the web dashboard for pending tasks.

[Marketing]: Firefox Rocket campaign messages written for the Indonesian market were prepared and localized by a local marketing agency. Huge thanks to the Indonesian community members who were involved in reviewing and revising the campaign messages.

[Engagement]: Expect requests for the monthly snippets and emails in the default language sets in the weeks leading up to the Firefox Quantum launch. Also, the monthly email content is now staged in Pontoon for localization. If your community wants to move this project from the spreadsheet to Pontoon, let us know!

[Legal]:

  • The Firefox Privacy Notice was rewritten for #56 and localized in a select few languages. Thanks to all who took the time to review and fix linguistic and formatting errors. Your feedback was relayed to the agency. There will be an update for #57. Impacted teams will be notified when the updates are ready for review.
  • Firefox + Cliqz Privacy Notice in German: This was rewritten, but largely leveraged the new Firefox Privacy Notice. The German docs were reviewed by legal counsel hired by the legal team. There could be a further revision as well.
  • Firefox Rocket Privacy Notice in Indonesian: This is a brand new doc for a brand new product. The l10n community will be notified when it is ready for review.

What’s new or coming up in Foundation projects

We are happy to report some nice improvements to the donate experience: we’ve finally added support for SEPA donations! And we also removed fees for checks in foreign currencies. All that applies to donations to both the Mozilla Foundation and Thunderbird. We expect this to have a positive impact on donations and to lower our donors’ frustration. We’ve updated the FAQs and donation instructions, so if your locale is not complete for the Fundraising project, help us reach 100% before our big push over the next weeks!

The next steps for fundraising are tweaking our snippets (a longer copy test should start very soon), then localizing emails starting mid-November; those should be sent from the end of November until the end of the year.

After several schedule changes at the European Parliament, the Copyright campaign is kicking off again this week.

We’ve got great findings to report after analyzing the (many) responses to the IoT survey launched in August. If you want to be among the first people to read them, help us localize our report in German, Spanish, French, Italian or Brazilian Portuguese!

We are also working in the U.S. with Univision on a holiday guide in English and Spanish featuring product reviews (of toys, game systems, fitness trackers, home assistants, smart TVs, and more) written by experts in our network. The main goal is to help people make educated choices, taking privacy & security into consideration before buying new connected devices.

Finally, MozFest is starting in a few days in London! If you’re attending, you may run into Théo. Say hi!

What’s new or coming up in Pontoon

As of the last week of September, Pontoon exposes data about projects and locales through a publicly available API. Read more about it in the blog post. If you’re interested in the planning process for the upcoming milestones, refer to the L10n:Pontoon/API wiki page.
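For illustration, here is roughly what querying that API could look like from a script. The endpoint and field names below are assumptions, so check the blog post and the wiki page above for the actual schema.

```js
// Rough sketch only: the /graphql endpoint and the field names here are
// assumptions; refer to the announcement blog post and the
// L10n:Pontoon/API wiki page for the real schema.
fetch('https://pontoon.mozilla.org/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: '{ projects { name slug } }' }),
})
  .then(response => response.json())
  .then(data => console.log(data));
```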

We’ve also landed several optimizations: searching for strings is now 60% faster, and we’ve improved load times of various pages.

What’s new or coming up in Transvision

In the next few days, Transvision will support cross-channel for Gecko-based products. This means you will be able to search for strings in Beta and Nightly from a single place (and in Release as soon as 57 is released).

We are also adding support for Focus for iOS and Android! There will be some minor improvements as well, so stay tuned.

Newly published localizer-facing documentation

Several documents have been published over the last month. Check them out!

Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include it (see links to emails at the bottom of this report).
Friends of the Lion

Image by Elio Qoshi

  • Benny Chandra is instrumental in the success of a brand new Mozilla product launch in Indonesia – a test market. He led the community in advising the marketing team on the English messaging, and in reviewing and revising localized copy from a local marketing agency. Thank you for always being responsive and for finding the time to address requests from the marketing team.
  • Kohei’s work and impact can be seen in many web projects. He is the unofficial liaison between the legal team and the l10n team. He is the go-to person to review updates and to manage and coordinate the staging and production pushes, which are manual. His involvement has been much needed during the recent rounds of legal documentation updates.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.


Air Mozilla: The Joy of Coding - Episode 116

Wed, 11/10/2017 - 19:00

The Joy of Coding - Episode 116 mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Singapore Math

Wed, 11/10/2017 - 19:00

Singapore Math Why your kids should learn math the Singapore Math way: Singapore is the #1 ranked country globally*, and it is due in large part to...


Air Mozilla: Weekly SUMO Community Meeting October 11, 2017

Wed, 11/10/2017 - 18:00

Weekly SUMO Community Meeting October 11, 2017 This is the SUMO weekly call


The Firefox Frontier: No-Judgment Digital Definitions: Internet, Search Engine, Browser

Wed, 11/10/2017 - 15:05

Real talk: this web stuff can get confusing. And it’s really important that we all understand how it works, so we can be as informed and empowered as possible. Let’s … Read more

The post No-Judgment Digital Definitions: Internet, Search Engine, Browser appeared first on The Firefox Frontier.


Emma Humphries: What Happened to the Weekly Triage Reports?

Wed, 11/10/2017 - 08:48

I’m not finding them useful. They were reporting on bugs going back to June of last year, but not capturing release cycles.

I have a spreadsheet for watching bugs which need a decision for the Firefox Quantum (57) release.

I’m defining decision differently than triage. Triage is an engineering decision on priority; decision means whether the bug would affect 57. And there are plenty of bugs to review. It’s a large haystack, and we’re trying ways to find the bugs that could affect our release.

If you want to see all un-triaged bugs, you can still view the snapshot report.




Mozilla Open Innovation Team: A Framework of Open Practices

Tue, 10/10/2017 - 21:12

This is the second in a series of posts describing findings from industry research into best practices around open, collaborative methods and how companies share knowledge, work, or influence in order to shape a market towards their business goals. This blog post introduces a framework of open practices co-developed with the Copenhagen Institute for Interaction Design (CIID) that may help other organisations as they evaluate and implement open and participatory strategies themselves.

Mozilla has been developing open source software and managing open source communities since its inception. Some of the most significant innovations in Firefox came from outside the boundaries of the organization — such as tabbed browsing, pop-up blockers, and the awesome bar. Further, crucial factors in Firefox’s global success, such as product localization and technical support, were only possible through countless hours of work and dedication from external communities and contributors. With Add-ons, Mozilla also took a major architectural decision with Firefox: not to build every feature, but to focus on basic excellence and then create opportunity — and a platform — for others. This allowed more people to deliver more value to Firefox users, creating a completely personalized web experience.

Revitalising Open and Innovation

Firefox is widely considered a landmark in open source software production, and the use of several different open practices (as we call them) gave Mozilla a way to compete asymmetrically with much larger organizations.

In the decade since Firefox launched, Mozilla’s portfolio of technology projects has become much more diverse, and this in turn calls for a more systematic way to identify competitive advantage through open practices. We’ve experimented with different practices in order to solicit external ideas and foster research-based relationships. Recent examples include the Mozilla Awards grant program, the Equal Rating Innovation Challenge, and sponsorship of projects at the margins of Mozilla development, such as the C-to-Rust translation project Corrode. And with the revival of the Test Pilot program, the Firefox team has a way for users to try out experimental features and to help determine which of these ultimately end up in a Firefox release.

From Experiments to Strategy

We’ve been encouraged by the outcomes of these explorations. We therefore broadened efforts in working with users, developers, and industry allies in a more structured and comprehensive way.

We researched activation techniques to build communities and work across organizational boundaries — throughout the product lifecycle — in multiple industries. Many of the techniques and practices identified were not new, but their goal-oriented application and scale in different technology ecosystems clearly was.

However, just knowing what others do is only the first step. Adapting and applying those learnings to your own working processes and mental models around product and technology development is another. For that reason, we developed a framework that could help guide decisions and support our conversations and thinking.

A Framework for Considering Benefits of Open Practices

As we said earlier: Being Open by Design demands clarity on why you’re doing something and what the intended outcomes are. Together with CIID we took a closer look — through the lens of a software and technology organisation — at key benefits of open practices. We organised a list of 12 key potential benefits into three overall categories in which companies are competing:

Benefits of Open Practices

In our study of organisations that are building value using open practices, and of the literature of Open Innovation, we’ve furthermore distilled six major ways of building value together with an outside community or organisation.

Gifting, or simply giving away something of value for others to adopt in creating value for themselves, is not a novel one: the “loss leader” has been with us for many years. However, this practice has seen increasing adoption by commercial software firms where development costs may be sunk, distribution costs are zero and they are able to capitalise on the resulting installed base.

Co-creating Together, i.e. inviting others to contribute to a set objective or project is familiar in the open source environment, and has become more widespread in other industries, where it has been applied to the entire product development cycle, from setting strategy, to designing, building, to broadening mindshare.

Soliciting Ideas, or asking a question or giving a challenge to either specific communities or ‘the crowd’, isn’t quite a novelty. But driven by technology over the past years this practice is now much more efficient and scalable, and has developed into a tool that is delivering visible and measurable value.

Companies are increasingly able to create value by closely studying usage patterns, and offering subsequent enhancements of products and services. Learning Through Use is the way we have characterised this type of offering: in the new age of constant connectivity and Big Data, offerings built on such practices are more prevalent and value-generating than ever.

Many organizations are finding ways to Enhance the Value Exchange between individuals or organisations, via services or technology platforms, and through this interaction create new kinds of businesses and value reliant on the interaction. This type of practice is prevalent among Internet businesses.

Lastly, by looking outside to larger societal, shared agendas, organisations are achieving business objectives through Networking Common Interests to build mutually-reinforcing commercial relationships with the power to attract a passionate community. Or in other words: facilitating a diversity of networks and communities in such that activities achieve more together than possible for a lone actor. This is a well-established practice (one can look at standardisation activities in this light, for example), but it is now enabled on new and unprecedented scales by the internet.

The three overall benefit areas outlined earlier can be considered in relation to the six methods of interaction in a matrix. We have used this simple framework to facilitate exploration of potential opportunities for Mozilla. We typically do this by considering all the stakeholders in any given technology ecosystem, and from there asking ourselves, “how might we engage them through these different approaches?”.

Framework of Open Practices

It’s a practical impossibility (and therefore not the best use of time trying) to produce an exhaustive list of open practices. That wasn’t our goal. And you will probably find enough practices today that may blur the boundaries in this model: Open source projects, for example, may represent Creating Together, but they may equally be about Gifting or even Networking Interests. The point of this framework, then, is to stimulate thinking about how you arrive at value creation, rather than to be concerned with the details of any specific practice.

In upcoming blog posts, we will explore this framework in more detail, through stories of organisations which successfully applied combinations of open practices to realize their specific objectives.

Gitte Jonsdatter (CIID) & Alex Klepel

A Framework of Open Practices was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Firefox Test Pilot: Firefox Containers are Go!

Tue, 10/10/2017 - 20:57
Containers are Go!

tl;dr

Containers is an innovative and unique Firefox feature. The reception has been overwhelmingly positive, and so we used a Test Pilot experiment to refine our user experience. We have graduated Containers from Test Pilot to addons.mozilla.org to make it available to all Firefox users!

Firefox Multi-Account Containers

This post provides details of our experience in Test Pilot. We learned:

  • People make heavy use of Container tabs, customize their Containers, and use Containers & tabs in a variety of ways
  • Key bug-fixes and enhancement areas for Containers
  • Which underlying contextualIdentities APIs will be needed most for add-on developers to make Containers add-ons covering many more use-cases
A brief history of Containers

We all do lots of different things online: work, shopping, research, watching too many YouTube videos etc. etc. Containers is an experimental feature that lets users separate their browsing into distinct contexts within Firefox by isolating things like cookies, local storage, and caches into discrete buckets. In practical terms this system has several positive benefits:

  • Online privacy: Online ads and cookies cannot follow users from one Container to the next.
  • Account Management: Multi-account users can stay logged in to multiple account instances at the same time.
  • Organization: For heavy tab users, Containers add a layer of visual organization to the Firefox interface.

Containers began as an experiment in Firefox Nightly, and received extremely positive notice from the tech press. From the start, we believed the feature showed great potential, but it suffered from a number of underlying usability issues. Missing features and a lack of easily accessible entry points made Containers clunky to use. More significantly, it wasn’t clear how to explain the feature to mainstream users. With so many potential use-cases, which one would resonate most with users? What were the big user problems that Containers solved? How could we introduce users to such a novel user interface?

So we went to work, gathering feedback from our Nightly users to determine what features were lacking from Containers. In addition, we conducted a design sprint to revisit underlying assumptions about contextual identity online. The prototypes that emerged from this sprint laid the foundation for the presentation and UX of the eventual Test Pilot experiment.

Early prototype from our design sprint

Here’s how it went down

In the transition from Nightly to Test Pilot, we left the underlying technology behind containers intact but made numerous changes to the presentation and user experience of the feature. Most notably, we did the following:

We moved the UI for Containers from the preferences into an icon in the browser’s toolbar.

We added an onboarding flow for first time users reiterating the value proposition(s) of the feature.

We added a Container on-boarding experience for the add-on.

We also enhanced the UI for users to manage their Containers and tabs, and introduced features to show, hide, and move a container’s tabs.

We created a new interface for editing Containers.
We added a Container detail panel to hide, show, move, and manage its tabs.

To increase discoverability and kick-start new users’ habits of opening container tabs, we also changed the tab bar behavior to show available containers as soon as the user hovered over the plus button.

Hovering over the tab-bar “+” button shows available Containers for a new tab.

For this experiment, we collected data to learn:

  • Do users install and run this?
  • How do users create container tabs?
  • Do users customize their containers?
  • (How) do users manage their container tabs?
  • and more
Here’s what we learned

Do users install and run this?

We hypothesized that Containers would be a popular feature for power-users. As expected, our overall installation number of ~10,000 users was lower than most other Test Pilot experiments. However, since the Containers feature is part of users’ critical path of actions (opening tabs), the daily active user ratio for Containers was higher than all other experiments.

(Note: The engagement data had already been truncated by the time of this post, so we’re unable to provide specific engagement ratio numbers here. But it was high!)

How do users create container tabs?

Our tab bar hover UI certainly led people to use it the most. In an early month of the experiment, 228K container tabs were opened via the tab bar — nearly 10x more than the next most popular source: the pop-up panel.

“tab-bar” was the source for nearly 10x as many container tabs as the panel pop-up

However, we quickly received a topic on our Mozilla Discourse and an issue in our GitHub project (again — Containers users tend to be power-users) saying that the hover UI was too intrusive. While we agreed that we would probably not “ship” Containers with the hover UI, we decided to keep it for a while to keep reminding new users about their Containers.

Do users customize their Containers?

We also measured how many times users clicked to add, edit, or delete Containers.

“edit-container” and “add-container” were clicked approx. 3 times per user

On average, “edit container” and “add container” were clicked approximately 3 times per user, suggesting that users do indeed customize their containers. But the clearest signal that users customize their Container experience is the number of tabs opened in each Container.

We shipped the experiment with 4 Containers: “Personal”, “Work”, “Finance”, and “Shopping” with User Context IDs 1–4. And while Personal (1) and Work (2) were the most common user context IDs for tabs, the next most popular user context IDs for tabs were 6 and 7 — custom containers!

Note: to protect users from revealing potentially personal information, we never recorded container names — only the context IDs

This told us that users indeed make their own Containers and open many tabs in their custom Containers!

(How) do users manage their container tabs?

In this experiment, we added the ability to hide, show, and sort Container tabs. So, we wanted to learn how many people would use those features.

Of our experiment users, 13% clicked to sort their container tabs, 3.4% clicked to hide a container’s tabs, and just 1.5% clicked to move a container’s tabs to a new window.

Building, Measuring, and Learning in Test Pilot

Containers users turned out to be tech-savvy and passionate. Many of them filed issues in our GitHub project for bugs and enhancements. We embraced this early in the experiment: we encouraged users to up-vote issues and we prioritized our changes by the most popular issues. These issues turned into important bug-fixes and great product enhancements.

New Feature: “Site Assignment”

From very early on, the most popular feature request was to assign sites to automatically open into a certain Container. It was first filed on March 2. In 2 weeks, Jonathan Kingston (the #1 contributor on this experiment!) had built a work-in-progress version of the feature, and in another 2 weeks we had refined it, and instrumented it with additional telemetry to measure its effects.

We added the ability to assign sites to always open into a certain Container.
To prevent accidental container tabs (and potential data leakage), we provide a confirmation dialog.

So, in Test Pilot, we were able to build, deploy, measure, and learn about a new feature in 1 month.

And its effects were dramatic. 9% of all users added at least 1 container assignment.

And those assignments resulted in 127k more container tabs!

In fact, as the experiment went on, the site assignment feature became the #1 most common source for container tabs — even more than the tab bar.

Site assignment became the #1 source for users opening Container tabs

This is a great improvement, because the more people use container tabs, the more their online activity is separated and protected from various kinds of tracking and hacking! The ability to build, measure, and learn during the course of a Test Pilot experiment is quite powerful.

New Bug-fixes

In addition to feature requests, we received many bug reports in our GitHub issues. While many of these were bugs in the add-on code, some of them were actually bugs in the underlying Firefox platform.

Over the course of the Test Pilot experiment, 36 underlying bugzilla bugs were resolved. The Test Pilot audience brought a fresh perspective to Containers that helped us to prioritize and fix some of the lower-level Firefox bugs. So, our Test Pilot work also contributed to the core Firefox feature.

New Features Forever: contextualIdentities API for developers

After adding some new features, it became obvious that we would not be able to solve every use-case for Containers. The architecture of Containers is designed first and foremost as a security and privacy technique. The effects for online accounts and tab management are not always on the top of the minds of Containers engineers. And many of the feature & change requests for tab management actually conflict: different people use Containers in different ways.

So, while experimenting with the front-end UI for Containers, the team also built out the contextualIdentities API for Firefox add-on developers. In fact, over its development lifecycle, the Test Pilot add-on used different libraries — Bootstrap, SDK, and Embedded WebExtension code — to maintain Firefox compatibility as the underlying contextualIdentities API matured. (Watch one of those videos of people changing the tires of a car while it’s moving and you’ll have an understanding of what it was like.)

Because the contextualIdentities API is available to developers, we are encouraging users to upvote enhancement issues, and encouraging add-on authors to look at those most popular enhancements as ideas for their own add-ons. There’s already a growing number of additional Containers add-ons, and Jonathan has started to curate them into a collection on addons.mozilla.org (AMO).
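To give a flavor of what add-on authors can do with it, here is a minimal sketch of a WebExtension that creates a container and opens a tab in it. The container name, color, and icon are just illustrative choices, and the extension’s manifest.json needs the “contextualIdentities” and “cookies” permissions.

```js
// Minimal sketch of using the contextualIdentities API from a WebExtension.
// Requires the "contextualIdentities" and "cookies" permissions.
async function openWorkTab(url) {
  // Reuse an existing "Work" container if there is one, otherwise create it.
  const existing = await browser.contextualIdentities.query({ name: 'Work' });
  const container = existing[0] ||
    await browser.contextualIdentities.create({
      name: 'Work',
      color: 'blue',
      icon: 'briefcase',
    });

  // Open the URL in a tab bound to that container's cookie store.
  return browser.tabs.create({
    url,
    cookieStoreId: container.cookieStoreId,
  });
}
```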

Here’s what happens next

Speaking of AMO … we are graduating the Containers add-on from Test Pilot to AMO. Our Test Pilot experiment demonstrated that Containers is a unique, innovative, and powerful feature for Firefox users. We also learned that — besides a few quick changes to make Containers more effective for everyone — different users have different use-cases for Containers.

So we’ll maintain the core experience — we will triage new issues as they are filed, and work on the most up-voted issues for our future releases. We’ll enhance the contextualIdentities API, and support and encourage add-on developers to create a wide array of Containers add-ons.

Thank you!

Thank you to all the Test Pilots who installed Containers, provided feedback, and filed issues for bugs and enhancements! We are very happy with the refined Containers experience we were able to launch on addons.mozilla.org.

Want to try a new experiment? Visit https://testpilot.firefox.com.

Luke Crouch, Privacy & Security Engineer

Tanvi Vyas, User Security & Privacy Tech Lead

Jonathan Kingston, Privacy & Security Engineer

John Gruen, Test Pilot Product Manager

Firefox Containers are Go! was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mozilla Cloud Services Blog: Why WebPush Doesn’t Allow Broadcast

Tue, 10/10/2017 - 19:24

One of the common questions we get working on the Web Push backend team is “How do I broadcast a Push message to all my customers?” The short answer is, you don’t.

In the early days, I used to say that Web Push is more like a doorbell than a walkie-talkie. Web Push is designed to send a timely message of interest from a website to a specific customer. Like a doorbell, it’s pretty much a one-to-one thing.

There’s a lot you can do once you make the decision to make things one-to-one rather than one-to-many. For instance, it’s very easy to do end-to-end encryption. When you encrypt a message, you make it so that only a certain number of people can read it. Ideally, a message should be readable by just two people: the person who created the message and the person who receives it. Right now, a message is encrypted by you for your recipient, and Mozilla can’t read it. We don’t have and will never see the key.
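For context, here is roughly what obtaining one of these one-to-one subscriptions looks like with the standard Push API in a page’s script. Each call produces a subscription with its own endpoint and its own encryption keys, which is part of why delivery is one-to-one. The '/push/register' URL is just a placeholder for your own server endpoint, and vapidPublicKey stands in for your application server key.

```js
// Minimal sketch: each push subscription is tied to one user agent and
// carries its own endpoint and encryption keys.
// vapidPublicKey is assumed to be your application server key (Uint8Array);
// '/push/register' is a placeholder endpoint on your own server.
async function subscribeToPush(vapidPublicKey) {
  const registration = await navigator.serviceWorker.ready;
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true,
    applicationServerKey: vapidPublicKey,
  });
  // Hand the per-user endpoint and keys to your server so it can send
  // messages to this one recipient.
  await fetch('/push/register', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(subscription),
  });
  return subscription;
}
```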

You can share the message with a group by sharing the key, but with every share, you run the risk of the key leaking to someone you don’t want to have it. On my wall at work, there are two pictures. One is of the TSA luggage security keys, the other is of a Yale 1620 key. The second one you may not have heard about. The 1620 is the master firefighter key for much of New York City, and many firefighters and building supervisors have a copy. Technically, it’s against the law to have an unauthorized copy, but that doesn’t stop many folks from acquiring a copy or some publications from printing very high definition versions so you can make them at home with a blank and a metal file. It’s a good example of having encryption that’s not really encryption. We want to avoid that kind of situation.

There are other issues at hand with doing a “broadcast”. One of the bigger ones is that “broadcast” has already been solved, every time you go to a web page. Web pages can be delivered securely via any number of means, and there are a whole host of existing protocols and procedures in place that make delivery fast and safe. How a browser knows to check a given page is a bit fuzzy, but again, there are hosts of protocols and functions in place to make that as lightweight as possible.

An important consideration for broadcasts (and one-to-one messages too): when do they need to arrive? Now? Soon? What does that really mean in the context of your app? Our system tries hard to deliver messages quickly, but we will never deliver them instantly. Likewise, there are all sorts of reasons that a device may not get a message quickly. The device may be off, out of range, or traveling and have no net access for the next few hours. Once a device is back online, it will try to reconnect and retrieve messages, but even this is essentially polling, and again, there are long-established methods for doing these sorts of things. Determining how soon “now” needs to be may help determine when your app really needs to poll for the broadcast elements.

Much like a doorbell or Philips head screwdriver, Web push is a tool for a specific task. It’s possible to use it for other tasks, but it’s ill suited and there are far better tools available.

If you’re interested in some of the more technical details, you can read much of the lively discussion that was held among the working group, as well as a preliminary draft for a webpush-aggregation service.


Hacks.Mozilla.Org: The whole web at maximum FPS: How WebRender gets rid of jank

Tue, 10/10/2017 - 17:00

The Firefox Quantum release is getting close. It brings many performance improvements, including the super fast CSS engine that we brought over from Servo.

But there’s another big piece of Servo technology that’s not in Firefox Quantum quite yet, though it’s coming soon. That’s WebRender, which is being added to Firefox as part of the Quantum Render project.

Drawing of a jet engine labeled with the different Project Quantum projects

WebRender is known for being extremely fast. But WebRender isn’t really about making rendering faster. It’s about making it smoother.

With WebRender, we want apps to run at a silky smooth 60 frames per second (FPS) or better no matter how big the display is or how much of the page is changing from frame to frame. And it works. Pages that chug along at 15 FPS in Chrome or today’s Firefox run at 60 FPS with WebRender.

So how does WebRender do that? It fundamentally changes the way the rendering engine works to make it more like a 3D game engine.

Let’s take a look at what this means. But first…

What does a renderer do?

In the article on Stylo, I talked about how the browser goes from HTML and CSS to pixels on the screen, and how most browsers do this in five steps.

We can split these five steps into two halves. The first half basically builds up a plan. To make this plan, it combines the HTML and CSS with information like the viewport size to figure out exactly what each element should look like—its width, height, color, etc. The end result is something called a frame tree or a render tree.

The second half—painting and compositing—is what a renderer does. It takes that plan and turns it into pixels to display on the screen.

Diagram dividing the 5 stages of rendering into two groups, with a frame tree being passed from part 1 to part 2

But the browser doesn’t just have to do this once for a web page. It has to do it over and over again for the same web page. Any time something changes on this page—for example, a div is toggled open—the browser has to go through a lot of these steps.

Diagram: style, layout, paint, and composite

Even in cases where nothing’s really changing on the page—for example where you’re scrolling or where you are highlighting some text on the page—the browser still has to go through at least some of the second part again to draw new pixels on the screen.

Diagram: composite

If you want things like scrolling or animation to look smooth, they need to be going at 60 frames per second.

You may have heard this phrase—frames per second (FPS)—before, without being sure what it meant. I think of this like a flip book. It’s like a book of drawings that are static, but you can use your thumb to flip through so that it looks like the pages are animated.

In order for the animation in this flip book to look smooth, you need to have 60 pages for every second in the animation.

Picture of a flipbook with a smooth animation next to it

The pages in this flip book are made out of graph paper. There are lots and lots of little squares, and each of the squares can only contain one color.

The job of the renderer is to fill in the boxes in this graph paper. Once all of the boxes in the graph paper are filled in, it is finished rendering the frame.

Now, of course, there is no actual graph paper inside your computer. Instead, there’s a section of memory in the computer called a frame buffer. Each memory address in the frame buffer is like a box in the graph paper… it corresponds to a pixel on the screen. The browser will fill in each slot with the numbers that represent the color in RGBA (red, green, blue, and alpha) values.

A stack of memory addresses with RGBA values that are correlated to squares in a grid (pixels)
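You can’t poke at the browser’s frame buffer from a web page, but the Canvas ImageData API uses the same layout, so here is a small sketch of filling a buffer of RGBA values by hand:

```js
// Small illustration of a pixel buffer: each pixel is four bytes,
// R, G, B, and A, laid out one after another.
const canvas = document.createElement('canvas');
canvas.width = canvas.height = 100;
const ctx = canvas.getContext('2d');
const image = ctx.createImageData(canvas.width, canvas.height);

// Fill every pixel with opaque red by writing its RGBA slots.
for (let i = 0; i < image.data.length; i += 4) {
  image.data[i]     = 255; // red
  image.data[i + 1] = 0;   // green
  image.data[i + 2] = 0;   // blue
  image.data[i + 3] = 255; // alpha (fully opaque)
}
ctx.putImageData(image, 0, 0);
```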

When the display needs to refresh itself, it will look at this section of memory.

Most computer displays will refresh 60 times per second. This is why browsers try to render pages at 60 frames per second. That means the browser has 16.67 milliseconds to do all of the setup—CSS styling, layout, painting—and fill in all of the slots in the frame buffer with pixel colors. This time frame between two frames (16.67 ms) is called the frame budget.

Sometimes you hear people talk about dropped frames. A dropped frame is when the system doesn’t finish its work within the frame budget. The display tries to get the new frame from the frame buffer before the browser is done filling it in. In this case, the display shows the old version of the frame again.
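As a rough illustration, a page can watch for dropped frames itself with requestAnimationFrame: if the gap between callbacks is much bigger than the roughly 16.67 ms budget, at least one frame was missed.

```js
// requestAnimationFrame fires once per display refresh, so a gap much
// larger than the ~16.67 ms budget suggests a dropped frame.
const FRAME_BUDGET_MS = 1000 / 60;
let last = performance.now();

function checkFrame(now) {
  const elapsed = now - last;
  if (elapsed > FRAME_BUDGET_MS * 1.5) {
    console.log(`Probable dropped frame: ${elapsed.toFixed(1)} ms since last frame`);
  }
  last = now;
  requestAnimationFrame(checkFrame);
}
requestAnimationFrame(checkFrame);
```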

A dropped frame is kind of like if you tore a page out of that flip book. It would make the animation seem to stutter or jump because you’re missing the transition between the previous page and the next.

Picture of a flipbook missing a page with a janky animation next to it

So we want to make sure that we get all of these pixels into the frame buffer before the display checks it again. Let’s look at how browsers have historically done this, and how that has changed over time. Then we can see how we can make this faster.

A brief history of painting and compositing

Note: Painting and compositing is where browser rendering engines are the most different from each other. Single-platform browsers (Edge and Safari) work a bit differently than multi-platform browsers (Firefox and Chrome) do.

Even in the earliest browsers, there were some optimizations to make pages render faster. For example, if you were scrolling content, the browser would keep the part that was still visible and move it. Then it would paint new pixels in the blank spot.

This process of figuring out what has changed and then only updating the changed elements or pixels is called invalidation.

As time went on, browsers started applying more invalidation techniques, like rectangle invalidation. With rectangle invalidation, you figure out the smallest rectangle around each part of the screen that changed. Then, you only redraw what’s inside those rectangles.

This really reduces the amount of work that you need to do when there’s not much changing on the page… for example, when you have a single blinking cursor.

Blinking cursor with small repaint rectangle around it

But that doesn’t help much when large parts of the page are changing. So the browsers came up with new techniques to handle those cases.

Introducing layers and compositing

Using layers can help a lot when large parts of the page are changing… at least, in certain cases.

The layers in browsers are a lot like layers in Photoshop, or the onion skin layers that were used in hand-drawn animation. Basically, you paint different elements of the page on different layers. Then you place those layers on top of each other.

They have been a part of the browser for a long time, but they weren’t always used to speed things up. At first, they were just used to make sure pages rendered correctly. They corresponded to something called stacking contexts.

For example, if you had a translucent element, it would be in its own stacking context. That meant it got its own layer so you could blend its color with the color below it. These layers were thrown out as soon as the frame was done. On the next frame, all the layers would be repainted again.

Layers for opacity generated, then frame rendered, then thrown out

But often the things on these layers didn’t change from frame to frame. For example, think of a traditional animation. The background doesn’t change, even if the characters in the foreground do. It’s a lot more efficient to keep that background layer around and just reuse it.

So that’s what browsers did. They retained the layers. Then the browser could just repaint layers that had changed. And in some cases, layers weren’t even changing. They just needed to be rearranged—for example, if an animation was moving across the screen, or something was being scrolled.

Two layers moving relative to each other as a scroll box is scrolled

This process of arranging layers together is called compositing. The compositor starts with:

  • source bitmaps: the background (including a blank box where the scrollable content should be) and the scrollable content itself
  • a destination bitmap, which is what gets displayed on the screen

First, the compositor would copy the background to the destination bitmap.

Then it would figure out what part of the scrollable content should be showing. It would copy that part over to the destination bitmap.

Source bitmaps on the left, destination bitmap on the right
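Here is a toy sketch of that copy step using the Canvas API. The background, content, and destination canvases and the viewport rectangle are stand-ins for the compositor’s real bitmaps, not anything the browser exposes.

```js
// Toy version of the compositor's copy step: copy the background into
// the destination, then copy the visible slice of the scrollable content
// on top of it. `background`, `content`, and `destination` are assumed
// to be canvas elements; `scrollTop` is how far the content is scrolled.
function composite(destination, background, content, scrollTop, viewport) {
  const ctx = destination.getContext('2d');

  // 1. Copy the background bitmap to the destination bitmap.
  ctx.drawImage(background, 0, 0);

  // 2. Copy the visible part of the scrollable content into its slot.
  ctx.drawImage(
    content,
    0, scrollTop, viewport.width, viewport.height,           // source rectangle
    viewport.x, viewport.y, viewport.width, viewport.height  // destination rectangle
  );
}
```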

This reduced the amount of painting that the main thread had to do. But it still meant that the main thread was spending a lot of time on compositing. And there are lots of things competing for time on the main thread.

I’ve talked about this before, but the main thread is kind of like a full-stack developer. It’s in charge of the DOM, layout, and JavaScript. And it also was in charge of painting and compositing.

Main thread doing DOM, JS, and layout, plus paint and composite

Every millisecond the main thread spends doing paint and composite is time it can’t spend on JavaScript or layout.

CPU working on painting and thinking "I really should get to that JS soon"

But there was another part of the hardware that was lying around without much work to do. And this hardware was specifically built for graphics. That was the GPU, which games have been using since the late 90s to render frames quickly. And GPUs have been getting bigger and more powerful ever since then.

A drawing of a computer chip with 4 CPU cores and a GPU

GPU accelerated compositing

So browser developers started moving things over to the GPU.

There are two tasks that could potentially move over to the GPU:

  1. Painting the layers
  2. Compositing them together

It can be hard to move painting to the GPU. So for the most part, multi-platform browsers kept painting on the CPU.

But compositing was something that the GPU could do very quickly, and it was easy to move over to the GPU.

Main thread passing layers to GPU

Some browsers took this parallelism even further and added a compositor thread on the CPU. It became a manager for the compositing work that was happening on the GPU. This meant that if the main thread was doing something (like running JavaScript), the compositor thread could still handle things for the user, like scrolling content up when the user scrolled.

Compositor thread sitting between main thread and GPU, passing layers to GPU

So this moves all of the compositing work off of the main thread. It still leaves a lot of work on the main thread, though. Whenever we need to repaint a layer, the main thread needs to do it, and then transfer that layer over to the GPU.

Some browsers moved painting off to another thread (and we’re working on that in Firefox today). But it’s even faster to move this last little bit of work — painting — to the GPU.

GPU accelerated painting

So browsers started moving painting to the GPU, too.

Paint and composite handled by the GPU

Browsers are still in the process of making this shift. Some browsers paint on the GPU all of the time, while others only do it on certain platforms (like only on Windows, or only on mobile devices).

Painting on the GPU does a few things. It frees up the CPU to spend all of its time doing things like JavaScript and layout. Plus, GPUs are much faster at drawing pixels than CPUs are, so it speeds painting up. It also means less data needs to be copied from the CPU to the GPU.

But maintaining this division between paint and composite still has some costs, even when they are both on the GPU. This division also limits the kinds of optimizations that you can use to make the GPU do its work faster.

This is where WebRender comes in. It fundamentally changes the way we render, removing the distinction between paint and composite. This gives us a way to tailor the performance of our renderer to give you the best user experience on today’s web, and to best support the use cases that you will see on tomorrow’s web.

This means we don’t just want to make frames render faster… we want to make them render more consistently and without jank. And even when there are lots of pixels to draw, like on 4k displays or WebVR headsets, we still want the experience to be just as smooth.

When do current browsers get janky?

The optimizations above have helped pages render faster in certain cases. When not much is changing on a page—for example, when there’s just a single blinking cursor—the browser will do the least amount of work possible.

Blinking cursor with small repaint rectangle around it

Breaking up pages into layers has expanded the number of those best-case scenarios. If you can paint a few layers and then just move them around relative to each other, then the painting+compositing architecture works well.

Rotating clock hand as a layer on top of another layer

But there are also trade-offs to using layers. They take up a lot of memory and can actually make things slower. Browsers need to combine layers where it makes sense… but it’s hard to tell where it makes sense.

This means that if there are a lot of different things moving on the page, you can end up with too many layers. These layers fill up memory and take too long to transfer to the compositor.

Many layers on top of each other

Other times, you’ll end up with one layer when you should have multiple layers. That single layer will be continually repainted and transferred to the compositor, which then composites it without changing anything.

This means you’ve doubled the amount of drawing you have to do, touching each pixel twice without getting any benefit. It would have been faster to simply render the page directly, without the compositing step.

Paint and composite producing the same bitmap

And there are lots of cases where layers just don’t help much. For example, if you animate background color, the whole layer has to be repainted anyway. These layers only help with a small number of CSS properties.

Even if most of your frames are best-case scenarios—that is, they only take up a tiny bit of the frame budget—you can still get choppy motion. For perceptible jank, only a couple of frames need to fall into worst-case scenarios.

Frame timeline with a few frames that go over the frame budget, causing jank

These scenarios are called performance cliffs. Your app seems to be moving along fine until it hits one of these worst-case scenarios (like animating background color) and all of the sudden your app’s frame rate topples over the edge.

Person falling over the edge of a cliff labeled animating background color

But we can get rid of these performance cliffs.

How do we do this? We follow the lead of 3D game engines.

Using the GPU like a game engine

What if we stopped trying to guess what layers we need? What if we removed this boundary between painting and compositing and just went back to painting every pixel on every frame?

This may sound like a ridiculous idea, but it actually has some precedent. Modern day video games repaint every pixel, and they maintain 60 frames per second more reliably than browsers do. And they do it in an unexpected way… instead of creating these invalidation rectangles and layers to minimize what they need to paint, they just repaint the whole screen.

Wouldn’t rendering a web page like that be way slower?

If we paint on the CPU, it would be. But GPUs are designed to make this work.

GPUs are built for extreme parallelism. I talked about parallelism in my last article about Stylo. With parallelism, the machine can do multiple things at the same time. The number of things it can do at once is limited by the number of cores that it has.

CPUs usually have between 2 and 8 cores. GPUs usually have at least a few hundred cores, and often more than 1,000 cores.

These cores work a little differently, though. They can’t act completely independently like CPU cores can. Instead, they usually work on something together, running the same instruction on different pieces of the data.

CPU cores working independently, GPU cores working together

This is exactly what you need when you’re filling in pixels. Each pixel can be filled in by a different core. Because it can work on hundreds of pixels at a time, the GPU is a lot faster at filling in pixels than the CPU… but only if you make sure all of those cores have work to do.

Because cores need to work on the same thing at the same time, GPUs have a pretty rigid set of steps that they go through, and their APIs are pretty constrained. Let’s take a look at how this works.

First, you need to tell the GPU what to draw. This means giving it shapes and telling it how to fill them in.

To do this, you break up your drawing into simple shapes (usually triangles). These shapes are in 3D space, so some shapes can be behind others. Then you take all of the corners of those triangles and put their x, y, and z coordinates into an array.

Then you issue a draw call—you tell the GPU to draw those shapes.

CPU passing triangle coordinates to GPU
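Here is what that looks like in miniature with WebGL. WebRender itself drives the GPU from Rust, so this is only an illustration of the mechanics: put the corners’ coordinates into a flat array, hand it to the GPU, and issue a draw call. (The shaders here are the minimum needed to draw anything; shaders are explained below.)

```js
// Minimal WebGL sketch: upload triangle corners and issue one draw call.
// The canvas isn't attached to the page, so nothing is displayed; this
// just shows the mechanics of talking to the GPU.
const gl = document.createElement('canvas').getContext('webgl');

const vertexSrc = `
  attribute vec3 a_position;
  void main() { gl_Position = vec4(a_position, 1.0); }
`;
const fragmentSrc = `
  precision mediump float;
  void main() { gl_FragColor = vec4(0.0, 0.5, 1.0, 1.0); } // one flat color
`;

function compile(type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  return shader;
}

const program = gl.createProgram();
gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSrc));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSrc));
gl.linkProgram(program);
gl.useProgram(program);

// The corners of one triangle, as x, y, z coordinates in one flat array.
const corners = new Float32Array([
   0.0,  0.5, 0.0,
  -0.5, -0.5, 0.0,
   0.5, -0.5, 0.0,
]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, corners, gl.STATIC_DRAW);

const position = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(position);
gl.vertexAttribPointer(position, 3, gl.FLOAT, false, 0, 0);

// The draw call: tell the GPU to draw those shapes.
gl.drawArrays(gl.TRIANGLES, 0, 3);
```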

From there, the GPU takes over. All of the cores will work on the same thing at the same time. They will:

  1. Figure out where all of the corners of the shapes are. This is called vertex shading. (Image: GPU cores drawing vertexes on a graph)
  2. Figure out the lines that connect those corners. From this, you can figure out which pixels are covered by the shape. That’s called rasterization. (Image: GPU cores drawing lines between vertexes)
  3. Now that we know what pixels are covered by a shape, go through each pixel in the shape and figure out what color it should be. This is called pixel shading. (Image: GPU cores filling in pixels)

This last step can be done in different ways. To tell the GPU how to do it, you give the GPU a program called a pixel shader. Pixel shading is one of the few parts of the GPU that you can program.

Some pixel shaders are simple. For example, if your shape is a single color, then your shader program just needs to return that color for each pixel in the shape.

Other times, it’s more complex, like when you have a background image. You need to figure out which part of the image corresponds to each pixel. You can do this in the same way an artist scales an image up or down… put a grid on top of the image that corresponds to each pixel. Then, once you know which box corresponds to the pixel, take samples of the colors inside that box and figure out what the color should be. This is called texture mapping because it maps the image (called a texture) to the pixels.

Hi-res image being mapped to a much lower resolution space
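In GLSL (the shading language WebGL uses), a texture-mapping pixel shader is only a few lines. This is a generic sketch, not WebRender’s actual shader code:

```js
// A texture-mapping fragment shader (GLSL source kept in a JS string):
// for each pixel, look up the matching spot in the image (the texture)
// and use that color.
const textureFragmentSrc = `
  precision mediump float;
  varying vec2 v_texCoord;      // which point of the image this pixel maps to
  uniform sampler2D u_texture;  // the image being mapped onto the shape
  void main() {
    gl_FragColor = texture2D(u_texture, v_texCoord);
  }
`;
```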

The GPU will call your pixel shader program on each pixel. Different cores will work on different pixels at the same time, in parallel, but they all need to be using the same pixel shader program. When you tell the GPU to draw your shapes, you tell it which pixel shader to use.

For almost any web page, different parts of the page will need to use different pixel shaders.

Because the shader applies to all of the shapes in the draw call, you usually have to break up your draw calls into multiple groups. These are called batches. To keep all of the cores as busy as possible, you want to create a small number of batches which have lots of shapes in them.

CPU passing a box containing lots of coordinates and a pixel shader to the GPU

So that’s how the GPU splits up work across hundreds or thousands of cores. It’s only because of this extreme parallelism that we can think of rendering everything on each frame. Even with the extreme parallelism, though, it’s still a lot of work. You still need to be smart about how you do this. Here’s where WebRender comes in…

How WebRender works with the GPU

Let’s go back to look at the steps the browser goes through to render the page. Two things will change here.

Diagram showing the stages of the rendering pipeline with two changes. The frame tree is now a display list, and paint and composite have been combined into Render.

  1. There’s no longer a distinction between paint and composite… they are both part of the same step. The GPU does them at the same time based on the graphics API commands that were passed to it.
  2. Layout now gives us a different data structure to render. Before, it was something called a frame tree (or render tree in Chrome). Now, it passes off a display list.

The display list is a set of high-level drawing instructions. It tells us what we need to draw without being specific to any graphics API.
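If it helps to picture it, a display list is conceptually just a flat list of high-level items, something like the sketch below. This is purely illustrative; WebRender’s real display list is a compact structure built in Rust, not JavaScript objects.

```js
// Illustrative sketch only: what a display list's items might conceptually
// hold, not WebRender's actual data format.
const displayList = [
  { kind: 'rectangle', bounds: { x: 0,  y: 0,  w: 800, h: 600 }, color: '#ffffff' },
  { kind: 'text',      bounds: { x: 20, y: 20, w: 300, h: 16  }, glyphs: [/* ... */] },
  { kind: 'image',     bounds: { x: 20, y: 60, w: 320, h: 240 }, imageKey: 42 },
  { kind: 'border',    bounds: { x: 20, y: 60, w: 320, h: 240 }, widths: [1, 1, 1, 1] },
];
```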

Whenever there’s something new to draw, the main thread gives that display list to the RenderBackend, which is WebRender code that runs on the CPU.

The RenderBackend’s job is to take this list of high-level drawing instructions and convert it to the draw calls that the GPU needs, which are batched together to make them run faster.

Diagram of the 4 different threads, with a RenderBackend thread between the main thread and compositor thread. The RenderBackend thread translates the display list into batched draw calls

Then the RenderBackend will pass those batches off to the compositor thread, which passes them to the GPU.

The RenderBackend wants to make the draw calls it’s giving to the GPU as fast to run as possible. It uses a few different techniques for this.

Removing any unnecessary shapes from the list (Early culling)

The best way to save time is to not do the work at all.

First, the RenderBackend cuts down the list of display items. It figures out which display items will actually be on the screen. To do this, it looks at things like how far down the scroll is for each scroll box.

If any part of a shape is inside the box, then it is included. If none of the shape would have shown up on the page, though, it’s removed. This process is called early culling.

A browser window with some parts off screen. Next to that is a display list with the offscreen elements removed
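In pseudocode terms (reusing the illustrative item shape from the sketch above), early culling is just an intersection test against the visible rectangle:

```js
// Keep only the display items whose bounds intersect the visible part
// of the page; drop everything else.
function intersects(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function cull(displayList, visibleRect) {
  return displayList.filter(item => intersects(item.bounds, visibleRect));
}
```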

Minimizing the number of intermediate textures (The render task tree)

Now we have a tree that only contains the shapes we’ll use. This tree is organized into those stacking contexts we talked about before.

Effects like CSS filters and stacking contexts make things a little complicated. For example, let’s say you have an element that has an opacity of 0.5 and it has children. You might think that each child is transparent… but it’s actually the whole group that’s transparent.

Three overlapping boxes that are translucent, so they show through each other, next to a translucent shape formed by the three boxes where the boxes don't show through each other

Because of this, you need to render the group out to a texture first, with each box at full opacity. Then, when you’re placing it in the parent, you can change the opacity of the whole texture.

These stacking contexts can be nested… that parent might be part of another stacking context. Which means it has to be rendered out to another intermediate texture, and so on.

Creating the space for these textures is expensive. As much as possible, we want to group things into the same intermediate texture.

To help the GPU do this, we create a render task tree. With it, we know which textures need to be created before other textures. Any textures that don’t depend on others can be created in the first pass, which means they can be grouped together in the same intermediate texture.

So in the example above, we’d first do a pass to output one corner of a box shadow. (It’s slightly more complicated than this, but this is the gist.)

A 3-level tree with a root, then an opacity child, which has three box shadow children. Next to that is a render target with a box shadow corner

In the second pass, we can mirror this corner all around the box to place the box shadow on the boxes. Then we can render out the group at full opacity.

Same 3-level tree with a render target with the 3 box shape at full opacity

Next, all we need to do is change the opacity of this texture and place it where it needs to go in the final texture that will be output to the screen.

Same tree with the destination target showing the 3 box shape at decreased opacity

By building up this render task tree, we figure out the minimum number of offscreen render targets we can use. That’s good, because as I mentioned, creating the space for these render target textures is expensive.

It also helps us batch things together.
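
As a rough illustration of the render task tree idea, here is a small Rust sketch that assigns tasks to passes: a task can only run after all of its children have produced their textures, so it lands in the pass after its deepest child, and tasks that end up in the same pass can share an intermediate texture. The RenderTask type here is made up and far simpler than the real one.

    struct RenderTask {
        name: &'static str,
        children: Vec<RenderTask>,
    }

    // Returns the pass index for `task` and records its name in that pass.
    fn assign_pass(task: &RenderTask, passes: &mut Vec<Vec<&'static str>>) -> usize {
        // A task runs one pass after its deepest child.
        let pass = task
            .children
            .iter()
            .map(|child| assign_pass(child, passes) + 1)
            .max()
            .unwrap_or(0);
        if passes.len() <= pass {
            passes.resize_with(pass + 1, Vec::new);
        }
        passes[pass].push(task.name);
        pass
    }

In the box shadow example above, the shadow corner would land in pass 0, the full-opacity group in pass 1, and the final frame, where the group’s opacity is applied, would come last.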

Grouping draw calls together (Batching)

As we talked about before, we need to create a small number of batches which have lots of shapes in them.

Paying attention to how you create batches can really speed things up. You want to have as many shapes in the same batch as you can. This is for a couple of reasons.

First, whenever the CPU tells the GPU to do a draw call, the CPU has to do a lot of work. It has to do things like set up the GPU, upload the shader program, and test for different hardware bugs. This work adds up, and while the CPU is doing this work, the GPU might be idle.

Second, there’s a cost to changing state. Let’s say that you need to change the shader program between batches. On a typical GPU, you need to wait until all of the cores are done with the current shader. This is called draining the pipeline. Until the pipeline is drained, other cores will be sitting idle.

Multiple GPU cores standing around while one finishes with the previous pixel shader

Because of this, you want to batch as much as possible. For a typical desktop PC, you want to have 100 draw calls or fewer per frame, and you want each call to have thousands of vertices. That way, you’re making the best use of the parallelism.

We look at each pass from the render task tree and figure out what we can batch together.

At the moment, each of the different kinds of primitives requires a different shader. For example, there’s a border shader, and a text shader, and an image shader.

 

Boxes labeled with the type of batch they contain (e.g. Borders, Images, Rectangles)

We believe we can combine a lot of these shaders, which will allow us to have even bigger batches, but this is already pretty well batched.
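
A sketch of the batching step, assuming each primitive knows which kind of shader it needs; the BatchKind and Primitive types are invented for illustration.

    use std::collections::HashMap;

    // Hypothetical: the shader a primitive needs.
    #[derive(PartialEq, Eq, Hash, Clone, Copy)]
    enum BatchKind { Rectangle, Border, Text, Image }

    struct Primitive {
        kind: BatchKind,
        // vertices, colors, etc. would live here
    }

    // Primitives that share a shader go into the same batch so they can be
    // submitted as a single draw call.
    fn build_batches(prims: &[Primitive]) -> HashMap<BatchKind, Vec<&Primitive>> {
        let mut batches: HashMap<BatchKind, Vec<&Primitive>> = HashMap::new();
        for prim in prims {
            batches.entry(prim.kind).or_default().push(prim);
        }
        batches
    }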

We’re almost ready to send it off to the GPU. But there’s a little bit more work we can eliminate.

Reducing pixel shading with opaque and alpha passes (Z-culling)

Most web pages have lots of shapes overlapping each other. For example, a text field sits on top of a div (with a background) which sits on top of the body (with another background).

When it’s figuring out the color for a pixel, the GPU could figure out the color of the pixel in each shape. But only the top layer is going to show. This is called overdraw and it wastes GPU time.

3 layers on top of each other with a single overlapping pixel called out across all three layers

So one thing you could do is render the top shape first. For the next shape, when you get to that same pixel, check whether or not there’s already a value for it. If there is, then don’t do the work.

3 layers where the overlapping pixel isn't filled in on the 2 bottom layers

There’s a little bit of a problem with this, though. Whenever a shape is translucent, you need to blend the colors of the two shapes. And in order for it to look right, that needs to happen back to front.

So what we do is split the work into two passes. First, we do the opaque pass. We go front to back and render all of the opaque shapes. We skip any pixels that are behind others.

Then, we do the translucent shapes. These are rendered back to front. If a translucent pixel falls on top of an opaque one, it gets blended into the opaque one. If it would fall behind an opaque shape, it doesn’t get calculated.

This process of splitting the work into opaque and alpha passes and then skipping pixel calculations that you don’t need is called Z-culling.
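
In code, the split might look something like this sketch: opaque primitives sorted front to back so the depth buffer can reject hidden pixels, translucent ones sorted back to front so blending happens in the right order. The PassPrim type and its z_index field are assumptions made for the example.

    struct PassPrim {
        z_index: u32,   // larger = closer to the viewer
        is_opaque: bool,
    }

    fn split_into_passes(prims: Vec<PassPrim>) -> (Vec<PassPrim>, Vec<PassPrim>) {
        let (mut opaque, mut alpha): (Vec<PassPrim>, Vec<PassPrim>) =
            prims.into_iter().partition(|p| p.is_opaque);
        // Opaque pass: front to back, so pixels hidden behind ones that were
        // already drawn fail the depth test and are never shaded.
        opaque.sort_by(|a, b| b.z_index.cmp(&a.z_index));
        // Alpha pass: back to front, so translucent colors blend correctly.
        alpha.sort_by(|a, b| a.z_index.cmp(&b.z_index));
        (opaque, alpha)
    }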

While it may seem like a simple optimization, this has produced very big wins for us. On a typical web page, it vastly reduces the number of pixels that we need to touch, and we’re currently looking at ways to move more work to the opaque pass.

At this point, we’ve prepared the frame. We’ve done as much as we can to eliminate work.

… And we’re ready to draw!

We’re ready to set up the GPU and render our batches.

Diagram of the 4 threads with compositor thread passing off opaque pass and alpha pass to GPU

A caveat: not everything is on the GPU yet

The CPU still has to do some painting work. For example, we still render the characters (called glyphs) that are used in blocks of text on the CPU. It’s possible to do this on the GPU, but it’s hard to get a pixel-for-pixel match with the glyphs that the computer renders in other applications. So people can find it disorienting to see GPU-rendered fonts. We are experimenting with moving things like glyphs to the GPU with the Pathfinder project.

For now, these things get painted into bitmaps on the CPU. Then they are uploaded to something called the texture cache on the GPU. This cache is kept around from frame to frame because the bitmaps usually don’t change.

Even though this painting work is staying on the CPU, we can still make it faster than it is now. For example, when we’re painting the characters in a font, we split up the different characters across all of the cores. We do this using the same technique that Stylo uses to parallelize style computation… work stealing.
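
With a work-stealing thread pool such as the one in the rayon crate, that parallel glyph rasterization could be sketched like this. The Glyph and GlyphBitmap types and the rasterize_glyph function are placeholders, not Gecko’s actual code.

    use rayon::prelude::*;   // assumes the rayon crate is available

    struct Glyph { id: u32 }
    struct GlyphBitmap { /* pixels would go here */ }

    // Stand-in for the real CPU rasterization work.
    fn rasterize_glyph(glyph: &Glyph) -> GlyphBitmap {
        let _ = glyph.id;
        GlyphBitmap {}
    }

    // Rayon's work-stealing thread pool spreads the glyphs across all of the
    // cores; idle cores steal work from the busy ones.
    fn rasterize_all(glyphs: &[Glyph]) -> Vec<GlyphBitmap> {
        glyphs.par_iter().map(rasterize_glyph).collect()
    }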

What’s next for WebRender?

We look forward to landing WebRender in Firefox as part of Quantum Render in 2018, a few releases after the initial Firefox Quantum release. This will make today’s pages run more smoothly. It also gets Firefox ready for the new wave of high-resolution 4K displays, because rendering performance becomes more critical as you increase the number of pixels on the screen.

But WebRender isn’t just useful for Firefox. It’s also critical to the work we’re doing with WebVR, where you need to render a different frame for each eye at 90 FPS at 4K resolution.

An early version of WebRender is currently available behind a flag in Firefox. Integration work is still in progress, so the performance is currently not as good as it will be when that is complete. If you want to keep up with WebRender development, you can follow the GitHub repo, or follow Firefox Nightly on Twitter for weekly updates on the whole Quantum Render project.

Categorieën: Mozilla-nl planet

Air Mozilla: Martes Mozilleros, 10 Oct 2017

di, 10/10/2017 - 17:00

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Categorieën: Mozilla-nl planet


Mozilla GFX: WebRender newsletter #7

di, 10/10/2017 - 15:10

We are making steady progress on WebRender and its Gecko integration. This newsletter doesn’t show much of the higher-level work happening in the background, so I’ll drop a few notes about it here:

I have been working for a while on getting the architecture in place to respect frame consistency, a rule that in a nutshell says: “Two changes happening in the page within the same turn of the JS event loop (say, moving a DOM element and painting into a canvas) should be visible in the same frame”. The infrastructure for this is now mostly in place, and some work remains to make sure that changes belonging to a given transaction are actually added to that transaction rather than sent to the renderer over asynchronous channels.
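
As a loose illustration of the frame-consistency idea, a transaction could be modeled as a container that accumulates every change made during one turn of the event loop and hands them to the renderer together. This is only a sketch; the Change and Transaction types are invented and do not reflect Gecko’s or WebRender’s actual API.

    enum Change {
        MoveElement { id: u64, x: f32, y: f32 },
        PaintCanvas { id: u64 },
    }

    struct Transaction {
        changes: Vec<Change>,
    }

    impl Transaction {
        fn new() -> Self {
            Transaction { changes: Vec::new() }
        }

        // Changes made during the same turn of the event loop pile up here...
        fn push(&mut self, change: Change) {
            self.changes.push(change);
        }

        // ...and are handed to the renderer in one go, so they all show up
        // in the same frame.
        fn commit(self) -> Vec<Change> {
            self.changes
        }
    }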

There is also an ongoing investigation into how to reduce the overhead of serializing and deserializing very large display lists. In the process of this investigation, bugs were filed against WebRender, Gecko, Serde, rustc, and LLVM itself. Lots of fun.

On WebRender’s side, we are looking into building scenes asynchronously so that the smoothness of scrolling and animations is never impacted by it, and investigating new optimizations to move more primitives to the opaque pass and take advantage of z-culling more aggressively.

Notable WebRender changes
  • Jeff improved the performance of display list serialization and deserialization in #1799 and #1830 (hasn’t landed in Gecko yet).
  • Markus worked around yet another driver bug on Mac.
  • Nical improved the quality of the border corner antialiasing.
Notable Gecko changes
  • WebRender display list creation speedups:
    • WebRender UserData property lookup optimized.
    • We preallocate the buffer used for display list building.
    • Avoid a completely unnecessary copy of the display list.
    • Gankro removed text layers and the client-side glyph cache (hasn’t landed in Gecko yet).
    • Jeff removed a largely unneeded call to nsDisplayBackgroundColor::GetLayerState.
    • The old layer-full code is gone.
  • We support doing empty transactions properly.
  • Gankro prevented zero-width space characters from triggering the fallback rendering path.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 203

di, 10/10/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

Despite there being no votes, this week’s crate is abrute, a crate to brute-force AES keys. Thanks to Daniel P. Clark for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

117 pull requests were merged in the last week

New Contributors
  • Andreas Jonson
  • Barret Rennie
  • Garrett Berg
  • hinaria
  • James Munns
  • Kevin Hunter Kesling
  • leavehouse
  • Maik Klein
  • mchlrhw
  • Nikolai Vazquez
  • Niv Kaminer
  • Pirh
  • Stephane Raux
  • Suriyaa ✌️️
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

The long compile breaks give me time to focus on TV.

/u/staticassert on watching TV while programming in Rust.

Thanks to /u/tomwhoiscontrary and /u/kixunil for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categorieën: Mozilla-nl planet

Firefox Test Pilot: Test Pilot graduation report: Pulse

ma, 09/10/2017 - 21:56

As we began ramping up for our release of Firefox 57 in November, our product intelligence team faced a difficult problem: they had lots of telemetry to help understand Firefox’s performance, but no way of understanding exactly what those numbers mean to users. Pulse was designed to bridge the gap between hard performance numbers and the incredibly vague concept of user satisfaction.

At what point does page size affect user satisfaction? On which sites do users report the most problems? To what extent can users perceive the impact of new Firefox features like e10s and multiple content processes? These were all questions we hoped to answer.

How it worked

When installed, Pulse tracked performance metrics for every page visited. The page weight, time to load, content process count, and much more were recorded. These were submitted to the Test Pilot team whenever the user filled out a short, four-question survey, which asked for details about their experience. This survey was presented to users in two ways:

  1. A pageAction was registered with the browser, putting an icon in the URL bar that allowed users to report their sentiment at any time.
  2. Around once a day, users were directly prompted by the browser, using a <notificationbox> element.
The Pulse notification box

This arrangement allowed us to collect two distinct but equally important types of data: random-sampled data, reflecting the average experience of the user, and outlier data, reflecting when a user’s experience was so good or bad that they felt compelled to report it.

Since user sentiment is a notoriously challenging thing to measure, we tried to restrict analysis to submissions that gave speed (fast or slow) as the primary reason for the score. This reduced noise from users who may have been reporting, for example, whether they liked or disliked a website. We also used a segmentation methodology similar to Net Promoter Score to cluster users into positive- and negative-sentiment groups. Those who provided a rating between one and three stars were considered detractors, while those who provided a five-star rating were considered promoters.
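
A minimal Rust sketch of that NPS-style segmentation, purely for illustration: one to three stars maps to detractors and five stars to promoters. The post doesn’t say how four-star ratings were treated, so they fall into a neutral bucket here; the names are made up.

    enum Sentiment {
        Detractor,
        Neutral,
        Promoter,
    }

    // Assumption: 4-star ratings are treated as neutral (the post doesn't say).
    fn classify(stars: u8) -> Sentiment {
        match stars {
            1..=3 => Sentiment::Detractor,
            5 => Sentiment::Promoter,
            _ => Sentiment::Neutral,
        }
    }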

What we learned

In short: a lot. Thanks to the dedication of our users, we collected over 37,000 submissions in just a few months. We can’t thank you enough for all the time and effort you put into helping us understand how to make Firefox great.

Cumulative Pulse submissions, by date

Product intelligence is continuing to comb through the data to look for meaningful findings. A few that stand out:

  1. Performance matters. Nearly every metric showed significant evidence that poor performance negatively affects user sentiment.
  2. Ads hurt user sentiment. One of the strongest effects on sentiment was our proxy for the number of ads: requests made by the page to hostnames on the Disconnect.me tracking protection list. This covaried with the overall number of requests, total page weight, and each of the timers, so it’s unclear if this effect is due to a specific aversion to ads, or to their consequences on performance.
  3. Developers should focus on DOMContentLoaded to improve perceived performance. Timers were placed on a number of events in the page load cycle: the time to first byte, time to first paint, time to the window.load event, and time to the DOMContentLoaded event. The one that most consistently affected perceived performance was the time to DOMContentLoaded. If you have a limited amount of time to tune your site’s performance and want to see strong returns, try to reduce that number. Your users will thank you.

As we move closer to the Firefox 57 release in November, the team will continue using the data collected by Pulse to make sure that it’s the fastest Firefox yet.

Test Pilot graduation report: Pulse was originally published in Firefox Test Pilot on Medium.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 09 Oct 2017

ma, 09/10/2017 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet


Robert O'Callahan: Type Safety And Data Flow Integrity

ma, 09/10/2017 - 00:30

We talk a lot about memory safety assuming everyone knows what it is, but I think that can be confusing and sell short the benefits of safety in modern programming languages. It's probably better to talk about "type safety". This can be formalized in various ways, but intuitively a language's type system proposes constraints on what is allowed to happen at run-time — constraints that programmers assume when reasoning about their programs; type-safe code actually obeys those constraints. This includes classic memory safety features such as avoidance of buffer overflows: writing past the end of an array has effects on the data after the array that the type system does not allow for. But type safety also means, for example, that (in most languages) a field of an object cannot be read or written except through pointers/references created by explicit access to that field. With this loose definition, type safety of a piece of code can be achieved in different ways. The compiler might enforce it, or you might prove the required properties mechanically or by hand, or you might just test it until you've fixed all the bugs.

One implication of this is that type-safe code provides data-flow integrity. A type system provides intuitive constraints on how data can flow from one part of the program to another. For example, if your code has private fields that the language only lets you access through a limited set of methods, then at run time it's true that all accesses to those fields are by those methods (or due to unsafe code).
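
As a minimal sketch of that point, using Rust only because it happens to be a type-safe language (the example itself is hypothetical): the only way data can flow into the private field below is through the module’s methods, so any invariant those methods maintain holds everywhere, absent unsafe code.

    mod account {
        pub struct Account {
            balance: u64, // private: no direct reads or writes outside this module
        }

        impl Account {
            pub fn new() -> Account {
                Account { balance: 0 }
            }

            // All flows into `balance` go through here.
            pub fn deposit(&mut self, amount: u64) {
                self.balance += amount;
            }

            // All flows out of `balance` go through here.
            pub fn balance(&self) -> u64 {
                self.balance
            }
        }
    }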

Type-safe code also provides control-flow integrity, because any reasonable type system also suggests fine-grained constraints on control flow.

Data-flow integrity is very important. Most information-disclosure bugs (e.g. Heartbleed) violate data-flow integrity, but usually don't violate control-flow integrity. "Wild write" bugs are a very powerful primitive for attackers because they allow massive violation of data-flow integrity; most security-relevant decisions can be compromised if you can corrupt their inputs.

A lot of work has been done to enforce CFI for C/C++ using dynamic checks with reasonably low overhead. That's good and important work. But attackers will move to attacking DFI, and that's going to be a lot harder to solve for C/C++. For example the checking performed by ASAN is only a subset of what would be required to enforce the C++ type system, and ASAN's overhead is already too high. You would never choose C/C++ for performance reasons if you had to run under ASAN. (I guess you could reduce ASAN's overhead if you dropped all the support for debugging, but it would still be too high.)

Note 1: people often say "even type safe programs still have correctness bugs, so you're just solving one class of bugs which is not a big deal" (or, "... so you should just use C and prove everything correct"). This underestimates the power of type safety with a reasonably rich type system. Having fine-grained CFI and DFI, and generally being able to trust the assumptions the type system suggests to you, are essential for sound reasoning about programs. Then you can leverage the type system to build abstractions that let you check more properties; e.g. you can enforce separation between trusted and untrusted data by giving untrusted user input different types and access methods to trusted data. The more your code is type-safe, the stronger is your confidence in those properties.
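
For instance, the trusted/untrusted separation mentioned above might be sketched like this in Rust, with distinct types so the compiler itself tracks the data flow; the type and function names are made up for illustration.

    struct UntrustedInput(String);
    struct TrustedQuery(String);

    // The only way to turn untrusted input into something the rest of the
    // program accepts is to go through validation.
    fn validate(input: UntrustedInput) -> Option<TrustedQuery> {
        if input.0.chars().all(|c| c.is_alphanumeric()) {
            Some(TrustedQuery(input.0))
        } else {
            None
        }
    }

    // Only reachable with data that passed validation.
    fn run_query(query: TrustedQuery) {
        let _ = query.0;
    }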

Note 2: C/C++ could be considered "type safe" just because the specification says any program executing undefined behavior gets no behavioral constraints whatsoever. However, in practice, programmers reasoning about C/C++ code must (and do) assume the constraint "no undefined behavior occurs"; type-safe C/C++ code must ensure this.

Note 3: the presence of unsafe code within a hardware-enforced protection domain can undermine the properties of type-safe code within the same domain, but minimizing the amount of such unsafe code is still worthwhile, because it reduces your attack surface.

Categorieën: Mozilla-nl planet
