
Mozilla Nederland - The Dutch Mozilla community

Bas Schouten: Using WindowRecording to Analyze Visual Pageload

Mozilla planet - do, 04/04/2019 - 14:40
Window Recording


As of bug 1536174, I've added an internal mechanism to Firefox that can record content frames during composition. Note that this currently only works on Windows when using D3D11 composition. It is still in very early stages and will likely get some improvements over the longer term, but right now this is basically how it works:

  1. Make sure you're on Nightly; the keyboard shortcut we'll be using only works there.
  2. Before initiating the action you want to record, press 'Ctrl+Shift+4'; this will begin recording.
  3. Execute an action in the browser (for example, load a page).
  4. Wait for whatever operation you wish to capture to finish.
  5. Press 'Ctrl+Shift+4' again; this will end recording and generate output.

You will now find a new directory in your working directory called windowrecording-<timestamp>, containing a series of PNGs that use the following naming convention:

frame-<framenumber>-<ms since recording start>.png

You will notice that only frames where actual content changes occurred are included (no scrolling, asynchronous animations, parent-process changes, etc.); this will likely become more flexible in the future. The directory where the output is delivered can optionally be selected using the layers.windowrecording.path preference. Use a trailing slash for expected behavior.
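To give an idea of how that naming convention can be consumed, here is a minimal sketch (written in Rust purely for illustration; the directory name is a made-up example, not something the recording produces for you) that parses the frame filenames and prints the time between consecutive frames:

use std::fs;
use std::path::Path;

fn main() -> std::io::Result<()> {
    // Hypothetical recording directory; substitute your own windowrecording-<timestamp>.
    let dir = Path::new("windowrecording-1554380000");
    let mut frames: Vec<(u64, u64)> = Vec::new(); // (frame number, ms since recording start)
    for entry in fs::read_dir(dir)? {
        let name = entry?.file_name().into_string().unwrap_or_default();
        // Expected form: frame-<framenumber>-<ms since recording start>.png
        let parts: Vec<&str> = name
            .trim_start_matches("frame-")
            .trim_end_matches(".png")
            .split('-')
            .collect();
        if parts.len() == 2 {
            if let (Ok(num), Ok(ms)) = (parts[0].parse::<u64>(), parts[1].parse::<u64>()) {
                frames.push((num, ms));
            }
        }
    }
    frames.sort();
    // Print the delta between consecutive content frames.
    for pair in frames.windows(2) {
        println!("frame {} -> {}: {} ms", pair[0].0, pair[1].0, pair[1].1 - pair[0].1);
    }
    Ok(())
}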

There are a couple of caveats when using recording:

  • In order to minimize overhead, frames are stored uncompressed in memory during recording; recordings longer than a few seconds will consume very large amounts of memory.
  • Lossless compression is used for the output, which means the recordings can be relatively large.
Pageload Analysis


Recently I've been working on a script that uses the generated recordings to perform pageload analysis. The scripts can be found here, and in this post I will attempt to describe the right way to use them. The scripts, as well as this entire description, are meant to be used on Windows, so these instructions will not be accurate on other platforms (and in any case, recording doesn't work there yet either, nor will most people have an open-source PowerShell implementation installed). This is also in very early stages and not particularly user friendly; you probably shouldn't be using it yet and should just skip to the demo video :-).

Prerequisites

Note that visualmetrics.py also expects the ffmpeg and ImageMagick binary locations to be included in the path.

The next step is to modify the SetPaths.ps1 script to point to the correct binaries for your system; these are the binaries that the AnalyzeWindowRecording script will use.

Recording Pageload

The next step is recording a pageload. Start by going to 'about:blank' to ensure a solid white reference frame to start from. A current weakness is that the timestamp of the video (and therefore the point in time the analysis will consider the 'beginning' of pageload) is the time when the recording begins, rather than navigation start. Therefore, it is best to navigate immediately after beginning the recording. This could be improved by scripting the recording and navigation start to occur at the same time, but for now we'll assume that a small offset is acceptable.

First start the recording and immediately navigate to the desired page, wait for pageload to finish visually and then end the recording.

Analysis

The final step is to run the analysis script, which is done as follows:

.\AnalyzeWindowRecording.ps1 <directory containing recording> <annotated output video>

Note that the script may take a while to execute, particularly on a slower machine. The script will output the recorded FirstVisualChange, LastVisualChange and SpeedIndex to stdout, as well as generate an annotated video that displays information at the top about the current visual completeness and the different stages of the video.

It is important to note that the script currently chops off the top 197 pixels of each frame. This was accurate for my recordings but most likely isn't for people recording with different DPI scaling. In the future I will make this parameter configurable or possibly even automatically detected; for now you will have to manually adjust the $chop variable at the top of the AnalyzeWindowRecording script for your situation.

Finally, I realize these are lengthy instructions and usage of this functionality is currently not for the faint of heart. I wanted to make this information available as quickly as possible though, and we expect to improve on this in the future. Let me know in the comments or on IRC if you have any questions or feedback.



(Demo video embedded in the original post.)

Original post blogged on b2evolution.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: A Path Forward: Rights and Rules to Protect Privacy in the United States

Mozilla planet - do, 04/04/2019 - 10:00

Privacy is on the tip of everyone’s tongue. Lawmakers are discussing how to legislate it, big tech is desperate to show they care about it, and everyday people are looking for tools and tips to help them reclaim it.

That’s why today, we are publishing our blueprint for strong federal privacy legislation in the United States. Our goals are straightforward: put people back in control of their data; establish clear, effective, and enforceable rules for those using that data; and move towards greater global alignment on governing data and the role of the internet in our lives.

For Mozilla, privacy is not optional. It’s fundamental to who we are and the work we do. It’s also fundamental to the health of the internet. Without privacy protections, we cannot trust the internet as a safe place to explore, transact, connect, and create. But thanks to a rising tide of abusive privacy practices and data breaches, trust in the internet is at an all time low.

We’ve reached this point because data practices and public policies have failed. Data has helped spur remarkable innovation and new products, but the long-standing ‘notice-and-consent’ approach to privacy has served people poorly. And the lack of truly meaningful safeguards and user protections has led to our social, financial and even political information being misused and manipulated without our understanding.

What’s needed to combat this is strong privacy legislation in the U.S. that codifies real protections for consumers and ensures more accountability from companies.

While we have seen positive movements on privacy and data protection around the globe, the United States has fallen behind. But this conversation about the problematic data practices of some companies has sparked promising interest in Congress.

Our framework spells out specifically what that law needs to accomplish. It must establish strong rights for people, rights that provide meaningful protection; it must provide clear rules for companies to limit how data is collected and used; and it must empower enforcement with clear authority and effective processes and remedies.

Clear rules for companies

  • Purposeful and limited collection and use – end the era of blanket collection and use, including collecting data for one purpose and then using it for another, by adopting clear rules for purposeful and limited collection and use of personal data.
  • Security – ensure that our data is carefully maintained and secured, and provide clear expectations around inactive accounts.

Strong rights for people

  • Access – we must be able to view the information that has been collected or generated about us, and know how it’s being used.
  • Delete – we should be able to delete our data when reasonable, and we should understand the policies and practices around our data if our accounts and services become inactive.
  • Granular, revocable consent – stop the practice of generic consent to data collection and use; limit consents to apply to specific collection and use practices, and allow them to be revoked.

Empowered enforcement

  • Clear mandate – empower the Federal Trade Commission with a strong authority and resources to keep up with advances in technology and evolving threats to privacy.
  • Civil penalties – streamline and strengthen the FTC’s enforcement through direct civil investigation and penalty authority, without the need for time- and resource-intensive litigation.
  • Rulemaking authority – empower the FTC to set proactive obligations to secure personal information and limits on the use of personal data in ways that may harm users.

We need real action to pass smart, strong privacy legislation that codifies real protections for consumers while preserving innovation. And we need it now, more than ever.

Mozilla U.S. Consumer Privacy Bill Blueprint 4.4.19

 

 

Photo by Louis Velazquez on Unsplash

The post A Path Forward: Rights and Rules to Protect Privacy in the United States appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Daniel Stenberg: More Amsterdamned Workshop

Mozilla planet - do, 04/04/2019 - 00:54

Yesterday we plowed through a large and varied selection of HTTP topics in the Workshop. Today we continued. At 9:30 we were all in that room again. Day two.

Martin Thomson talked about his “hx” proposal and how to refer to future responses in HTTP APIs. He ended up basically concluding that “This is too complicated, I think I’m going to abandon this” and instead threw in a follow-up proposal he called “Reverse Javascript” that would be a way for a client to pass on a script for the server to execute! The room exploded in questions, objections and “improvements” to this idea. There is also apparently a pile of prior art in a similar vein to draw inspiration from.

With the audience warmed up like this, Anne van Kesteren took us back to reality with an old favorite topic in the HTTP Workshop: websockets. Not a lot of love for websockets in the room… but this was the first of several discussions during the day where a desire or quest for bidirectional HTTP streams was made obvious.

Woo Xie did a presentation, with help from Alan Frindell, about Extending h2 for Bidirectional Messaging: they propose an HTTP/2 extension that adds a new frame to create a bidirectional stream that lets them do messaging over HTTP/2 just fine. The following discussion was slightly positive but also contained alternative suggestions and references to some of the many similar drafts for bidirectional and p2p connections over http2 that have been done in the past.

Lucas Pardue and Nick Jones did a presentation about HTTP/2 Priorities, based on a lot of research previously done and reported by Pat Meenan. Lucas took us through the history of how the priorities ended up like this, their current state and numbers, and also the chaos and something about a possible future: the h3 way of doing prio and Mr. Meenan’s proposed HTTP/3 prio.

Nick’s second half of the presentation then took us through Cloudflare’s Edge Driven HTTP/2 Prioritisation work/experiments, and he showed how they could really improve how prioritization works in nginx by making sure the data is written to the socket as late as possible. This was backed up by audience references to the TAPS guidelines on the topic and a general recollection that reducing the number of connections is still a good idea and should be a goal! Server buffering is hard.

Asbjørn Ulsberg presented his case for a new request header: prefer-push. When used, the server can respond to the request with a series of pushed resources and thus save several round-trips. This triggered sympathy in the room but also suggestions of alternative approaches.

Alan Frindell presented Partial POST Replay. It’s a rather elaborate scheme that makes their loadbalancers detect when a POST to one of their servers can’t be fulfilled and they instead replay that POST to another backend server. While Alan promised to deliver a draft for this, the general discussion was brought up again about POST and its “replayability”.

Willy Tarreau followed up with a very similar topic: Retrying failed POSTs. In this context RFC 2310 – The Safe Response Header Field – was mentioned, and that perhaps something like this could be considered for requests? The discussion certainly had similarities and overlaps with the SEARCH/POST discussion of yesterday.

Mike West talked about Fetch Metadata Request Headers, a set of request headers that tell servers where from and for what purpose requests are made by browsers. He also took us through a brief explanation of Origin Policy, meant to become a central “resource” for a manifest that describes properties of the origin.

Mark Nottingham presented Structured Headers (draft). This is a new way of specifying and parsing HTTP headers that will make the lives of most HTTP implementers easier in the future. (Parts of the presentation were also spent debugging/triaging the most weird symptoms seen when his Keynote installation was acting up!) It also triggered a smaller side discussion on what kinds of approaches could be taken for HPACK and QPACK to improve the compression ratio for headers.

Anne van Kesteren talked about Web-compatible header value parsers, standardizing on how to parse headers not covered by structured headers.

Yoav Weiss described the current status of client hints (draft). This is shipped by Chrome already and he wanted more implementers to use it and tell how it’s working.

Roberto Peon presented an idea for doing “Partially Reliable HTTP”, and after his talk and a discussion he concluded they will implement it, play around and come back and tell us what they’ve learned.

Mark Nottingham talked about HTTP for CDNs. He has this fancy-looking test suite in progress that checks how things are working and what is being supported and there are two drafts in progress: the cache response header and the proxy status header field.

Willy Tarreau talked about a race problem he ran into with closing HTTP/2 streams and he explained how he worked around it with a trailing ping frame and suggested that maybe more users might suffer from this problem.

The oxygen level in the room was certainly not on an optimal level at this point but that didn’t stop us. We knew we had a few more topics to get through and we all wanted to get to the boat ride of the evening on time. So…

Hooman Beheshti polled the room to get a feel for what people think about Early hints. Are people still on board? Turns out it is mostly appreciated but not supported by any browser and a discussion and explainer session followed as to why this is and what general problems there are in supporting 1xx headers in browsers. It is striking that most of us HTTP people in the room don’t know how browsers work! Here I could mention that Cory said something about the craziness of this, but I forget his exact words and I blame the fact that they were expressed to me on a boat. Or perhaps that the time is already approaching 1am the night after this fully packed day.

Good follow-up reads from that discussion are Yoav’s blog post A Tale of Four Caches and Jake Archibald’s HTTP/2 Push is tougher than I thought.

As the final conversation of the day, Anne van Kesteren talked about Response Sources and the different ways a browser can do requests and get responses.

Boat!

HAproxy had the excellent taste of sponsoring this awesome boat ride on the Amsterdam canals for us at the end of the day.

<figcaption>Boating on the Amsterdam canals, sponsored by HAproxy!
</figcaption>

Thanks again to Cory Benfield for feeding me his notes of the day to help me keep things straight. All mistakes are mine. But if you tell me about them, I will try to correct the text!

Categorieën: Mozilla-nl planet

Wladimir Palant: Dear Mozilla, please stop spamming!

Mozilla planet - wo, 03/04/2019 - 21:26

Dear Mozilla, of course I learned about your new file sharing project from the news. But it seems that you wanted to be really certain, so today I got this email:

Email screenshot

Do you still remember how I opted out of all your emails last year? Luckily, I know that email preferences of all your users are managed via Mozilla Basket and I also know how to retrieve raw data. So here it is:

Screenshot of Basket data

It clearly says that I’ve opted out, so you didn’t forget. So why do you keep sending me promotional messages? Edit (2019-04-05): Yes, that optin value is thoroughly confusing but it doesn’t mean what it seems to mean. Basket only uses it to indicate a “verified email,” somebody who either went through double opt-in once or registered with Firefox Accounts.

This isn’t your only issue however. A year ago I reported a security issue in Mozilla Basket (not publicly accessible). The essence is that subscribing anybody to Mozilla’s newsletters is trivial even if that person opted out previously. The consensus in this bug seems to be that this is “working as expected.” This cannot seriously be it, right?

Now there is some legislation that is IMHO being violated here, e.g. the CAN-SPAM Act and GDPR. And your privacy policy ends with the email address one can contact to report compliance issues. So I did.

Screenshot of Mozilla's bounce mail

Oh well…

Categorieën: Mozilla-nl planet

Chromium-based Edge, unlike Chrome, plays Netflix in 4K - Tweakers

Nieuws verzameld via Google - wo, 03/04/2019 - 17:06
Chromium-based Edge, unlike Chrome, plays Netflix in 4K  Tweakers

The leaked version of Microsoft's Chromium-based Edge browser has features on board to play Netflix videos in 4K. That puts it ahead of ...

Categorieën: Mozilla-nl planet

Former Mozilla executive detained at the US border - Security.nl

Nieuws verzameld via Google - wo, 03/04/2019 - 13:38
Former Mozilla executive detained at the US border  Security.nl

Mozilla's former chief technology officer (CTO), Andreas Gal, was detained for some time at the US border late last year, during which ...

Categorieën: Mozilla-nl planet

Mozilla Future Releases Blog: DNS-over-HTTPS (DoH) Update – Recent Testing Results and Next Steps

Mozilla planet - wo, 03/04/2019 - 00:21

We are giving several updates on our testing with DNS-over-HTTPS (DoH), a new protocol that uses encryption to protect DNS requests and responses. This post shares the latest results, what we’ve learned, and how we’re fine-tuning our next step in testing.

tl;dr: The results of our last performance test showed improvement or minimal impact when DoH is enabled. Our next experiment continues to test performance with Akamai and Cloudflare, and adds a performance test that takes advantage of a secure protocol for DNS resolvers set up between Cloudflare and Facebook.

What we learned

Back in November 2018, we rolled out a test of DoH in the United States to look at possible impacts to Content Delivery Networks (CDNs). Our goal was to closely examine performance again, specifically the case when users get less localized DNS responses that could slow the browsing experience, even if the DNS resolver itself is accurate and fast. We worked with Akamai to help us understand more about the possible impact.

The results were strong! Like our previous studies, DoH had minimal impact or clearly improved the total time it takes to get a response from the resolver and fetch a web page.

Here’s a sample result for our “time to first byte” measurement:

In this case, the absolute difference across the experiment was less than a 10 ms regression at the 50th percentile and below. At the 75th percentile and higher, we saw no regression or even a performance win, particularly on the long tail of slower connections. We saw very similarly shaped results for the rest of our measurements.

While exploring the data in the last experiment, we discovered a few things we’d like to know more about and will run three additional tests described below.

Additional testing on performance and privacy

First, we saw a higher error rate during DNS queries than expected, with and without DoH. We’d like to know more about those errors — are they something that the browser could handle better? This is something we’re still researching, and this next experiment adds better error capturing for analysis.

And while the performance was good with DoH, we’re curious if we could improve it even more. CDNs and other distributed websites provide localized DNS responses depending on where you are in the network. The goal is to send you to a host near you on the network and therefore give you the best performance, but if your DNS goes through Cloudflare, this might not happen. The EDNS Client Subnet extension allows a centralized resolver like Cloudflare to give the authoritative resolver for the site you are going to some information about your location, which they can use to tune their responses. Ordinarily this isn’t safe for two reasons: first, the query to the authoritative resolver isn’t encrypted and so it leaks your query; and second, the authoritative resolver might be operated by a third party who would then learn about your browsing activity. This was something we needed to test further.

Since our last experiment, Facebook has partnered with Cloudflare to offer DNS-over-TLS (DoT) as a pilot project, which plugs the privacy leak caused by EDNS Client Subnet (ECS) use. There are two key reasons this work improves your privacy when connecting to Facebook.  First, Facebook operates their own authoritative resolvers, so third parties don’t get your query. Second, DoT encrypts your query so snoopers on the network can’t see it.

With DoT in place, we want to see if Cloudflare sending ECS to Facebook’s resolver results in our users getting better response times. To figure this out, we added tests for resolver response and web page fetch for test URLs from Facebook systems.

The experiment will check locally if the browser has a Facebook login cookie. If it does, it will attempt to collect data once per day — about the speed of queries to test URLs. This means that if you have never logged into Facebook, the experiment will not fetch anything from Facebook systems.

As with our last test, we aren’t passing any cookies, and these domains aren’t ones that the user would automatically retrieve; they just contain dummy content, so we aren’t disclosing anything to the resolver or Facebook about users’ browsing behavior.

We’re also re-running the Akamai tests to get to the bottom of the error messages issue.

Here is a sample of the data that we are collecting as part of this experiment:

The payload only contains timing and error information. For more information about how we are collecting telemetry for this experiment, read here. As always, you can always see what data Firefox sends back to Mozilla in about:telemetry.

We believe these DoH and DoT deployments are both exciting steps toward providing greater security and privacy to all. We’re happy to support the work Cloudflare is doing that might encourage others to pursue DoH and DoT deployments.

How will the next test roll out?

Starting the week of April 1, a small portion of our United States-based users in the Release channel will receive the DoH treatment. As before, this study will use Cloudflare’s DNS-over-HTTPS service and will continue to provide in-browser notifications about the experiment so that participants are fully informed and have the opportunity to decline.

We are working to build a larger ecosystem of trusted DoH providers, and we hope to be able to experiment with other providers soon. As before, we will continue to share the results of the DoH tests and provide updates once future plans solidify.

The post DNS-over-HTTPS (DoH) Update – Recent Testing Results and Next Steps appeared first on Future Releases.

Categorieën: Mozilla-nl planet

Daniel Stenberg: The HTTP Workshop 2019 begins

Mozilla planet - di, 02/04/2019 - 23:54

The fourth season of my favorite HTTP series is back! The HTTP Workshop skipped over last year but is back now with a three-day event organized by the very best: Mark, Martin, Julian and Roy. This time we’re in Amsterdam, the Netherlands.

35 persons from all over the world walked in the room and sat down around the O-shaped table setup. Lots of known faces and representatives from a large variety of HTTP implementations, client-side or server-side – but happily enough also a few new friends that attend their first HTTP Workshop here. The companies with the most employees present in the room include Apple, Facebook, Mozilla, Fastly, Cloudflare and Google – having three or four each in the room.

Patrick McManus started off the morning with his presentation on HTTP conventional wisdoms, trying to identify what has turned out to be a success or not in HTTP land in recent times. It triggered a few discussions on the specific points and how to judge them. I believe the general consensus ended up mostly agreeing with the slides. The topic of unshipping HTTP/0.9 support came up but is said to not be possible due to its existing use. As a bonus, Anne van Kesteren posted a new bug on Firefox to remove it.

Mark Nottingham continued and did a brief presentation about the recent discussions in HTTPbis sessions during the IETF meetings in Prague last week.

Martin Thomson did a presentation about HTTP authority. Basically how a client decides where and who to ask for a resource identified by a URI. This triggered an intense discussion that involved a lot of UI and UX but also trust, certificates and subjectAltNames, DNS and various secure DNS efforts, connection coalescing, DNSSEC, DANE, ORIGIN frame, alternative certificates and more.

Mike West explained for the room about the concept for Signed Exchanges that Chrome now supports. A way for server A to host contents for server B and yet have the client able to verify that it is fine.

Tommy Pauly then talked to his slides with the title of Website Fingerprinting. He covered different areas of a browser’s activities that are currently possible to monitor and use for fingerprinting, and what counter-measures exist to work against furthering that development. By looking at the full activity, including TCP flows and IP addresses, even lots of our encrypted connections still allow for pretty accurate and extensive “Page Load Fingerprinting”. We need to be aware, and the discussion went on to what can or should be done to help out.

<figcaption>The meeting is going on somewhere behind that red door.</figcaption>

Lucas Pardue discussed and showed how we can do TLS interception of Firefox, Chrome or curl with Wireshark (since the release of version 3) and, in the end, make sure that the resulting PCAP file can get the necessary key bundled in the same file. This is really convenient when you want to send that PCAP over to your protocol debugging friends.

Roberto Peon presented his new idea for “Generic overlay networks”, a suggested way for clients to get resources from one out of several alternatives. A neighboring idea to Signed Exchanges, but still different. There was interest in furthering and deepening this discussion, and Roberto ended up saying he’d write up a draft for it.

Max Hils talked about Intercepting QUIC and how the ability to do this kind of thing is very useful in many situations. During development, for debugging and for checking what potentially bad stuff applications are actually doing on your own devices. Intercepting QUIC and HTTP/3 can thus also be valuable but at least for now presents some challenges. (Max also happened to mention that the project he works on, mitmproxy, has more stars on github than curl, but I’ll just let it slide…)

Poul-Henning Kamp showed us vtest – a tool and framework for testing HTTP implementations that both Varnish and HAproxy are now using. Massaged the right way, this could develop into a generic HTTP test/conformance tool that could be valuable for and appreciated by even more users going forward.

Asbjørn Ulsberg showed us several current frameworks that are doing GET, POST or SEARCH with request bodies and discussed how this works with caching, and proposed that SEARCH should be defined as cacheable. The room mostly acknowledged the problem – it has been discussed before and the time is probably ripe to finally do something about it. Lots of users are already doing similar things and cached POST contents are in use, just not defined generically. SEARCH is an already registered method but could get polished to work for this. It was also suggested that POST could possibly be modified to also allow for caching in an opt-in way, and Mark volunteered to author a first draft elaborating how it could work.

Indonesian and Tibetan food for dinner rounded off a fully packed day.

Thanks Cory Benfield for sharing your notes from the day, helping me get the details straight!

Diversity

We’re a very homogeneous group of humans. Most of us are old white men, basically all clones and practically indistinguishable from each other. This is not diverse enough!

<figcaption>A big thank you to the HTTP Workshop 2019 sponsors!</figcaption>


Categorieën: Mozilla-nl planet

Firefox UX: An exception to our ‘No Guerrilla Research’ practice: A tale of user research at MozFest

Mozilla planet - di, 02/04/2019 - 22:31

Authors: Jennifer Davidson, Emanuela Damiani, Meridel Walkington, and Philip Walmsley

Sometimes, when you’re doing user research, things just don’t quite go as planned. MozFest was one of those times for us.

MozFest, the Mozilla Festival, is a vibrant conference and week-long “celebration for, by, and about people who love the internet.” Held at Ravensbourne University in London, the festival features nine floors of simultaneous sessions. The Add-ons UX team had the opportunity to host a workshop at MozFest about co-designing a new submission flow for browser extensions and themes. The workshop was a version of the Add-ons community workshop we held the previous day.

On the morning of our workshop, we showed up bright-eyed, bushy-tailed, and fully caffeinated. Materials in place, slides loaded…we were ready. And then, no one showed up.

Perhaps because 1) there was too much awesome stuff going on at the same time as our workshop, 2) we were in a back corner, and 3) we didn’t proactively advertise our talk enough.

After processing our initial heartache and disappointment, Emanuela, a designer on the team, suggested we try something we don’t do often at Mozilla, if at all: guerrilla research. Guerrilla user research usually means getting research participants from “the street.” For example, a researcher could stand in front of a grocery store with a tablet computer and ask people to use a new app. This type of research method is different than “normal” user research methods (e.g. field research in a person’s home, interviewing someone remotely over video call, conducting a usability study in a conference room at an office) because there is much less control in screening participants, and all the research is conducted in the public eye [1].

Guerrilla research is a polarizing topic in the broader user research field, and with good reason: while there are many reasons one might employ this method, it can be and is used as a means of getting research done on the cheap. The same thing that makes guerrilla research cheaper — convenient, easy data collection in one location — can also mean compromised research quality. Because we don’t have the same level of care and control with this method, the results of the work can be less effective compared to a study conducted with careful recruitment and facilitation.

For these reasons, we intentionally don’t use guerrilla research as a core part of the Firefox UX practice. However, on that brisk London day, with no participants in sight and our dreams of a facilitated session dashed, guerrilla research presented itself as a last-resort option, as a way we could salvage some of our time investment AND still get some valuable feedback from people on the ground. We were already there, we had the materials, and we had a potential pool of participants — people who, given their conference attendance, were more informed about our particular topic than those you would find in a random coffee shop. It was time to make the best of a tough situation and learn what we could.

And that’s just what we did.

Thankfully, we had four workshop facilitators, which meant we could evolve our roles to the new reality. Emanuela and Jennifer were traveling workshop facilitators. Phil, another designer on the team, took photos. Meridel, our content strategist, stationed herself at our original workshop location in case anyone showed up there, taking advantage of that time to work through content-related submission issues with our engineers.

<figcaption>A photo of Jennifer and Emanuela taking the workshop “on the road”</figcaption>

We boldly went into the halls at MozFest and introduced ourselves and our project. We had printouts and pens to capture ideas and get some sketching done with our participants.

<figcaption>A photo of Jennifer with four participants</figcaption>

In the end, seven people participated in our on-the-road co-design workshop. Because MozFest tends to attract a diverse group of attendees from journalists to developers to academics, some of these participants had experience creating and submitting browser extensions and themes. For each participant we:

  1. Approached them and introduced ourselves and the research we were trying to do. Asked if they wanted to participate. If they did, we proceeded with the following steps.
  2. Asked them if they had experience with extensions and themes and discussed that experience.
  3. Introduced the sketching exercise, where we asked them to come up with ideas about an ideal process to submit an extension or theme.
  4. Watched as the participant sketched ideas for the ideal submission flow.
  5. Asked the participant to explain their sketches, and took notes during their explanation.
  6. Asked participants if we could use their sketches and our discussion in our research. If they agreed, we asked them to read and sign a consent form.

The participants’ ideas, perspectives, and workflows fed into our subsequent team analysis and design time. Combined with materials and learnings from a prior co-design workshop on the same topic, the Add-ons UX team developed creator personas. We used those personas’ needs and workflows to create a submission flow “blueprint” (like a flowchart) that would match each persona.

<figcaption>A snippet of our submission flow “blueprint”</figcaption>

In the end, the lack of workshop attendance was a good opportunity to learn from MozFest attendees we may not have otherwise heard from, and grow closer as a team as we had to think on our feet and adapt quickly.

Guerrilla research needs to be handled with care, and the realities of its limitations taken into account. Happily, we were able to get real value out of our particular application, with the note that the insights we gleaned from it were just one input into our final deliverables and not the sole source of research. We encourage other researchers and designers doing user research to consider adapting your methods when things don’t go as planned. It could recoup some of the resources and time you have invested, and potentially give you some valuable insights for your product design process.

Questions for us? Want to share your experiences with being adaptable in the field? Leave us a comment!

Acknowledgements

Much gratitude to our colleagues who created the workshop with us and helped us edit this blog post! In alphabetical order, thanks to Stuart Colville, Mike Conca, Kev Needham, Caitlin Neiman, Gemma Petrie, Mara Schwarzlose, Amy Tsay, and Jorge Villalobos.

Reference

[1] “The Pros and Cons of Guerrilla Research for Your UX Project.” The Interaction Design Foundation, 2016, www.interaction-design.org/literature/article/the-pros-and-cons-of-guerrilla-research-for-your-ux-project.

Also published on Firefox UX Blog.

An exception to our ‘No Guerrilla Research’ practice: A tale of user research at MozFest was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Software update: Mozilla Thunderbird 67.0 beta 1 - Computer - Downloads - Tweakers

Nieuws verzameld via Google - di, 02/04/2019 - 19:36
Software update: Mozilla Thunderbird 67.0 beta 1 - Computer - Downloads  Tweakers

The Mozilla Foundation has released a beta of Thunderbird version 67.0. Mozilla Thunderbird is an open-source client for email and newsgroups, with ...

Categorieën: Mozilla-nl planet

François Marier: Recovering from a botched hg histedit on a mercurial bookmark

Mozilla planet - di, 02/04/2019 - 17:16

If you are in the middle of a failed Mercurial hg histedit, you can normally do hg histedit --abort to cancel it, though sometimes you also have to reach out for hg update -C. This is the equivalent of git's git rebase --abort and it does what you'd expect.

However, if you go ahead and finish the history rewriting and only notice problems later, it's not as straightforward. With git, I'd look into the reflog (git reflog) for the previous value of the branch pointer and simply git reset --hard to that value. Done.

Based on a Stack Overflow answer, I thought I could undo my botched histedit using:

hg unbundle ~/devel/mozilla-unified/.hg/strip-backup/47906774d58d-ae1953e1-backup.hg

but it didn't seem to work. Maybe it doesn't work when using bookmarks.

Here's what I ended up doing to fully revert my botched Mercurial histedit. If you know of a simpler way to do this, feel free to leave a comment.

Collecting the commits to restore

The first step was to collect all of the commit hashes I needed to restore. Luckily, I had submitted my patch to Try before changing it and so I was able to look at the pushlog to get all of the commits at once.

If I didn't have that, I could also go to the last bookmark I pushed and click on parent commits until I hit the first one that's not mine. Then I could collect all of the commits using the browser's back button:

For that last one, I had to click on the changeset commit hash link in order to get the commit hash instead of the name of the bookmark (/rev/hashstore-crash-1434206).

Recreating the branch from scratch

This is what I did to export patches for each commit and then re-import them one after the other:

for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe ; do hg export $c > ~/$c.patch ; done
hg up ff8505d177b9
hg bookmarks hashstore-crash-1434206-new
for c in 3c31c543e736 7ddfe5ae2fa6 c04b620136c7 2d1bf04fd155 e194843f5b7a 47906774d58d f6a657bca64f 0d7a4e1c0079 976e25b49758 a1a382f2e773 b1565f3aacdb 3fdd157bb698 b1b041990577 220bf5cd9e2a c927a5205abe 4140cd9c67b0 ; do hg import ~/$c.patch ; done

Copying a bookmark

As an aside, if you want to make a copy of a bookmark before you do an hg histedit, it's not as simple as:

hg up hashstore-crash-1434206
hg bookmarks hashstore-crash-1434206-copy
hg up hashstore-crash-1434206

While that seemed to work at the time, the histedit ended up messing with both of them.

An alternative that works is to push the bookmark to another machine. That way, if worse comes to worst, you can hg clone from there and hg export the commits you want to re-import using hg import.

Categorieën: Mozilla-nl planet

François Marier: Mercurial commit series in Phabricator using Arcanist

Mozilla planet - di, 02/04/2019 - 17:16

Phabricator supports multi-commit patch series, but it's not yet obvious how to do it using Mercurial. So this is the "hg" equivalent of this blog post for git users.

Note that other people have written tools and plugins to do the same thing and that an official client is coming soon.

Initial setup

I'm going to assume that you've set up arcanist and gotten an account on the Mozilla Phabricator instance. If you haven't, follow this video introduction or the excellent documentation for it (Bryce also wrote additional instructions for Windows users).

Make a list of commits to submit

First of all, use hg histedit to make a list of the commits that are needed:

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick 5509b5db01a4 477987 Bug 1461515 - Fix and expand tracking annotation tes...
pick e40312debf76 477988 Bug 1461515 - Make TP test fail if it uses the wrong...

Create Phabricator revisions

Now, create a Phabricator revision for each commit (in order, from earliest to latest):

~/devel/mozilla-unified (annotation-list-1461515)$ hg up ee4d9e9fcbad
5 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)
~/devel/mozilla-unified (ee4d9e9)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Created a new Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2484
Included changes:
M modules/libpref/init/all.js
M netwerk/base/nsChannelClassifier.cpp
M netwerk/base/nsChannelClassifier.h
M toolkit/components/url-classifier/Classifier.cpp
M toolkit/components/url-classifier/SafeBrowsing.jsm
M toolkit/components/url-classifier/nsUrlClassifierDBService.cpp
M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
M xpcom/base/ErrorList.py

~/devel/mozilla-unified (ee4d9e9)$ hg up 5509b5db01a4
3 files updated, 0 files merged, 0 files removed, 0 files unresolved
~/devel/mozilla-unified (5509b5d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Created a new Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2485
Included changes:
M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
M toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

~/devel/mozilla-unified (5509b5d)$ hg up e40312debf76
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
~/devel/mozilla-unified (e40312d)$ arc diff --no-amend
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Created a new Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2486
Included changes:
M toolkit/components/url-classifier/tests/mochitest/classifiedAnnotatedPBFrame.html
M toolkit/components/url-classifier/tests/mochitest/test_privatebrowsing_trackingprotection.html

Link all revisions together

In order to ensure that these commits depend on one another, click on that last phabricator.services.mozilla.com link, then click "Related Revisions" then "Edit Parent Revisions" in the right-hand side bar and then add the previous commit (D2485 in this example).

Then go to that parent revision and repeat the same steps to set D2484 as its parent.

Amend one of the commits

As it turns out my first patch wasn't perfect and I needed to amend the middle commit to fix some test failures that came up after pushing to Try. I ended up with the following commits (as viewed in hg histedit):

pick ee4d9e9fcbad 477986 Bug 1461515 - Split tracking annotations from tracki...
pick c24f4d9e75b9 477992 Bug 1461515 - Fix and expand tracking annotation tes...
pick 1840f68978a7 477993 Bug 1461515 - Make TP test fail if it uses the wrong...

which highlights that the last two commits changed and that I would have two revisions (D2485 and D2486) to update in Phabricator.

However, since the only reason why the third patch has a different commit hash is because its parent changed, there's no need to upload it again to Phabricator. Lando doesn't care about the parent hash and relies instead on the parent revision ID. It essentially applies diffs one at a time.

The trick was to pass the --update DXXXX argument to arc diff:

~/devel/mozilla-unified (annotation-list-1461515)$ hg up c24f4d9e75b9
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
(leaving bookmark annotation-list-1461515)
~/devel/mozilla-unified (c24f4d9)$ arc diff --no-amend --update D2485
Linting...
No lint engine configured for this project.
Running unit tests...
No unit test engine is configured for this project.
SKIP STAGING Phabricator does not support staging areas for this repository.
Updated an existing Differential revision:
Revision URI: https://phabricator.services.mozilla.com/D2485
Included changes:
M browser/base/content/test/general/trackingPage.html
M netwerk/test/unit/test_trackingProtection_annotateChannels.js
M toolkit/components/antitracking/test/browser/browser_imageCache.js
M toolkit/components/antitracking/test/browser/browser_subResources.js
M toolkit/components/antitracking/test/browser/head.js
M toolkit/components/antitracking/test/browser/popup.html
M toolkit/components/antitracking/test/browser/tracker.js
M toolkit/components/url-classifier/tests/UrlClassifierTestUtils.jsm
M toolkit/components/url-classifier/tests/mochitest/test_trackingprotection_bug1312515.html
M toolkit/components/url-classifier/tests/mochitest/trackingRequest.html

Note that changing the commit message will not automatically update the revision details in Phabricator. This has to be done manually in the Web UI if required.

Categorieën: Mozilla-nl planet

François Marier: Checking Your Passwords Against the Have I Been Pwned List

Mozilla planet - di, 02/04/2019 - 17:16

Two months ago, Troy Hunt, the security professional behind Have I been pwned?, released an incredibly comprehensive password list in the hope that it would allow web developers to steer their users away from passwords that have been compromised in past breaches.

While the list released by HIBP is hashed, the plaintext passwords are out there and one should assume that password crackers have access to them. So if you use a password on that list, you can be fairly confident that it's very easy to guess or crack your password.

I wanted to check my active passwords against that list to check whether or not any of them are compromised and should be changed immediately. This meant that I needed to download the list and do these lookups locally since it's not a good idea to send your current passwords to this third-party service.

I put my tool up on Launchpad / PyPI and you are more than welcome to give it a go. Install Postgres and Psycopg2 and then follow the README instructions to set up your database.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Crossing the Rust FFI frontier with Protocol Buffers

Mozilla planet - di, 02/04/2019 - 16:42

My team, the application services team at Mozilla, works on Firefox Sync, Firefox Accounts and WebPush.

These features are currently shipped on Firefox Desktop, Android and iOS browsers. They will soon be available in our new products such as our upcoming Android browser, our password manager Lockbox, and Firefox for Fire TV.

Solving for one too many targets

Until now, for historical reasons, these functionalities have been implemented in widely different fashions. They’ve been tightly coupled to the product they are supporting, which makes them hard to reuse. For example, there are currently three Firefox Sync clients: one in JavaScript, one in Java and another one in Swift.

Considering the size of our team, we quickly realized that our current approach to shipping products would not scale across more products and would lead to quality issues such as bugs or uneven feature-completeness across platform-specific implementations. About a year ago, we decided to plan for the future. As you may know, Mozilla is making a pretty big bet on the new Rust programming language, so it was natural for us to follow suit.

A cross-platform strategy using Rust

Our new strategy is as follows: we will build cross-platform components, implementing our core business logic using Rust and wrapping it in a thin platform-native layer, such as Kotlin for Android and Swift for iOS.

Rust component bundled in an Android wrapper

The biggest and most obvious advantage is that we get one canonical code base, written in a safe and statically typed language, deployable on every platform. Every upstream business logic change becomes available to all our products with a version bump.

How we solved the FFI challenge safely

However, one of the challenges to this new approach is safely passing richly-structured data across the API boundary in a way that’s memory-safe and works well with Rust’s ownership system. We wrote the ffi-support crate to help with this; if you write code that does FFI (Foreign Function Interface) tasks, you should strongly consider using it. Our initial versions were implemented by serializing our data structures as JSON strings and, in a few cases, returning a C-shaped struct by pointer. The procedure for returning, say, a bookmark from the user’s synced data looked like this:

Returning Bookmark data from Rust to Kotlin using JSON

Returning Bookmark data from Rust to Kotlin using JSON (simplified)
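For readers who prefer code to diagrams, here is a minimal sketch of what that JSON-based pattern looks like on the Rust side. The Bookmark fields, function names and error handling are illustrative assumptions, not the actual application-services API (which builds on the ffi-support crate):

use std::ffi::CString;
use std::os::raw::c_char;

use serde::Serialize;

#[derive(Serialize)]
struct Bookmark {
    guid: String,
    url: String,
    title: String,
}

// Serialize a bookmark to JSON and hand ownership of the C string to the caller,
// which parses it (and, on Android, copies it into a UTF-16 Java string).
#[no_mangle]
pub extern "C" fn bookmark_get_example() -> *mut c_char {
    let bookmark = Bookmark {
        guid: "abc123".into(),
        url: "https://example.com".into(),
        title: "Example".into(),
    };
    let json = serde_json::to_string(&bookmark).expect("serializing a Bookmark cannot fail");
    CString::new(json).expect("JSON contains no interior NUL bytes").into_raw()
}

// The Kotlin/Swift side must pass the string back so Rust can free it.
#[no_mangle]
pub extern "C" fn bookmark_string_free(s: *mut c_char) {
    if !s.is_null() {
        unsafe { drop(CString::from_raw(s)) };
    }
}

Every field of Bookmark then has to be parsed again by hand on the Kotlin side, which is exactly the duplication described below.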

 

So what’s the problem with this approach?

  • Performance: JSON serializing and de-serializing is notoriously slow, because performance was not a primary design goal for the format. On top of that, an extra string copy happens on the Java layer since Rust strings are UTF-8 and Java strings are UTF-16-ish. At scale, it can introduce significant overhead.
  • Complexity and safety: every data structure is manually parsed and deserialized from JSON strings. A data structure field modification on the Rust side must be reflected on the Kotlin side, or an exception will most likely occur.
  • Even worse, in some cases we were not returning JSON strings but C-shaped Rust structs by pointer: forget to update the Structure Kotlin subclass or the Objective-C struct and you have a serious memory corruption on your hands.

We quickly realized there was probably a better and faster way that could be safer than our current solution.

Data serialization with Protocol Buffers v.2

Thankfully, there are many data serialization formats out there that aim to be fast. The ones with a schema language will even auto-generate data structures for you!

After some exploration, we ended up settling on Protocol Buffers version 2.

The (relative) safety comes from the automated generation of data structures in the languages we care about. There is only one source of truth, the .proto schema file, from which all our data classes are generated at build time, on both the Rust and consumer sides.

protoc (the Protocol Buffers code generator) can emit code in more than 20 languages! On the Rust side, we use the prost crate which outputs very clean-looking structs by leveraging Rust derive macros.
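
As an illustration, a schema for the bookmark example might look like the following; the message and field names are invented for this sketch, and the real schemas live in the application-services repository:

    // bookmark.proto
    syntax = "proto2";
    package msg_types;

    message Bookmark {
      required string guid  = 1;
      required string url   = 2;
      optional string title = 3;
    }

From this, prost generates (roughly) the following struct at build time:

    #[derive(Clone, PartialEq, ::prost::Message)]
    pub struct Bookmark {
        #[prost(string, required, tag = "1")]
        pub guid: String,
        #[prost(string, required, tag = "2")]
        pub url: String,
        #[prost(string, optional, tag = "3")]
        pub title: Option<String>,
    }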

Returning Bookmark data from Rust to Kotlin using Protocol Buffers 2 (simplified)
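
A minimal sketch of the protobuf-based return path is shown below, assuming the prost-generated msg_types::Bookmark from the schema sketch above. The ByteBuffer type and the function names are simplified stand-ins for the helpers the real components get from the ffi-support crate, and the lookup is stubbed out:

    use prost::Message;

    // Simplified stand-in for the byte buffer handed across the FFI.
    #[repr(C)]
    pub struct ByteBuffer {
        pub len: i64,
        pub data: *mut u8,
    }

    #[no_mangle]
    pub extern "C" fn bookmark_get_by_guid(guid: *const std::os::raw::c_char) -> ByteBuffer {
        let guid = unsafe { std::ffi::CStr::from_ptr(guid) }.to_string_lossy();

        // Hypothetical lookup that fills in the prost-generated struct.
        let msg = msg_types::Bookmark {
            guid: guid.to_string(),
            url: "https://mozilla.org".to_string(),
            title: None,
        };

        // Encode into a buffer owned by Rust. The Kotlin side decodes it with
        // the protoc-generated Bookmark.parseFrom(bytes) and later hands the
        // buffer back to a matching destructor so Rust can free it.
        let mut bytes = Vec::with_capacity(msg.encoded_len());
        msg.encode(&mut bytes).expect("encoding into a Vec cannot fail");
        let mut bytes = bytes.into_boxed_slice();
        let buffer = ByteBuffer { len: bytes.len() as i64, data: bytes.as_mut_ptr() };
        std::mem::forget(bytes); // ownership is reclaimed by the destructor
        buffer
    }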

 

And of course, on top of that, Protocol Buffers are faster than JSON.

There are a few downsides to this approach: it is more work to convert our internal types to the generated protobuf structs (e.g. a url::Url has to be converted to a String first), whereas back when we were using serde_json serialization, any struct implementing serde::Serialize was one line away from being sent over the FFI boundary. It also adds one more step to our build process, although it was pretty easy to integrate.
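
For instance, with a hypothetical internal bookmark type whose url field is a url::Url (names matching the sketches above), the hand-written conversion compares to the old one-liner roughly like this:

    use url::Url;

    // Hypothetical internal type used by the Rust core.
    struct InternalBookmark {
        guid: String,
        url: Url,
        title: Option<String>,
    }

    // Protobuf: the conversion into the generated struct is written by hand.
    fn to_protobuf(b: &InternalBookmark) -> msg_types::Bookmark {
        msg_types::Bookmark {
            guid: b.guid.clone(),
            url: b.url.to_string(), // url::Url -> String
            title: b.title.clone(),
        }
    }

    // serde_json days: any type implementing serde::Serialize was one line away:
    // let json = serde_json::to_string(&bookmark)?;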

One thing I ought to mention: Since we ship both the producer and consumer of these binary streams as a unit, we have the freedom to change our data exchange format transparently without affecting our Android and iOS consumers at all.

A look ahead

Looking forward, there’s probably a high-level system that could be used to exchange data over the FFI, maybe based on Rust macros. There’s also been talk about using FlatBuffers to squeeze out even more performance. In our case Protobufs provided the right trade-off between ease-of-use, performance, and relative safety.

So far, our components are present on iOS, in both Firefox and Lockbox, and on Android in Lockbox, and they will soon be in our upcoming Android browser.

Firefox for iOS has started to oxidize by replacing its password syncing engine with the one we built in Rust. The plan is to eventually do the same on Firefox Desktop as well.

If you are interested in helping us build the future of Firefox Sync and more, or simply following our progress, head to the application services Github repository.

The post Crossing the Rust FFI frontier with Protocol Buffers appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Stop videos from automatically playing with new autoplay controls from Firefox

Mozilla planet - di, 02/04/2019 - 15:30

The web is 30 years old. Over its lifetime we’ve had developments that have brought us to peaks of delight and others to the pits of frustration. The blink tag, … Read more

The post Stop videos from automatically playing with new autoplay controls from Firefox appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mike Hoye: Occasionally Useful

Mozilla planet - di, 02/04/2019 - 15:30

A bit of self-promotion: the UsesThis site asked me their four questions a little while ago; it went up today.

A colleague once described me as “occasionally useful, in the same way that an occasional table is a table.” Which I thought was oddly nice of them.

Categorieën: Mozilla-nl planet

Mozilla tests measures against permission spam in Firefox - Security.nl

News collected via Google - di, 02/04/2019 - 11:57

Every month, websites ask Firefox users for permission to send push notifications millions of times, but fewer than 3 percent of these ...

Categorieën: Mozilla-nl planet

Mozilla Localization (L10N): Changing the Language of Firefox Directly From the Browser

Mozilla planet - di, 02/04/2019 - 08:33
Language Settings

In Firefox there are two main user facing settings related to languages:

  • Web content: when you visit a web page, the browser will communicate to the server which languages you’d like to see content in. Technically, this is done by sending an Accept-Language HTTP header, which contains a list of locale codes in the user’s preferred order (a sample header value is shown after this list).
  • User interface: the language in which you want to see the browser (menus, preferences, etc.).
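
For example, a browser set up for Italian with English as a fallback might send something like:

    Accept-Language: it-IT,it;q=0.8,en-US;q=0.5,en;q=0.3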

The difference between the two is not as intuitive as it might seem. A lot of users change the web content settings, and expect the user interface to change.

The Legacy Chaos

While the preferences for web content have been exposed in Firefox since its first versions, changing the language used for the user interface has always been challenging. Here’s the painful journey users had to face until a few months ago:

  • First of all, you need to be aware that there are other languages available. There are no preferences exposing this information, and documentation is spread across different websites.
  • You need to find and install a language pack – a special type of add-on – from addons.mozilla.org.
  • If the language used in the operating system is different from the one you’re trying to install in Firefox, you need to create a new preference in about:config and set it to the correct locale code. Before Firefox 59 and intl.locale.requested, you would need to manually set the general.useragent.locale pref in any case, not just when there’s a discrepancy (example values are shown after this list).
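
For example, to request a French user interface you would end up with something like the following in about:config (the exact preference depends on the Firefox version):

    intl.locale.requested = fr          (Firefox 59 and later)
    general.useragent.locale = fr       (before Firefox 59)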

The alternative is to install a build of Firefox already localized in your preferred language. But, once again, these builds are not easy to find. Imagine you work in a corporate environment that provides you with an operating system in English (en-US). You search the Web for “Firefox download” and automatically end up on this download page, which doesn’t provide information on the language you’re about to download. You need to pay close attention to notice the link to other languages.

If you already installed Firefox in the wrong language, you need to uninstall it, find the installer in the correct language, and reinstall it. As part of the uninstall process on Windows, we ask users to volunteer for a quick survey to explain why they’re uninstalling the browser, and the number of them saying something along the lines of “wrong language” or “need to reinstall in the right language” is staggering, especially considering this is an optional comment within an optional survey. It’s a clear signal, if one were ever needed, that things need to improve.

Everything Changes with Firefox 65

Fast forward to Firefox 65:

We introduced a new Language section. Directly from the General pane it’s now possible to switch between languages already available in Firefox, removing the need for manually setting preferences in about:config. What if the language is not available? Then you can simply Search for more languages… from the dropdown menu.

Add your preferred language to the list (French in this case).

And restart the browser to have the interface localized in French. Notice how the message is displayed in both languages, providing another hint to the user that they selected the right language.

If you’re curious, you can see a diagram of the complete user interaction here.

A lot happens under the hood for this brief interaction:

  • When the user asks for more languages, Firefox connects to addons.mozilla.org via API to retrieve the list of languages available for the version in use.
  • When the user adds the language, Firefox downloads and installs the language pack for the associated locale code.
  • To improve the user experience, it also downloads the dictionaries associated with the requested language, if they are available.
  • When the browser is restarted, the new locale code is set as first in the intl.locale.requested preference.

This feature is enabled by default in Beta and Release versions of Firefox. Language packs are not reliable on Nightly, given that strings change frequently, and a language still considered compatible but incomplete could lead to a completely broken browser (a condition known as the “yellow screen of death”, where the XUL breaks due to missing DTD entities).

What’s Next

First of all, there are still a few bugs to fix (you can find a list in the dependencies of the tracking bug). Sadly, there are still several places in the code that assume the language will never change, and cache the translated content too aggressively.

The path forward has yet to be defined. Completing the migration to Fluent would drastically improve the user experience:

  • Language switching would be completely restartless.
  • Firefox would support a list of fallback locales. With the old technology, if a translation is missing it can only fall back to English. With Fluent, we can set a fallback chain, for example Ligurian->Italian->English.

There are also areas of the browser that are not covered by language packs, and would need to be rewritten (e.g. the profile manager). For these, the language used always remains the one packaged in the build.

The user experience could also be improved. For example, can we make selecting the language part of a multiplatform onboarding experience? There are a lot of hints that we could take from the host operating system to prompt the user about installing and selecting a different language. What if the user browses a lot of German content but uses the browser in English? Should we suggest that they switch languages?

With Firefox 66 we’re also starting to collect Telemetry data about internationalization settings – take a look at the table at the bottom of about:support – to understand more about our users, and how these changes in Preferences impact language distribution and possibly retention.

For once, Firefox is well ahead of the competition. For example, Chrome only allows changing the interface language on Windows and Chromebooks, while Safari simply uses the language of your macOS.

This is the result of an intense cross-team effort. Special thanks to Zibi Braniecki for the initial idea and push, and all the work done under the hood to improve Firefox internationalization infrastructure, Emanuela Damiani for UX, and Mark Striemer for the implementation.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 280

Mozilla planet - di, 02/04/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is sonic, a fast, lightweight & schema-less search backend. Thanks to Vikrant for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

251 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Asia Pacific

Europe

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Thanks for walking through the process.

Quite the mental exercise, some people do Sudoku, others solve borrow puzzles!

Gambhiro on rust-users

Thanks to Tom Phinney for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

Categorieën: Mozilla-nl planet
