Mozilla Nederland
The Dutch Mozilla community

Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research

Mozilla Blog - Thu, 23/07/2020 - 17:58

Today, Mozilla is pleased to announce that we’re joining the Ford Foundation, the Sloan Foundation, and the Open Society Foundations to launch a request for proposals (RFP) for research on open source digital infrastructure. To kick off this RFP, we’re joining with our philanthropic partners to host a webinar today at 9:30 AM Pacific. The Mozilla Open Source Support Program (MOSS) is contributing $25,000 to this effort.

Nearly everything in our modern society, from hospitals and banks to universities and social media platforms, runs on “digital infrastructure” – a foundation of open source code that is designed to solve common challenges. The benefits of digital infrastructure are numerous: it can reduce the cost of setting up new businesses, support data-driven discovery across research disciplines, enable complex technologies such as smartphones to talk to each other, and allow everyone to have access to important innovations like encryption that would otherwise be too expensive.

In joining with these partners for this funding effort, Mozilla hopes to propel further investigation into the sustainability of open source digital infrastructure. Selected researchers will help determine the role companies and other private institutions should play in maintaining a stable ecosystem of open source technology, the policy and regulatory considerations for the long-term sustainability of digital infrastructure, and much more. These aims align with Mozilla’s pledge for a healthy internet, and we’re confident that these projects will go a long way towards deepening a crucial collective understanding of the industrial maintenance of digital infrastructure.

We’re pleased to invite interested researchers to apply to the RFP, using the application found here. The application opened on July 20, 2020, and will close on September 4, 2020. Finalists will be notified in October, at which point full proposals will be requested. Final proposals will be selected in November.

More information about the RFP is available here.

The post Mozilla Joins New Partners to Fund Open Source Digital Infrastructure Research appeared first on The Mozilla Blog.


A look at password security, Part III: More secure login protocols

Mozilla Blog - Tue, 21/07/2020 - 02:06

In part II, we looked at the problem of Web authentication and covered the twin problems of phishing and password database compromise. In this post, I’ll be covering some of the technologies that have been developed to address these issues.

This is mostly a story of failure, though with a sort of hopeful note at the end. The ironic thing here is that we’ve known for decades how to build authentication technologies which are much more secure than the kind of passwords we use on the Web. In fact, we use one of these technologies — public key authentication via digital certificates — to authenticate the server side of every HTTPS transaction before you send your password over. HTTPS supports certificate-based client authentication as well, and while public key authentication is commonly used in other settings, such as SSH, it’s rarely used on the Web. Even if we restrict ourselves to passwords, we have long had technologies for password authentication which completely resist phishing, but they are not integrated into the Web technology stack at all. The problem, unfortunately, is less about cryptography than about deployability, as we’ll see below.

Two Factor Authentication and One-Time Passwords

The most widely deployed technology for improving password security goes by the name one-time passwords (OTP) or (more recently) two-factor authentication (2FA). OTP actually goes back to well before the widespread use of encrypted communications, or even the Web, to the days when people would log in to servers in the clear using Telnet. It was of course well known that Telnet was insecure and that anyone who shared the network with you could just sniff your password off the wire[1] and then log in with it. [Technical note: this is called a replay attack.] One partial fix for this attack was to supplement the user password with another secret which wasn’t static but rather changed every time you logged in (hence a “one-time” password).

OTP systems came in a variety of forms but the most common was a token about the size of a car key fob but with an LCD display, like this:

The token would produce a new pseudorandom numeric code every 30 seconds or so and when you went to log in to the server you would provide both your password and the current code. That way, even if the attacker got the code they still couldn’t log in as you for more than a brief period[2] unless they also stole your token. If all of this looks familiar, it’s because this is more or less the same as modern OTP systems such as Google Authenticator, except that instead of a hardware token, these systems tend to use an app on your phone and have you log into some Web form rather than over Telnet. The reason this is called “two-factor authentication” is that authenticating requires both a value you know (the password) and something you have (the device). Some other systems use a code that is sent over SMS but the basic idea is the same.
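To make this concrete, here is a minimal sketch of how such a time-based code is typically generated, following the scheme standardized in RFC 6238 (the same one footnote 3 below alludes to); the secret shown is purely illustrative, and real authenticator apps receive theirs from the server during enrollment, usually via a QR code:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238, built on HOTP from RFC 4226)."""
    counter = int(time.time()) // period            # changes every `period` seconds
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The token (or phone app) and the server independently derive the same
# short-lived code from the shared secret and the current time.
print(totp(b"illustrative-shared-secret"))
```

Because the code depends on the current time window, seeing one code tells an attacker essentially nothing about the next one.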

OTP systems don’t provide perfect security, but they do significantly improve the security of a password-only system in two respects:

  1. They guarantee a strong, non-reused secret. Even if you reuse passwords and your password on site A is compromised, the attacker still won’t have the right code for site B.[3]
  2. They mitigate the effect of phishing. If you are successfully phished the attacker will get the current code for the site and can log in as you, but they won’t be able to log in in the future because knowing the current code doesn’t let you predict a future code. This isn’t great but it’s better than nothing.

The nice thing about a 2FA system is that it’s comparatively easy to deploy: it’s a phone app you download plus another code that the site prompts you for. As a result, phone-based 2FA systems are very popular (and if that’s all you have, I advise you to use it, but really you want WebAuthn, which I’ll be describing in my next post).

Password Authenticated Key Agreement

One of the nice properties of 2FA systems is that they do not require modifying the client at all, which is obviously convenient for deployment. That way you don’t care if users are running Firefox or Safari or Chrome, you just tell them to get the second factor app and you’re good to go. However, if you can modify the client you can protect your password rather than just limiting the impact of having it stolen. The technology to do this is called a Password Authenticated Key Agreement (PAKE) protocol.

The way a PAKE would work on the Web is that it would be integrated into the TLS connection that already secures your data on its way to the Web server. On the client side when you enter your password the browser feeds it into TLS and on the other side, the server feeds in a verifier (effectively a password hash). If the password matches the verifier, then the connection succeeds, otherwise it fails. PAKEs aren’t easy to design — the tricky part is ensuring that the attacker has to reconnect to the server for each guess at the password — but it’s a reasonably well understood problem at this point and there are several PAKEs which can be integrated with TLS.

What a PAKE gets you is security against phishing: even if you connect to the wrong server, it doesn’t learn anything about your password that it doesn’t already know because you just get a cryptographic failure. PAKEs don’t help against password file compromise because the server still has to store the verifier, so the attacker can perform a password cracking attack on the verifier just as they would on the password hash. But phishing is a big deal, so why doesn’t everyone use PAKEs? The answer here seems to be surprisingly mundane but also critically important: user interface.

The way that most Web sites authenticate is by showing you a Web page with a field where you can enter your password, as shown below:

Firefox accounts login box

When you click the “Sign In” button, your password gets sent to the server which checks it against the hash as described in part I. The browser doesn’t have to do anything special here (though often the password field will be specially labelled so that the browser can automatically mask out your password when you type); it just sends the contents of the field to the server.
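For reference, the server-side check described in part I boils down to something like the following sketch; the use of PBKDF2, the iteration count, and the way the salt is stored are illustrative assumptions rather than a description of any particular site:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; real deployments tune this

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash that can be stored instead of the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Re-derive the hash from the submitted password and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, stored)
assert not check_password("hunter2", salt, stored)
```

Note that nothing in this flow stops you from typing the password into a look-alike site, which is exactly the phishing problem the rest of this post is about.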

In order to use a PAKE, you would need to replace this with a mechanism where you gave the browser your password directly. Browsers actually have something for this, dating back to the earliest days of the Web. On Firefox it looks like this:

Basic auth login

Hideous, right? And I haven’t even mentioned the part where it’s a modal dialog that takes over your experience. In principle, of course, this might be fixable, but it would take a lot of work and would still leave the site with a lot less control over their login experience than they have now; understandably they’re not that excited about that. Additionally, while a PAKE is secure from phishing if you use it, it’s not secure if you don’t, and nothing stops the phishing site from skipping the PAKE step and just giving you an ordinary login page, hoping you’ll type in your password as usual.

None of this is to say that PAKEs aren’t cool tech, and they make a lot of sense in systems that have less flexible authentication experiences; for instance, your email client probably already requires you to enter your authentication credentials into a dialog box, and so that could use a PAKE. They’re also useful for things like device pairing or account access where you want to start with a small secret and bootstrap into a secure connection. Apple is known to use SRP, a particular PAKE, for exactly this reason. But because the Web already offers a flexible experience, it’s hard to ask sites to take a step backwards and PAKEs have never really taken off for the Web.

Public Key Authentication

From a security perspective, the strongest thing would be to have the user authenticate with a public/private key pair, just like the Web server does. As I said above, this is a feature of TLS that browsers actually have supported (sort of) for a really long time but the user experience is even more appalling than for built-in passwords.[4] In principle, some of these technical issues could have been fixed, but even if the interface had been better, many sites would probably still have wanted to control the experience themselves. In any case, public key authentication saw very little usage.

It’s worth mentioning that public key authentication actually is reasonably common in dedicated applications, especially in software development settings. For instance, the popular SSH remote login tool (replacing the unencrypted Telnet) is commonly used with public key authentication. In the consumer setting, Apple AirDrop uses iCloud-issued certificates with TLS to authenticate your contacts.
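To illustrate the core idea that all of these systems share, here is a toy challenge-response sketch using an Ed25519 key pair; it is not SSH, TLS, or WebAuthn, just the underlying principle, and it assumes the third-party cryptography package is installed:

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the client generates a key pair and registers the public half.
client_key = Ed25519PrivateKey.generate()
registered_public_key = client_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...the client signs it with the private key, which never leaves the device...
signature = client_key.sign(challenge)

# ...and the server checks the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Because the server only ever stores a public key and each challenge is fresh, a phished response is useless later and a stolen server database contains nothing worth cracking.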

Up Next: FIDO/WebAuthn

This was the situation for about 20 years: in theory public key authentication was great, but in practice it was nearly unusable on the Web. Everyone used passwords, some with 2FA and some without, and nobody was really happy. There had been a few attempts to try to fix things but nothing really stuck. However, in the past few years a new technology called WebAuthn has been developed. At heart, WebAuthn is just public key authentication but it’s integrated into the Web in a novel way which seems to be a lot more deployable than what has come before. I’ll be covering WebAuthn in the next post.

  1. And by “wire” I mean a literal wire, though such sniffing attacks are prevalent in wireless networks such as those protected by WPA2.
  2. Note that to really make this work well, you also need to require a new code in order to change your password, otherwise the attacker can change your password for you in that window. 
  3. Interestingly, OTP systems are still subject to server-side compromise attacks. The way that most of the common systems work is to have a per-user secret which is then used to generate a series of codes, e.g., truncated HMAC(Secret, time) (see RFC6238). If an attacker compromises the secret, then they can generate the codes themselves. One might ask whether it’s possible to design a system which didn’t store a secret on the server but rather some public verifier (e.g., a public key) but this does not appear to be secure if you also want to have short (e.g., six digits) codes. The reason is that if the information that is used to verify is public, the attacker can just iterate through every possible 6 digit code and try to verify it themselves. This is easily possible during the 30 second or so lifetime of the codes. Thanks to Dan Boneh for this insight. 
  4. The details are kind of complicated here, but here are just some of the problems: (1) TLS client authentication is mostly tied to certificates, and the process of getting a certificate into the browser was just terrible; (2) the certificate selection interface is clunky; (3) until TLS 1.3, the certificate was actually sent in the clear unless you did TLS renegotiation, which had its own problems, particularly around privacy.

Update: 2020-07-21: Fixed up a sentence.

The post A look at password security, Part III: More secure login protocols appeared first on The Mozilla Blog.


Mozilla Reps Community: Mozilla Reps at the VirtuAllHands 2020

Mozilla planet - Fri, 17/07/2020 - 12:17

This year’s Virtual All Hands (aka VirtuAllHands) was different from any other.

Some things were also familiar: plenaries, plenty of interesting conversations, new things to learn, and yes, even a bit of exhaustion. It was even possible to meet and chat with other people, albeit via your avatar in Mozilla Hubs! All in all, as one Mozilla Rep, a veteran of numerous All Hands, assured us: although virtual, “it really feels like a real All Hands”.

During the VirtuAllHands, the Mozilla Reps program organized four meetings, each led by a Reps Council member. These meetings focused on reviewing the history of, and future challenges for, three central issues: communication, activities & campaigns, and mentorship. The meetings uncovered many challenges, but also successes and progress.

Two meetings focused on communication: the first led by Felipe and the second by Tim. Felipe and Tim spoke about the Reps Council’s existing work to improve communication and then led a discussion on the main communication challenges within the Reps program. The importance of clarity about communication tools and about where information lives emerged, as well as a need to centralize and summarize information. The Reps also underlined the importance of having clear mechanisms through which information and communications can flow between the organization, the Reps, and local communities.

Our third meeting was led by Shina and Shahbaz and dealt with activities & campaigns. Shina led us through a presentation that Reps can use to guide community members through the available activities and campaigns, then opened the floor for discussion. The conversation about community engagement in activities and campaigns shone some light on the issues that Reps often face: the importance of having locally relevant campaigns, the challenge of keeping contributors engaged after a campaign has ended, the need to communicate campaigns more widely, and the difficulties of setting up events.

The last meeting was led by Faisal and focused on mentorship within the Reps program. Faisal presented a brief history of the mentorship program and led a discussion of its issues. Again the need for clarity emerged, as the Reps discussed how mentors’ roles and activities should be better defined and, in some areas, redefined.

Thanks to you all for taking the time to participate in, lead, and organize these meetings. All the feedback collected during these discussions will be crucial for focusing our work going forward!

On behalf of the community development team:

Francesca and Konstantina


Mozilla Performance Blog: Improving Firefox Startup Time With The about:home Startup Cache

Mozilla planet - Fri, 17/07/2020 - 02:54
Don’t bury the lede

We’re working on a thing to make Firefox start faster! It appears to work! Here’s a video showing off a before (left) and after (right):


For the past year or so, the Firefox Desktop Front-End Performance team has been concentrating on making improvements to browser startup performance.

The launching of an application like Firefox is quite complex. Meticulous profiling of Firefox startup in various conditions has, thankfully, helped reveal a number of opportunities where we can make improvements. We’ve been evaluating and addressing these opportunities, and several have made it into the past few Firefox releases.

This blog post is about one of those improvements that is currently in the later stages of development. I’m going to describe the improvement, and how we went about integrating it.

In a default installation of Firefox, the first (and only) tab that loads is about:home. (Note: this is only true if the user hasn’t just restarted after applying an update, and if they haven’t set a custom home page or configured Firefox to restore their previous session on start.)

The about:home page in an instance of Firefox. There is a series of Top Sites listed, including Facebook, YouTube and Reddit. There are three Pocket stories also listed.

The about:home page is actually the same thing that appears when you open a new tab (about:newtab). The fact that they have different addresses allows us to treat their loading differently.

Your about:home might look slightly different from the above — depending on your locale, it may or may not include the Pocket stories.

Do not be fooled by what appears to be a very simple page of images and text. This page is actually quite sophisticated under the hood. It is designed to be customized by the user in the following ways:

Users can

  • Collapse or expand sections
  • Remove sections entirely
  • Reorganize the order of their Top Sites by dragging and dropping
  • Pin and unpin Top Sites to their positions
  • Add their own custom Top Sites with custom thumbnails
  • Add or remove search engines from their Top Sites
  • Change the number of rows in the Top Sites and Recommended by Pocket sections
  • Choose to have the Highlights composed of any of the following:
    • Visited pages
    • Recent bookmarks
    • Recent downloads
    • Pages recently saved to Pocket

The user can customize these things at any time, and any open copies of the page are expected to reflect those customizations immediately.

There are further complexities beyond user customization. The page is also designed to be easy for our design and engineering teams to experiment with reorganizing the layout and composition of the page so that they can test variations on its layout in the wild.

The about:home page also has special privileges not afforded to normal websites. It can

  • Save and remove bookmarks
  • Add pages to Pocket
  • Cause the URL bar to be focused and selected
  • Show thumbnails for pages that the user has visited
  • Access both high and normal resolution favicons
  • Render information about the user’s recent activity (recent page visits, downloads, saves to Pocket, etc.)

So while at first glance, this appears to be a static page of just images and text, rest assured that the page can do much more.

Like the Firefox Developer Tools UI, about:home is written with the help of the React and Redux libraries. This has allowed the about:home development team to create sophisticated, reusable, and composable components that could be easily tested using modern JavaScript testing methods.

Unsurprisingly, this complexity and customizability comes at a cost. The page needs to request a state object from the parent process in order to have the Redux store populated and to have React render it. Essentially, the page is dynamically rendering itself after the markup of the page loads.

Startup is a critical time for an application. The user has expressed a need for their browser, and we have an obligation to serve the user as quickly and efficiently as possible. The user’s time is a resource that we should not squander. Similarly, because so much needs to occur during startup, disk reads, disk writes, and CPU time are also considered precious resources. They should only be used if there’s no other choice.

In this case, we believed that the CPU time and disk accesses spent constructing the state object and dynamically rendering the about:home page was competing with all of the other CPU and disk access happening during startup, and this was slowing us down from presenting about:home to the user in a timely way.

Generally speaking, in my mind there are four broad approaches to performance problems once a bottleneck has been identified.

  • You can widen the bottleneck (make the operations more efficient)
  • You can divide the bottleneck (split the work into smaller slices that can be done over a longer period of time with rests in between)
  • You can move the bottleneck (defer work until later when it seems that there is less competition for resources, or move it to a different thread)
  • You can remove the bottleneck (don’t do the work)

We started by trying to apply the last two approaches, wondering what startup performance would be like if the page did not render itself dynamically, but was instead a static page generated periodically and pulled off of the disk at startup.

Prototype when possible

The first step to improving something is finding a way to measure it. Thankfully, we already have a number of logged measurements for startup. One of those measurements gives us the time from process start to rendering the Top Sites section of about:home. This is not a perfect measurement—ideally, we’d measure to the point that the page finally “settles” and stops changing—but for this project, this measurement served our purposes.

Before investing a bunch of time into a potential improvement, it’s usually a good idea to try to see if what you’re gaining is worth the development time. It’s not always possible to build a prototype for performance improvements, but in this case it was.

The team quickly threw together a static copy of about:home and hacked together a patch to load that document during startup, rather than dynamically rendering the page. We then tested that page on our reference hardware. As of this writing, it’s been about five months since that test was done, but according to this comment, the prototype yielded what appears to be an almost 20% win on time from process start to about:home painting Top Sites.

So, with that information, we thought we had a real improvement opportunity here. We decided to proceed with the idea, and began a long arduous search for “the right way to do it.”

Pre-production

As I mentioned earlier, about:home is complex. The infrastructure that powers it is complex, and no one on the Firefox Front-End Performance team had spent much time studying React and Redux, which meant that we had a lot of learning to do.

The first step was to get some React and Redux fundamentals under our belt. This meant building some small toy applications and getting familiar with the framework idioms and how things are organized.

With that grounding, the next step was to start reading the code — starting from the entrypoint into the code that powers about:home when the browser starts. This was an intense period of study that branched into many different directions. Part of the complexity was that much of the code is asynchronous and launches work on different threads, which introduces some non-determinism. While it is generally good for responsiveness to move work off of the main thread, it can lead to some complex reading and interpretation of the code when more than two threads are involved.

A tool we used during this analysis was the Firefox Profiler, to get a realistic sense of the order of executions during startup. These profiles helped to inform much of our reading of the code.

This analysis helped us solidify our mental model of how about:home loads. With that model in place, it was much easier to propose practical approaches for introducing a static about:home document into the ecosystem of pre-existing code. The Firefox Front-End Performance team documented our findings and recommendations and then presented them to the team that originally built the about:home system to ensure that we were all on the same page and that we hadn’t missed anything critical. They were already aware that we were investigating potential performance improvements, and had very useful feedback for us, as well as historical product decision context that clarified our understanding.

Critically, we presented our recommendation for loading a static about:home page at startup and ensured that there were no upcoming plans for about:home that would break our mental model or render the recommendation no longer valid. Thankfully, it sounded like we were aligned and fine to proceed with our plan.

So what was the plan? We knew that since about:home is quite dynamic and can change over time (Remember: as the user browses, bookmarks and downloads things, their Highlights and Top Sites sections might change—also, if Pocket is enabled, new stories will also be downloaded periodically) we needed a startup cache for about:home that could be periodically updated during the course of a browsing session. We would then load from that cache at startup. Clearly, I’m glossing over some details here, but that was the general plan.

As usual, no plan survives breakfast, and as we started to architect our solution, we identified things we would need to change along the way.

Development

We knew that the process that loads about:home would need to be able to read from the about:home startup cache. We also knew that about:home can potentially contain information about what pages the user has visited, and that about:home can do privileged things that normal web pages cannot. It seemed that this project would be a good opportunity to finish a project that was started (and mothballed) a year or so earlier: creating a special privileged content process for about:home. We would load about:home in that process, and add assertions to ensure that privileged actions from about:home could only happen from that content process type.

On that last point, I want to emphasize that it’s vitally important that content processes have limited abilities. That way, if they’re ever compromised by a bad actor, there are limits to what damage they can do. The assertions mentioned in this case mean that if a compromised content process tries to “pretend” to be the privileged about content process by sending one of its messages, that the parent process will terminate that content process immediately.

So getting the “privileged about content process” fixed up and ready for shipping was the first step.

This also paved the way for solving the next step, which was to enable the moz-page-thumb:// protocol for the “privileged about content process.” The moz-page-thumb:// protocol is used to show the screenshot thumbnails for pages that the user has visited in the past. The previous implementation was using Blob URLs to send those thumbnails down to the page, and those Blob URLs exist only during runtime and would not work properly after a restart.

The next step was figuring out how to build the document that would be stored in the cache. Thankfully, ReactDOMServer has the ability to render a React application to a string. This is normally used for server-side rendering of React-powered applications. This feature also allows the React library to passively attach to the server-side page without causing the DOM to be modified. With some small modifications, we were able to build a simple mechanism in a Web Worker to produce this cached document string off of the main thread. Keeping this work off of the main thread would help maintain responsiveness.

With those pieces of foundational work out of the way, it was time to figure out the cache storage mechanism. Firefox already has a startupcache module that it uses for static resources like markup and JavaScript, but that cache is not designed to be written to periodically at runtime. We would need something different.

We had originally supposed that we would need to give the privileged about content process special access to a file on the filesystem to read from and to write to (since our sandbox prevents content processes from accessing disks directly). Initial experiments along this line worried us — we didn’t like the idea of poking holes in the sandbox if we didn’t need to. Also, adding yet another read from the filesystem during startup seemed counter to our purposes.

We evaluated IndexedDB as a storage mechanism, but the DOM team talked us out of it. The performance characteristics of IndexedDB, especially during startup, were unlikely to work for us.

Finally, after some consulting, we were directed to the HTTP cache. The HTTP cache’s job is to cache pages that the user visits (when appropriate) and to offer those caches to the user instead of hitting the network when retrieving the resource within the expiration time. Functionally speaking, this seemed like a storage mechanism perfectly suited to our purposes.

After consulting with the Necko team and building a few proof-of-concepts, we figured out how to tie the whole system together. Importantly, we figured out how to get the initial about:home load to pull a document out from the HTTP cache rather than reading it from the application resource package.

We also figured out the cache writing mechanism. The cached document would periodically get built inside of the privileged about content process, inside of a Worker off of the main thread, and would then be sent back up to the parent to stream into the cache.

At this point, we felt we had all of the pieces that we needed. Construction on each component began.

Construction was remarkably smooth thanks to our initial research and consulting with the relevant teams. We also took the opportunity to carefully document each component.

Testing

One of the more gratifying parts of implementation was when we modified one of our startup tests to use the new caching mechanism.

A graph comparing the startup test results with and without the about:home startup cache. The score is about 20% better with the cache enabled.

In this graph, the Y axis is the geometric mean time to render the about:home Top Sites over 20 restarts of the browser, in milliseconds. Lower is better. The dots along the top are without the cache. The dots along the bottom are with the cache enabled. According to our measurements, we improved the rendering time from process start to Top Sites by just over 20%! We beat our prototype!

Noticeable differences

But the real proof will be if there’s actually a noticeable visual change. Here’s that screen recording again from one of our reference devices—an Acer Aspire E-15 E5-575-33BM.

The screen on the left is with the cache disabled, and on the right with the cache enabled. Looks to me like we made a noticeable dent!

Try it out!

We haven’t yet enabled the about:home startup cache in Nightly by default, but we hope to do so soon. In the meantime, Nightly users can try it out right now by going to about:preferences#experimental and toggling it on. If you find problems and have a Bugzilla account, here’s a form for submitting bugs to the right place.

You can tell if the about:home you’re looking at is from the cache by opening up the DevTools Inspector and looking for a <!-- Cached: <some date> --> comment just above the <body> tag.

The DevTools Inspector showing the markup of the about:home page. A comment above the <body> tag indicates that this page was loaded from the cache.

Caveat emptor

There are a few cases where the cache isn’t used or is invalidated.

The first case is if you’ve configured something other than about:home as your home page (where the cache isn’t used). In this case, the cache won’t be read from, and the code to create the cache won’t ever run. If the user ever resets about:home to be their home page, then the caching code will start working for them.

The second case is if you’ve configured Firefox to restore your previous session by default. In this case, it’s unlikely that the first tab you’ll see is about:home, so the cache won’t be read from, and the code to create the cache won’t ever run. As before, if the user switches to not loading their previous session by default, then the cache will start working for them.

Another case is when the Firefox build identifier doesn’t match the build identifier from when the cache was created. This is also how the other startupcache module for static resources works. This ensures that when an update is applied, we don’t accidentally load old assets from the cache. So the first time you launch Firefox after you apply an update will not pull the about:home document from the cache, even if one exists (and will throw the cache out if it does). For Nightly users that generally receive updated builds twice a day, this makes the cache somewhat useless. Beta and Release users update much less frequently, so we expect to see a greater impact there.

The last case is in the event that your disk was in a situation such that reading the dynamic code from the asset bundle was faster than reading from the cache. If the cache isn’t ready by the time the about:home document attempts to load, we fall back to loading it the old way. We don’t expect this to happen too often, but it’s theoretically possible, so we handle the case.

Future work

The next major step is to get the about:home startup cache turned on by default on Nightly and get it tested by a broader audience. At that point, hopefully we’ll get a better sense of its behaviour out in the wild via bug reports and Telemetry. Then our improvement will either ride the release train, or we might turn it on for subsets of the Beta or Release populations to measure its impact on more realistic restart scenarios. Once we’re confident that it’s helping more than hindering, we’ll turn it on by default for everyone.

After that, I think it would be worth seeing if we can load from the cache more often. Perhaps we could load about:newtab from there as well, for example.

One step at a time!

Thanks to
  • Florian Quèze and Doug Thayer, both of whom independently approached me with the idea of creating a static about:home
  • Jay Lim and Ursula Sarracini, both of whom wrote some of the groundwork code that was needed for this feature (namely, the infrastructure for the privileged about content process, and the first version of the moz-page-thumbs support)
  • Gijs Kruitbosch, Kate Hudson, Ed Lee, Scott Downe, and Gavin Suntop for all of the consulting and reviews
  • Honza Bambas and Andrew Sutherland for storage consultations
  • Markus Stange, Andrew Creskey, Eric Smyth, Asif Youssuff, Dan Mosedale, Emily Derr for their generous feedback on this post

Mozilla Thunderbird: What’s New in Thunderbird 78

Mozilla planet - Fri, 17/07/2020 - 00:49

Thunderbird 78 is our newest ESR (extended-support release), which comes out yearly and is considered the latest stable release. Right now you can download the newest version from our website, and existing users will be automatically updated in the near future. We encourage those who rely on the popular add-on Enigmail to wait to update until the automatic update rolls out to them to ensure their encrypted email settings are properly imported into Thunderbird’s new built-in OpenPGP encrypted email feature.

Last year’s release focused on ensuring Thunderbird has a stable foundation on which to build. The new Thunderbird 78 aims to improve the experience of using Thunderbird, adding many quality-of-life features to the application and making it easier to use.

Compose Window Redesign

Compose Window Comparison, 68 and 78

The compose window has been reworked to help users find features more easily and to make composing a message faster and more straightforward. The compose window now also takes up less space with recipients listed in “pills” instead of an entire line for every address.

Dark Mode

Dark Mode

Thunderbird’s new Dark Mode is easier on the eyes for those working in the dark, and it has the added benefit of looking really cool! The Dark Mode even works when writing and reading emails – so you are not suddenly blinded while you work. Thunderbird will look at your operating system settings to see if you have enabled dark mode OS-wide and respect those settings. Here are the instructions for setting dark mode in Mac, and setting dark mode in Windows.

Calendar and Tasks Integrated

Thunderbird’s Lightning calendar and tasks add-on is now a part of the application itself, which means everyone now has access to these features the moment they install Thunderbird. This change also sets the stage for a number of future improvements the Thunderbird team will make in the calendar. Much of this will be focused on improved interoperability with the mail part of Thunderbird, as well as improving the user experience of the calendar.

Account Setup & Account Central Updated

Account Setup and Account Central Updated, comparison between 68 and 78

The Account Setup window and the Account Central tab, which appears when you do not have an account setup or when you select an existing account in the folder tree, have both been updated. The layout and dialogues have been improved in order to make it easier to understand the information displayed and to find relevant settings. The Account Central tab also has new information about the Thunderbird project and displays the version you are using.

Folder Icons and Colors Update

New Folder Icons and Colors for Thunderbird 78

Folder icons have been replaced and modernized with a new vector style. This will ensure better compatibility with HiDPI monitors and dark mode. Vector icons also mean you will be able to customize their default colors to better distinguish and categorize your folder list.

Minimize to Tray

Windows users have reason to rejoice, as Thunderbird 78 can now be minimized to tray. This has been a repeatedly requested feature that has been available through many popular add-ons, but it is now part of Thunderbird core – no add-on needed! This feature has been a long time coming and we hope to bring more operating-system specific features for each platform to Thunderbird in the coming releases.

End-to-End Encrypted Email Support

New end-to-end encryption preferences tab.

Thunderbird 78.2, due out in the coming months, will offer a new feature that allows you to end-to-end encrypt your email messages via OpenPGP. In the past this feature was achieved in Thunderbird primarily with the Enigmail add-on, however, in this release we have brought this functionality into core Thunderbird. We’d like to offer a special thanks to Patrick Brunschwig for his years of work on Enigmail, which laid the groundwork for this integrated feature, and for his assistance throughout its development. The new feature is also enabled by the RNP library, and we’d like to thank the project’s developers for their close collaboration and hard work addressing our needs.

End-to-end encryption for email can be used to ensure that only the sender and the recipients of a message can read the contents. Without this protection it is easy for network administrators, email providers and government agencies to read your messages. If you would like to learn more about how end-to-end encryption in Thunderbird works, check out our article on Introduction to End-to-end encryption in Thunderbird. If you would like to learn more about the development of this feature or participate in testing, check out the OpenPGP Thunderbird wiki page.

About Add-ons

As with previous major releases, it may take time for authors of legacy extensions to update their add-ons to support the new release. So if you are using add-ons we recommend you not update manually to 78.0, and instead wait for Thunderbird to automatically update to 78. We encourage users to reach out to their add-on’s author to let them know that you are interested in using it in 78.

Learn More

If we listed all the improvements in Thunderbird 78 in this blog post, you’d be stuck reading this for the whole day. So we will save you from that, and let you know that if you want to see a longer list of changes for the new release – check the release notes on our website.

Great Release, Bright Future

The past year has been an amazing year for Thunderbird. We had an incredible release in version 68 that was popular with our users, and laid the groundwork for much of what we did in 78. On top of great improvements in the product, we moved into a new financial and legal home, and we grew our team to thirteen people (soon to be even more)!

We’re so grateful to all our users and contributors who have stuck with us all these years, and we hope to earn your dedication for the years to come. Thunderbird 78 is the beginning of a new era for the project, as we attempt to bring our users the features that they want and need to be productive in the 2020s – while also maintaining what has made Thunderbird so great all these years.

Thank you to our wonderful community, please enjoy Thunderbird 78.

Download the newest release from our website.


Data@Mozilla: Mozilla Telemetry in 2020: From “Just Firefox” to a “Galaxy of Data”

Mozilla planet - Thu, 16/07/2020 - 22:23

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member William Lachance!

In the last year or so, there’s been a significant shift in the way we (Data Engineering) think about application-submitted data @ Mozilla, but although we have a new application-based SDK based on these principles (the Glean SDK), most of our data tools and documentation have not yet been updated to reflect this new state of affairs.

Much of this story is known inside Mozilla Data Engineering, but I thought it might be worth jotting it down in a blog post as a point of reference for people outside the immediate team. Knowing this may provide some context for some of our activities and efforts over the next year or two, at least until our tools, documentation, and tribal knowledge evolve.

In sum, the key differences are:

  • Instead of just one application we care about, there are many.
  • Instead of just caring about (mostly[1]) one type of ping (the Firefox main ping), an individual application may submit many different types of pings in the course of its use.
  • Instead of having both probes (histogram, scalar, or other data type) and bespoke parametric values in a JSON schema like the telemetry environment, there are now only metric types which are explicitly defined as part of each ping.

The new world is pretty exciting and freeing, but there is some new domain complexity that we need to figure out how to navigate. I’ll discuss that in my last section.

The Old World: Firefox is king

Up until roughly mid–2019, Firefox was the centre of Mozilla’s data world (with the occasional nod to Firefox for Android, which uses the same source repository). The Data Platform (often called “Telemetry”) was explicitly designed to cater to the needs of Firefox Developers (and to a lesser extent, product/program managers) and a set of bespoke tooling was built on top of our data pipeline architecture – this blog post from 2017 describes much of it.

In outline, the model is simple: on the client side, assuming a given user had not turned off Telemetry, during the course of a day’s operation Firefox would keep track of various measures, called “probes”. At the end of that duration, it would submit a JSON-encoded “main ping” to our servers with the probe information and a bunch of other mostly hand-specified junk, which would then find its way to a “data lake” (read: an Amazon S3 bucket). On top of this, we provided a python API (built on top of PySpark) which enabled people inside Mozilla to query all submitted pings across our usage population.

The only type of low-level object that was hard to keep track of was the list of probes: Firefox is a complex piece of software and there are many aspects of it we wanted to instrument to validate performance and quality of the product – especially on the more-experimental Nightly and Beta channels. To solve this problem, a probe dictionary was created to help developers find measures that corresponded to the product area that they were interested in.

On a higher-level, accessing this type of data using the python API quickly became slow and frustrating: the aggregation of years of Firefox ping data was hundreds of terabytes big, and even taking advantage of PySpark’s impressive capabilities, querying the data across any reasonably large timescale was slow and expensive. Here, the solution was to create derived datasets which enabled fast(er) access to pings and other derived measures, document them on docs.telemetry.mozilla.org, and then allow access to them through tools like sql.telemetry.mozilla.org or the Measurement Dashboard.

The New World: More of everything

Even in the old world, other products that submitted telemetry existed (e.g. Firefox for Android, Firefox for iOS, the venerable FirefoxOS) but I would not call them first-class citizens. Most of our documentation treated them as (at best) weird edge cases. At the time of this writing, you can see this distinction clearly on docs.telemetry.mozilla.org where there is one (fairly detailed) tutorial called “Choosing a Desktop Dataset” while essentially all other products are lumped into “Choosing a Mobile Dataset”.

While the new universe of mobile products are probably the most notable addition to our list of things we want to keep track of, they’re only one piece of the puzzle. Really we’re interested in measuring all the things (in accordance with our lean data practices, of course) including tools we use to build our products like mozphab and mozregression.

In expanding our scope, we’ve found that mobile (and other products) have different requirements that influence what data we would want to send and when. For example, sending one blob of JSON multiple times per day might make sense for performance metrics on a desktop product (which is usually on a fast, unmetered network) but is much less acceptable on mobile (where every byte counts). For this reason, it makes sense to have different ping types for the same product, not just one. For example, Fenix (the new Firefox for Android) sends a tiny baseline ping[2] on every run to (roughly) measure daily active users and a larger metrics ping sent on a (roughly) daily interval to measure (for example) a distribution of page load times.

Finally, we found that naively collecting certain types of data as raw histograms or inside the schema didn’t always work well. For example, encoding session lengths as plain integers would often produce weird results in the case of clock skew. For this reason, we decided to standardize on a set of well-defined metrics using Glean, which tries to minimize footguns. We explicitly no longer allow clients to submit arbitrary JSON or values as part of a telemetry ping: if you have a use case not covered by the existing metrics, make a case for it and add it to the list!

To illustrate this, let’s take a (subset) of what we might be looking at in terms of what the Fenix application sends:

[Diagram: the Fenix application at the top level, the pings it might submit (baseline, metrics, migration) below that, and the metric types included in each ping at the bottom (mermaid source)]

At the top level we segment based on the “application” (just Fenix in this example). Just below that, there are the pings that this application might submit (I listed three: the baseline and metrics pings described above, along with a “migration” ping, which tracks metrics when a user migrates from Fennec to Fenix). And below that there are different types of metrics included in the pings: I listed a few that came out of a quick scan of the Fenix BigQuery tables using my prototype schema dictionary.

This is actually only the surface-level: at the time of this writing, Fenix has no fewer than 12 different ping types and many different metrics inside each of them.[3] On a client level, the new Glean SDK provides easy-to-use primitives to help developers collect this type of information in a principled, privacy-preserving way: for example, data review is built into every metric type. But what about after it hits our ingestion endpoints?

Hand-crafting schemas, data ingestion pipelines, and individualized ETL scripts for such a large matrix of applications, ping types, and measurements would quickly become intractable. Instead, we (Mozilla Data Engineering) refactored our data pipeline to parse out the information from the Glean schemas and then create tables in our BigQuery datastore corresponding to what’s in them – this has proceeded as an extension to our (now somewhat misnamed) probe-scraper tool.

You can then query this data directly (see accessing glean data) or build up a derived dataset using our SQL-based ETL system, BigQuery-ETL. This part of the equation has been working fairly well, I’d say: we now have a diverse set of products producing Glean telemetry and submitting it to our servers, and the amount of manual effort required to add each application was minimal (aside from adding new capabilities to the platform as we went along).
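As a rough illustration of what querying this data directly can look like, here is a sketch using the google-cloud-bigquery client; the project and table names (mozdata.org_mozilla_fenix.baseline) and the column layout are assumptions for the purposes of the example, and the “accessing glean data” documentation mentioned above is the authoritative reference:

```python
from google.cloud import bigquery

# Assumed dataset/table names; see the Glean data access docs for the real ones.
QUERY = """
SELECT
  DATE(submission_timestamp) AS submission_date,
  COUNT(DISTINCT client_info.client_id) AS clients
FROM `mozdata.org_mozilla_fenix.baseline`
WHERE DATE(submission_timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
GROUP BY submission_date
ORDER BY submission_date
"""

client = bigquery.Client(project="mozdata")  # requires BigQuery credentials/access
for row in client.query(QUERY).result():
    print(row.submission_date, row.clients)
```

The nice part of this arrangement is that every Glean-instrumented application gets the same kind of table for each of its ping types, so the same query shape works across products.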

What hasn’t quite kept pace is our tooling to make navigating and using this new collection of data tractable.

What could bring this all together?

As mentioned before, this new world is quite powerful and gives Mozilla a bunch of new capabilities but it isn’t yet well documented and we lack the tools to easily connect the dots from “I have a product question” to “I know how to write an SQL query / Spark Job to answer it” or (better yet) “this product dashboard will answer it”.

Up until now, our de facto answer has been some combination of “Use the probe dictionary / telemetry.mozilla.org” and/or “refer to docs.telemetry.mozilla.org”. I submit that we’re at the point where these approaches break down: as mentioned above, there are many more types of data we now need to care about than just “probes” (or “metrics”, in Glean-parlance). When we just cared about the main ping, we could write dataset documentation for its recommended access point (main_summary) and the raw number of derived datasets was manageable. But in this new world, where we have N applications times M ping types, there are now so many canonical ping tables that documenting them all on docs.telemetry.mozilla.org no longer makes sense.

A few months ago, I thought that Google’s Data Catalog (billed as offering “a unified view of all your datasets”) might provide a solution, but on further examination it only solves part of the problem: it provides only a view on your BigQuery tables and it isn’t designed to provide detailed information on the domain objects we care about (products, pings, measures, and tools). You can map some of the properties from these objects onto the tables (e.g. adding a probe’s description field to the column representing it in the BigQuery table), but Data Catalog’s interface for surfacing and filtering through this information is rather slow and clumsy and requires detailed knowledge of how these higher level concepts relate to BigQuery primitives.

Instead, what I think we need is a new system which allows a data practitioner (Data Scientist, Firefox Engineer, Data Engineer, Product Manager, whoever) to quickly visualize the set of domain objects relevant to their product/feature of interest, then map them to specific BigQuery tables and other resources (e.g. visualizations using tools like GLAM) which allow people to quickly answer questions so we can make better products. Basically, I am thinking of some combination of:

  • The existing probe dictionary (derived from existing product metadata)
  • A new “application” dictionary (derived from some simple to-be-defined application metadata description)
  • A new “ping” dictionary (derived from existing product metadata)
  • A BigQuery schema dictionary (I wrote up a prototype of this a couple weeks ago) to map between these higher-level objects and what’s in our low-level data store
  • Documentation for derived datasets generated by BigQuery-ETL (ideally stored alongside the ETL code itself, so it’s easy to keep up to date)
  • A data tool dictionary describing how to easily access the above data in various ways (e.g. SQL query, dashboard plot, etc.)

This might sound ambitious, but it’s basically just a system for collecting and visualizing various types of documentation— something we have proven we know how to do. And I think a product like this could be incredibly empowering, not only for the internal audience at Mozilla but also the external audience who wants to support us but has valid concerns about what we’re collecting and why: since this system is based entirely on systems which are already open (inside GitHub or Mercurial repositories), there is no reason we can’t make it available to the public.

  1. Technically, there are various other types of pings submitted by Firefox, but the main ping is the one 99% of people care about. 
  2. This is actually a capability that the Glean SDK provides, so other products (e.g. Lockwise, Firefox for iOS) also benefit from this capability. 
  3. The scope of this data collection comes from the fact that Fenix is a very large and complex application, rather than a desire to collect everything just because we can; smaller efforts like mozregression collect a much more limited set of data.

Cameron Kaiser: TenFourFox FPR25b1 available

Mozilla planet - Thu, 16/07/2020 - 21:36
TenFourFox Feature Parity Release 25 beta 1 is now available (downloads, hashes, release notes). Raphaël traced the Twitch JavaScript crash we wallpapered over in FPR24 back to an issue with DOM workers not having sufficient memory allocated, so we widened that out. There still seems to be an endian issue Twitch is triggering, because it needs a huge amount of memory for its worker to finish and then can't spawn another thread because there's not enough memory to do so (but it reportedly works on Intel TenFourFox, so it's something specific about PowerPC). But hey! No crashes!

Raphaël gets a second gold star for noticing that the gcc runtime we include with every copy of TenFourFox (because we build with a later compiler) is not itself optimized for the underlying platform, because MacPorts simply builds it for ppc rather than one of the specific subtypes. So he built four sets of runtime libraries for each platform and I've integrated it into the build system so that each optimized build now uses a C/C++ runtime tuned for that specific processor family (the debug build is still built for generic ppc so it runs on anything). This is not as big an improvement as you might think because JavaScript performance is almost overwhelmingly dominated by the JIT, and as I mentioned, JavaScript is one of the few areas TenFourFox has tuned and tested to hell. But other things such as DOM, graphics, layout and such do show some benefit, and scripts that spend more time in the interpreter than the JIT (primarily short one-offs) do so as well. There are no changes in the gcc runtime otherwise and it's still the same code, just built with better flags.

This release also includes additional hosts for adblock and additional fonts for the ATSUI font blocklist, and will have the usual security updates as well. It will come out in parallel with Firefox 68.11 on or about July 28.


William Lachance: Mozilla Telemetry in 2020: From "Just Firefox" to a "Galaxy of Data"

Mozilla planet - do, 16/07/2020 - 20:42

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

This is a special guest post by non-Glean-team member William Lachance!

In the last year or so, there’s been a significant shift in the way we (Data Engineering) think about application-submitted data at Mozilla. Although we have a new application-based SDK built on these principles (the Glean SDK), most of our data tools and documentation have not yet been updated to reflect this new state of affairs.

Much of this story is known inside Mozilla Data Engineering, but I thought it might be worth jotting it down in a blog post as a point of reference for people outside the immediate team. Knowing this may provide some context for some of our activities and efforts over the next year or two, at least until our tools, documentation, and tribal knowledge evolve.

In sum, the key differences are:

  • Instead of just one application we care about, there are many.
  • Instead of just caring about (mostly1) one type of ping (the Firefox main ping), an individual application may submit many different types of pings in the course of its use.
  • Instead of having both probes (histogram, scalar, or other data type) and bespoke parametric values in a JSON schema like the telemetry environment, there are now only metric types which are explicitly defined as part of each ping.

The new world is pretty exciting and freeing, but there is some new domain complexity that we need to figure out how to navigate. I’ll discuss that in my last section.

The Old World: Firefox is king

Up until roughly mid–2019, Firefox was the centre of Mozilla’s data world (with the occasional nod to Firefox for Android, which uses the same source repository). The Data Platform (often called “Telemetry”) was explicitly designed to cater to the needs of Firefox Developers (and to a lesser extent, product/program managers) and a set of bespoke tooling was built on top of our data pipeline architecture - this blog post from 2017 describes much of it.

In outline, the model is simple: on the client side, assuming a given user had not turned off Telemetry, during the course of a day’s operation Firefox would keep track of various measures, called “probes”. At the end of that duration, it would submit a JSON-encoded “main ping” to our servers with the probe information and a bunch of other mostly hand-specified junk, which would then find its way to a “data lake” (read: an Amazon S3 bucket). On top of this, we provided a python API (built on top of PySpark) which enabled people inside Mozilla to query all submitted pings across our usage population.
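For readers who never used it, here is a rough sketch of what that kind of query looked like with plain PySpark; the S3 path and field names are made up for illustration, and the real internal API wrapped these details up for you.

# Rough sketch only: the S3 path and field names are illustrative, not the real
# layout of the telemetry data lake; the internal python API hid these details.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("main-ping-exploration").getOrCreate()

# Read one day's worth of JSON-encoded main pings from the (hypothetical) data lake.
pings = spark.read.json("s3://example-telemetry-lake/main-ping/submission_date=20190101/")

# Even a simple aggregation like this meant scanning an enormous amount of raw JSON.
pings.selectExpr("environment.system.os.name AS os").groupBy("os").count().show()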

The only type of low-level object that was hard to keep track of was the list of probes: Firefox is a complex piece of software and there are many aspects of it we wanted to instrument to validate performance and quality of the product - especially on the more-experimental Nightly and Beta channels. To solve this problem, a probe dictionary was created to help developers find measures that corresponded to the product area that they were interested in.

At a higher level, accessing this type of data using the python API quickly became slow and frustrating: the aggregation of years of Firefox ping data was hundreds of terabytes in size, and even taking advantage of PySpark’s impressive capabilities, querying the data across any reasonably large timescale was slow and expensive. Here, the solution was to create derived datasets which enabled fast(er) access to pings and other derived measures, document them on docs.telemetry.mozilla.org, and then allow access to them through tools like sql.telemetry.mozilla.org or the Measurement Dashboard.

The New World: More of everything

Even in the old world, other products that submitted telemetry existed (e.g. Firefox for Android, Firefox for iOS, the venerable FirefoxOS) but I would not call them first-class citizens. Most of our documentation treated them as (at best) weird edge cases. At the time of this writing, you can see this distinction clearly on docs.telemetry.mozilla.org where there is one (fairly detailed) tutorial called “Choosing a Desktop Dataset” while essentially all other products are lumped into “Choosing a Mobile Dataset”.

While the new universe of mobile products is probably the most notable addition to our list of things we want to keep track of, it is only one piece of the puzzle. Really we’re interested in measuring all the things (in accordance with our lean data practices, of course) including tools we use to build our products like mozphab and mozregression.

In expanding our scope, we’ve found that mobile (and other products) have different requirements that influence what data we would want to send and when. For example, sending one blob of JSON multiple times per day might make sense for performance metrics on a desktop product (which is usually on a fast, unmetered network) but is much less acceptable on mobile (where every byte counts). For this reason, it makes sense to have different ping types for the same product, not just one. For example, Fenix (the new Firefox for Android) sends a tiny baseline ping2 on every run to (roughly) measure daily active users and a larger metrics ping sent on a (roughly) daily interval to measure (for example) a distribution of page load times.

Finally, we found that naively collecting certain types of data as raw histograms or inside the schema didn’t always work well. For example, encoding session lengths as plain integers would often produce weird results in the case of clock skew. For this reason, we decided to standardize on a set of well-defined metrics using Glean, which tries to minimize footguns. We explicitly no longer allow clients to submit arbitrary JSON or values as part of a telemetry ping: if you have a use case not covered by the existing metrics, make a case for it and add it to the list!
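As a client-side illustration, here is a minimal sketch using the Glean SDK's Python bindings; the metrics.yaml file and metric names are hypothetical, and initialization is assumed to happen elsewhere in the application.

# Minimal sketch using the Glean SDK Python bindings (the glean-sdk package).
# The metrics.yaml file and the metric names below are hypothetical, and this
# assumes Glean.initialize() has already been called elsewhere in the app.
from glean import load_metrics

metrics = load_metrics("metrics.yaml")

# Each metric has a well-defined type (counter, timespan, event, ...) with its
# own API, rather than being an arbitrary JSON value in a hand-built payload.
metrics.example_category.pages_loaded.add(1)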

To illustrate this, let’s take a (subset) of what we might be looking at in terms of what the Fenix application sends:

[Diagram: application → ping types → metric types (mermaid source)]

At the top level we segment based on the “application” (just Fenix in this example). Just below that, there are the pings that this application might submit (I listed three: the baseline and metrics pings described above, along with a “migration” ping, which tracks metrics when a user migrates from Fennec to Fenix). And below that there are different types of metrics included in the pings: I listed a few that came out of a quick scan of the Fenix BigQuery tables using my prototype schema dictionary.

This is actually only scratching the surface: at the time of this writing, Fenix has no fewer than 12 different ping types and many different metrics inside each of them.3 On a client level, the new Glean SDK provides easy-to-use primitives to help developers collect this type of information in a principled, privacy-preserving way: for example, data review is built into every metric type. But what about after it hits our ingestion endpoints?

Hand-crafting schemas, data ingestion pipelines, and individualized ETL scripts for such a large matrix of applications, ping types, and measurements would quickly become intractable. Instead, we (Mozilla Data Engineering) refactored our data pipeline to parse out the information from the Glean schemas and then create tables in our BigQuery datastore corresponding to what’s in them - this has proceeded as an extension to our (now somewhat misnamed) probe-scraper tool.

You can then query this data directly (see accessing glean data) or build up a derived dataset using our SQL-based ETL system, BigQuery-ETL. This part of the equation has been working fairly well, I’d say: we now have a diverse set of products producing Glean telemetry and submitting it to our servers, and the amount of manual effort required to add each application was minimal (aside from adding new capabilities to the platform as we went along).
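For instance, a quick sketch of querying one of these auto-generated tables with the BigQuery Python client might look like the following; the project, dataset, and table names are assumptions based on the Fenix example above, so check the accessing glean data docs for the canonical ones.

# Sketch of querying a Glean ping table directly; the project/dataset/table
# names are assumptions, so consult the "accessing glean data" docs first.
from google.cloud import bigquery

client = bigquery.Client(project="my-analysis-project")  # hypothetical billing project

query = """
SELECT
  DATE(submission_timestamp) AS submission_date,
  COUNT(DISTINCT client_info.client_id) AS clients
FROM `moz-fx-data-shared-prod.org_mozilla_fenix.baseline`
WHERE DATE(submission_timestamp) = '2020-07-01'
GROUP BY 1
"""

for row in client.query(query).result():
    print(row.submission_date, row.clients)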

What hasn’t quite kept pace is our tooling to make navigating and using this new collection of data tractable.

What could bring this all together?

As mentioned before, this new world is quite powerful and gives Mozilla a bunch of new capabilities but it isn’t yet well documented and we lack the tools to easily connect the dots from “I have a product question” to “I know how to write an SQL query / Spark Job to answer it” or (better yet) “this product dashboard will answer it”.

Up until now, our de facto answer has been some combination of “Use the probe dictionary / telemetry.mozilla.org” and/or “refer to docs.telemetry.mozilla.org”. I submit that we’re at the point where these approaches break down: as mentioned above, there are many more types of data we now need to care about than just “probes” (or “metrics”, in Glean-parlance). When we just cared about the main ping, we could write dataset documentation for its recommended access point (main_summary) and the raw number of derived datasets was manageable. But in this new world, where we have N applications times M ping types, the number of canonical ping tables is now so large that documenting them all on docs.telemetry.mozilla.org no longer makes sense.

A few months ago, I thought that Google’s Data Catalog (billed as offering “a unified view of all your datasets”) might provide a solution, but on further examination it only solves part of the problem: it provides only a view on your BigQuery tables and it isn’t designed to provide detailed information on the domain objects we care about (products, pings, measures, and tools). You can map some of the properties from these objects onto the tables (e.g. adding a probe’s description field to the column representing it in the BigQuery table), but Data Catalog’s interface for surfacing and filtering through this information is rather slow and clumsy and requires detailed knowledge of how these higher level concepts relate to BigQuery primitives.

Instead, what I think we need is a new system which allows a data practitioner (Data Scientist, Firefox Engineer, Data Engineer, Product Manager, whoever) to quickly visualize the set of domain objects relevant to their product/feature of interest, then map them to specific BigQuery tables and other resources (e.g. visualizations using tools like GLAM) which allow people to quickly answer questions so we can make better products. Basically, I am thinking of some combination of:

  • The existing probe dictionary (derived from existing product metadata)
  • A new “application” dictionary (derived from some simple to-be-defined application metadata description)
  • A new “ping” dictionary (derived from existing product metadata)
  • A BigQuery schema dictionary (I wrote up a prototype of this a couple weeks ago) to map between these higher-level objects and what’s in our low-level data store
  • Documentation for derived datasets generated by BigQuery-ETL (ideally stored alongside the ETL code itself, so it’s easy to keep up to date)
  • A data tool dictionary describing how to easily access the above data in various ways (e.g. SQL query, dashboard plot, etc.)

This might sound ambitious, but it’s basically just a system for collecting and visualizing various types of documentation, something we have proven we know how to do. And I think a product like this could be incredibly empowering, not only for the internal audience at Mozilla but also the external audience who wants to support us but has valid concerns about what we’re collecting and why: since this system is based entirely on systems which are already open (inside GitHub or Mercurial repositories), there is no reason we can’t make it available to the public.
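To make the idea a little more concrete, here is one way those domain objects might hang together; this is purely an illustrative sketch, not a design anyone has committed to.

# Purely illustrative sketch of the proposed dictionary of domain objects;
# none of these class or field names come from an actual Mozilla system.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str          # e.g. "events.total_uri_count"
    metric_type: str   # e.g. "counter", "timespan", "event"
    description: str

@dataclass
class Ping:
    name: str                      # e.g. "baseline", "metrics"
    bigquery_table: str            # e.g. "org_mozilla_fenix.baseline"
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class Application:
    name: str                      # e.g. "Fenix"
    repository_url: str
    pings: List[Ping] = field(default_factory=list)

def find_tables_for_metric(apps: List[Application], metric_name: str) -> List[str]:
    """Answer "which BigQuery tables contain this metric?" across all products."""
    return [
        ping.bigquery_table
        for app in apps
        for ping in app.pings
        if any(m.name == metric_name for m in ping.metrics)
    ]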

  1. Technically, there are various other types of pings submitted by Firefox, but the main ping is the one 99% of people care about. 

  2. This is actually a capability that the Glean SDK provides, so other products (e.g. Lockwise, Firefox for iOS) also benefit from this capability. 

  3. The scope of this data collection comes from the fact that Fenix is a very large and complex application, rather than from a desire to collect everything just because we can; smaller efforts like mozregression collect a much more limited set of data. 

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Introducing Mozilla VPN

Mozilla planet - do, 16/07/2020 - 08:10

Hi everyone,

You might remember that we first introduced the Firefox Private Network (FPN) back in December 2019. At that time, we had two types of offerings available only in the U.S.: FPN Browser Level Protection (by using an extension) and FPN Device Protection (which is available for Windows 10, iOS, and Android).

Today marks another milestone for FPN, since we’re changing the name from FPN full-device VPN to simply the Mozilla VPN. For now, this change will only include the Windows 10 version as well as the Android version. The iOS version is currently still called FPN on the Apple App Store, although our team is working hard to change it to Mozilla VPN as well. Meanwhile, FPN Browser Level Protection will remain the same until we make further decisions.

On top of that, we will start offering Mozilla VPN in more countries outside of the US. The new countries will be Canada, the UK, New Zealand, Singapore, and Malaysia.

What does this mean for the community?

We’ve changed the product name in Kitsune (although the URL is still the same). Since most of the new countries are English-speaking, we will not require the support articles to be translated for this release.

And as usual, support requests will be handled through Zendesk and the forum will continue to be managed by our designated staff members, Brady and Eve. However, we also welcome everyone who wants to help.

We are enthusiastic about this new opportunity and hope that you’ll support us along the way. If you have any questions or concerns, please let me/Giulia know.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.45.0

Mozilla planet - do, 16/07/2020 - 02:00

The Rust team is happy to announce a new version of Rust, 1.45.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.45.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.45.0 on GitHub.

What's in 1.45.0 stable

There are two big changes to be aware of in Rust 1.45.0: a fix for some long-standing unsoundness when casting between integers and floats, and the stabilization of the final feature needed for one of the more popular web frameworks to work on stable Rust.

Fixing unsoundness in casts

Issue 10184 was originally opened back in October of 2013, a year and a half before Rust 1.0. As you may know, rustc uses LLVM as a compiler backend. When you write code like this:

pub fn cast(x: f32) -> u8 { x as u8 }

The Rust compiler in Rust 1.44.0 and before would produce LLVM-IR that looks like this:

define i8 @_ZN10playground4cast17h1bdf307357423fcfE(float %x) unnamed_addr #0 {
start:
  %0 = fptoui float %x to i8
  ret i8 %0
}

That fptoui implements the cast; it is short for "floating point to unsigned integer."

But there's a problem here. From the docs:

The ‘fptoui’ instruction converts its floating-point operand into the nearest (rounding towards zero) unsigned integer value. If the value cannot fit in ty2, the result is a poison value.

Now, unless you happen to dig into the depths of compilers regularly, you may not understand what that means. It's full of jargon, but there's a simpler explanation: if you cast a floating point number that's large to an integer that's small, you get undefined behavior.

That means that this, for example, was not well-defined:

fn cast(x: f32) -> u8 {
    x as u8
}

fn main() {
    let f = 300.0;
    let x = cast(f);
    println!("x: {}", x);
}

On Rust 1.44.0, this happens to print "x: 0" on my machine. But it could print anything, or do anything: this is undefined behavior. But the unsafe keyword is not used within this block of code. This is what we call a "soundness" bug, that is, it is a bug where the compiler does the wrong thing. We tag these bugs as I-unsound on our issue tracker, and take them very seriously.

This bug took a long time to resolve, though. The reason is that it was very unclear what the correct path forward was.

In the end, the decision was made to do this:

  • as would perform a "saturating cast".
  • A new unsafe cast would be added if you wanted to skip the checks.

This is very similar to array access, for example:

  • array[i] will check to make sure that array has at least i + 1 elements.
  • You can use unsafe { array.get_unchecked(i) } to skip the check.

So, what's a saturating cast? Let's look at a slightly modified example:

fn cast(x: f32) -> u8 {
    x as u8
}

fn main() {
    let too_big = 300.0;
    let too_small = -100.0;
    let nan = f32::NAN;

    println!("too_big_casted = {}", cast(too_big));
    println!("too_small_casted = {}", cast(too_small));
    println!("not_a_number_casted = {}", cast(nan));
}

This will print:

too_big_casted = 255
too_small_casted = 0
not_a_number_casted = 0

That is, numbers that are too big turn into the largest possible value. Numbers that are too small produce the smallest possible value (which is zero). NaN produces zero.

The new API to cast in an unsafe manner is:

let x: f32 = 1.0;
let y: u8 = unsafe { x.to_int_unchecked() };

But as always, you should only use this method as a last resort. Just like with array access, the compiler can often optimize the checks away, making the safe and unsafe versions equivalent when the compiler can prove it.

Stabilizing function-like procedural macros in expressions, patterns, and statements

In Rust 1.30.0, we stabilized "function-like procedural macros in item position." For example, the gnome-class crate:

Gnome-class is a procedural macro for Rust. Within the macro, we define a mini-language which looks as Rust-y as possible, and that has extensions to let you define GObject subclasses, their properties, signals, interface implementations, and the rest of GObject's features. The goal is to require no unsafe code on your part.

This looks like this:

gobject_gen! {
    class MyClass: GObject {
        foo: Cell<i32>,
        bar: RefCell<String>,
    }

    impl MyClass {
        virtual fn my_virtual_method(&self, x: i32) {
            ... do something with x ...
        }
    }
}

The "in item position" bit is some jargon, but basically what this means is that you could only invoke gobject_gen! in certain places in your code.

Rust 1.45.0 adds the ability to invoke procedural macros in three new places:

// imagine we have a procedural macro named "mac"
mac!(); // item position, this was what was stable before

// but these three are new:
fn main() {
    let expr = mac!(); // expression position

    match expr {
        mac!() => {} // pattern position
    }

    mac!(); // statement position
}

Being able to use macros in more places is interesting, but there's another reason why many Rustaceans have been waiting for this feature for a long time: Rocket. Initially released in December of 2016, Rocket is a popular web framework for Rust often described as one of the best things the Rust ecosystem has to offer. Here's the "hello world" example from its upcoming release:

#[macro_use] extern crate rocket;

#[get("/<name>/<age>")]
fn hello(name: String, age: u8) -> String {
    format!("Hello, {} year old named {}!", age, name)
}

#[launch]
fn rocket() -> rocket::Rocket {
    rocket::ignite().mount("/hello", routes![hello])
}

Until today, Rocket depended on nightly-only features to deliver on its promise of flexibility and ergonomics. In fact, as can be seen on the project's homepage, the same example above in the current version of Rocket requires the proc_macro_hygiene feature to compile. However, as you may guess from the feature's name, today it ships in stable! This issue tracked the history of nightly-only features in Rocket. Now, they're all checked off!

This next version of Rocket is still in the works, but when released, many folks will be very happy :)

Library changes

In Rust 1.45.0, the following APIs were stabilized:

Additionally, you can use char with ranges, to iterate over codepoints:

for ch in 'a'..='z' {
    print!("{}", ch);
}
println!();
// Prints "abcdefghijklmnopqrstuvwxyz"

For a full list of changes, see the full release notes.

Other changes

There are other changes in the Rust 1.45.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.45.0

Many people came together to create Rust 1.45.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

Mozilla VR Blog: Recording inside of Hubs

Mozilla planet - do, 16/07/2020 - 01:36
Recording inside of Hubs

(This Post is for recording inside of Hubs via a laptop or desktop computer. Recording Hubs experiences from inside Virtual Reality is for another post.)

Minimum Requirements

Hardware

  • A computer
  • Internet connection
  • Storage space to save recordings

Software

  • Screen capture software
  • Browser that supports Hubs

Hubs by Mozilla: the future of remote collaboration.

Accessible from a web browser and on a range of devices, Hubs allows users to meet in a virtual space and share ideas, images and files. The global pandemic is keeping us distant socially but Hubs is helping us to bridge that gap!

We’re often asked how to record your time inside of Hubs, either for a personal record or to share with others. Here I will share what has worked for me to capture usable footage from inside of a Hubs environment.

Firstly, like traditional videography, you’re going to need the appropriate hardware and software.

Hardware:

I have a need to capture footage at the highest manageable resolution and frames per second, so I use a high-powered desktop PC with a good graphics card and 32GB of RAM. When I’m wearing my editor’s hat, I like to have the freedom to make precise cuts and the ability to zoom in on a particular part of the frame and still maintain image quality. However, my needs for quality are probably higher than those of most people looking to capture film inside of Hubs, and an average laptop is generally going to be perfectly fine to get decent shot quality.

A strong internet connection is going to be essential to ensure the avatar animations and videos in-world function smoothly on your screen. Also, a good amount of bandwidth is required if you plan on live streaming video content out of your Hubs space to platforms such as Zoom or Twitch.

This may be especially relevant during the pandemic lockdown, with increased usage/burden on home internet.

Next up is storage to record your video. I use a 5TB external hard drive as my main storage device to ensure I never run out of space. A ten-minute video at 1280x720 and 30fps is roughly 1GB of data, so it can add up pretty quickly!
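As a rough back-of-the-envelope check (the bitrate here is an assumption; around 13 Mbps is in the ballpark for 1280x720 at 30fps screen capture, but your encoder settings will vary):

# Back-of-the-envelope storage estimate; the bitrate is an assumption and will
# vary with your encoder, capture settings and how busy the scene is.
bitrate_mbps = 13        # assumed average video bitrate
minutes = 10
size_gb = bitrate_mbps * 60 * minutes / 8 / 1000   # megabits -> gigabytes
print(f"~{size_gb:.1f} GB for a {minutes}-minute recording")   # ~1.0 GB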

One last piece of hardware I use but is not essential is a good gaming mouse. This offers better tracking and response time, allowing for more accurate cursor control and ultimately smoother camera movement inside of Hubs.

Another benefit I gain is customizability. Adjusting tracking sensitivity and adding macro commands to the additional buttons has greatly improved my experience recording inside of Hubs.

Now that we have the hardware, let’s talk about software.

Software:

OBS (Open Broadcaster Software) is open source and also free! This application allows video recording and live streaming and is a popular choice for capturing and sharing your streams. This is a great piece of software and allows you full control over both your incoming and outgoing video streams.

My need for the highest available capture quality has led me to use Nvidia’s GeForce Experience software. This is an application that complements my GeForce GTX graphics card and gives me the ability to optimize my settings.

So now that we’re up to speed with the hardware and software, it’s time to set up for recording.

As I mentioned earlier, I set up my software to get the best results possible from my hardware. The settings you choose will be dependent on your hardware and may take some experimentation to perfect. I tend to run my settings at 1920x1080 and 60fps. It’s good practice to run with commonly used resolution scales and frames per second to make editing, exporting and sharing as painless as possible. 1280x720 @ 30fps is a common and respectable setting.

These frame sizes have a 16:9 aspect ratio which is a widely used scale.

Audio is pretty straightforward: 44.1kHz is a good enough sample rate to get a usable recording. The main things to note are the spatial audio properties from avatars speaking and objects with audio attached inside of Hubs. Finding a position that allows for clean and balanced sound is important. It can also be handy to turn off sound effects from the preferences menu. That way if it’s a chat-heavy environment, the bubble sounds don’t interrupt the speaker in the recording. Another option to isolate the speaker’s audio is to have the camera avatar mute everyone else manually before recording.

Before I hit record there are a few other things I like to set up. One is maximizing my window in the browser settings (not surprisingly, I use Firefox...) and another is choosing which user interface graphics are showing. Personally, I prefer to disable all my U.I. so all that is showing is the scene inside of Hubs. I do this by using the tilde (~) hotkey or hitting camera mode in the options menu and then selecting hide all in the bottom right corner of the screen. The second option here is only available to people who have been promoted to room moderator so be sure to check that before you begin!

Additionally, there is an option under the misc tab to turn avatars’ name tags on or off, which can be helpful depending on your needs. It's a good rule of thumb to get the permission of (or at least notify) those who will be in the space that you will be recording, so they can adjust their own settings or name tag accordingly; but if it's not practicable to get individuals' permission, you may want to consider turning name tags off, just in case.

Once you get to this point it pretty closely resembles the role of a traditional camera operator. You’ll need to consider close-ups, wide shots, scenes and avatars, while maintaining a balanced audio feed.

Depending on the scene creator’s settings, you may have the option to fly in Hubs. This can open up some options for more creative or cinematic camera work. Another possibility is to have multiple computers recording different angles, enabling the editor to switch between perspectives.

And that's a basic introduction on how to record inside of Hubs from a computer! Stay tuned for how to set up recording your Hubs experience from inside of virtual reality.

Stay safe, stay healthy and keep on rockin’ the free web!

Categorieën: Mozilla-nl planet

The Firefox Frontier: No-judgment digital definitions: VPNs explained

Mozilla planet - wo, 15/07/2020 - 23:09

Many of us spend multiple hours a day using the internet to do everyday things like watching videos, shopping, gaming and paying bills, all the way to managing complex work … Read more

The post No-judgment digital definitions: VPNs explained appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mozilla Performance Blog: What’s new in Perfherder?

Mozilla planet - wo, 15/07/2020 - 18:38

Perfherder is one of the primary tools used by our performance sheriffs to triage and investigate regression (and improvement) alerts. It’s also a key part of the workflow any Firefox engineer may experience when working on performance, either responding to a regression, or proactively measuring the impact of their changes. This post will cover the various improvements that have been made to Perfherder so far in 2020.

Server-side search for alerts

Prior to this change, searching within the alerts view was entirely client-side. This had some strange effects on the pagination, which meant sheriffs would waste time clicking through many blank pages of alerts when, for example, filtering by alerts assigned to them.

Retrigger jobs from compare view

Last year we implemented a popular feature request: the ability to retrigger jobs from the compare view. Earlier this year we enhanced this further by allowing the user to specify how many retriggers should be requested for the base and new revisions.

The retrigger button is shown in the compare view when you hover near the total runs while you’re logged in

The retrigger will by default request 5 jobs against both the base and new revisions

Distinguishing results by application

As we move towards clearer test names, many of the latest performance tests do not include the target application name in the test names. This means they were difficult to distinguish in Perfherder. There’s still some work to do here, but you can now see the application displayed in the graphs legend alongside the repository and platform.

Legend showing results for the same test against both Firefox and Google Chrome

 

Adding tags to alert summaries

For a while now the performance sheriffs have been using the notes feature of alert summaries as a way to add tags. Examples include #regression-backedout and #regression-fixed, which can add context for improvements, and also provide valuable insights into our alert data. To improve our consistency applying these, we’ve moved them from the notes to a dedicated tagging feature. Sheriffs can now apply predefined tags, which are then displayed prominently in the alert summary.

Tags are selected from a predefined list

UI/UX improvements

There were also several smaller improvements to the UI/UX that shouldn’t go unmentioned:

  • If you provide a name for your comparison view, this is now reflected in the page title so you can easily find it again in your open tabs and browser history.
  • Regression bug templates have been simplified, and updated to take advantage of Markdown formatting.
  • Graphs now show the measurement units along the y-axis when provided by the test results.
  • Most commonly used repositories are grouped for convenience when populating graphs.
  • Added “all statuses” and “all frameworks” to filters in alerts view.
  • Provided first/last page shortcuts in alerts view pagination.
  • Removed mozilla-inbound from graphs view links from alerts.
  • Removed AWFY framework due to these tests being migrated to Raptor.
Bug fixes

The following bug fixes are also worth highlighting:

  • When selecting tests to populate graphs, the list of tests is now refreshed whenever any of the filters are updated.
  • We no longer show links in the graph datapoint tool-tip when the associated job has expired.
Acknowledgements

These updates would not have been possible without Ionuț Goldan, Alexandru Irimovici, Alexandru Ionescu, Andra Esanu, and Florin Strugariu. Thanks also to the Treeherder team for reviewing patches and supporting these contributions to the project. Finally, thank you to all of the Firefox engineers for all of your bug reports and feedback on Perfherder and the performance workflow. Keep it coming, and we look forward to sharing more updates with you all soon.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Mozilla Puts Its Trusted Stamp on VPN

Mozilla planet - wo, 15/07/2020 - 16:16

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows, Android and iOS devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.

See for yourself how the Mozilla VPN works:

 



The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is based on modern and lean technology: the WireGuard protocol’s 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

In a market crowded by companies making promises about privacy and security, it can be hard to know who to trust. Mozilla has a reputation for building products that help you keep your information safe. We follow our easy to read, no-nonsense Data Privacy Principles which allow us to focus only on the information we need to provide a service. We don’t keep user data logs.

We don’t partner with third-party analytics platforms who want to build a profile of what you do online. And since the makers of this VPN are backed by a mission-driven company you can trust that the dollars you spend for this product will not only ensure you have a top-notch VPN, but also are making the internet better for everyone.

Simple and easy-to-use switch

Last year, we beta tested our VPN service which provided encryption and device-level protection of your connection and information on the Web. Many users shared their thoughts on why they needed this service.

Some of the top reasons users cited for using a VPN:

  • Security for all your devices – Users are flocking to VPNs for added protection online. With Mozilla VPN you can be sure your activity is encrypted across all applications and websites, whatever device you are on.
  • Added protection for your private information – Over 50 percent of VPN users in the US and UK said that seeking protection when using public wi-fi was a top reason for choosing a VPN service.
  • Browse more anonymously – Users care immensely about being anonymous when they choose to. A VPN is a key component as it encrypts all your traffic and protects your IP address and location.
  • Communicate more securely – Using a VPN can give an added layer of protection, ensuring every conversation you have is encrypted over the network.

In a world where unpredictability has become the “new normal,” we know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business.

Check out the Mozilla VPN and download it from our website,  Google Play store or Apple App store.

*Updated July 27, 2020 to reflect the availability of Mozilla VPN on iOS devices

The post Mozilla Puts Its Trusted Stamp on VPN appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla Puts Its Trusted Stamp on VPN

Mozilla Blog - wo, 15/07/2020 - 16:16

Starting today, there’s a VPN on the market from a company you trust. The Mozilla VPN (Virtual Private Network) is now available on Windows and Android devices. This fast and easy-to-use VPN service is brought to you by Mozilla, the makers of Firefox, and a trusted name in online consumer security and privacy services.

See for yourself how the Mozilla VPN works:

 



The first thing you may notice when you install the Mozilla VPN is how fast your browsing experience is. That’s because the Mozilla VPN is based on modern and lean technology: the WireGuard protocol’s 4,000 lines of code are a fraction of the size of the legacy protocols used by other VPN service providers.

You will also see a simple, easy-to-use interface, whether you are new to VPNs or just want to set it up and get onto the web.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month and will initially be available in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand, with plans to expand to other countries this Fall.

In a market crowded by companies making promises about privacy and security, it can be hard to know who to trust. Mozilla has a reputation for building products that help you keep your information safe. We follow our easy to read, no-nonsense Data Privacy Principles which allow us to focus only on the information we need to provide a service. We don’t keep user data logs.

We don’t partner with third-party analytics platforms who want to build a profile of what you do online. And since the makers of this VPN are backed by a mission-driven company you can trust that the dollars you spend for this product will not only ensure you have a top-notch VPN, but also are making the internet better for everyone.

Simple and easy-to-use switch

Last year, we beta tested our VPN service which provided encryption and device-level protection of your connection and information on the Web. Many users shared their thoughts on why they needed this service.

Some of the top reasons users cited for using a VPN:

  • Security for all your devices – Users are flocking to VPNs for added protection online. With Mozilla VPN you can be sure your activity is encrypted across all applications and websites, whatever device you are on.
  • Added protection for your private information – Over 50 percent of VPN users in the US and UK said that seeking protection when using public wi-fi was a top reason for choosing a VPN service.
  • Browse more anonymously – Users care immensely about being anonymous when they choose to. A VPN is a key component as it encrypts all your traffic and protects your IP address and location.
  • Communicate more securely – Using a VPN can give an added layer of protection, ensuring every conversation you have is encrypted over the network.

In a world where unpredictability has become the “new normal,” we know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business.

Check out the Mozilla VPN and download it from our website or Google Play store.

The post Mozilla Puts Its Trusted Stamp on VPN appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Armen Zambrano: New backfill action

Mozilla planet - wo, 15/07/2020 - 16:10

If you use Treeherder on repositories that are not Try, you might have used the backfill action. The backfill action takes a selected task and schedules it nine pushes back from the selected one. You can see the original work here and the follow-up here.

The mda task on push 48a10a9249b0 has been backfilled

In the screenshot above you can see that the task mda turned orange (implying that it failed), and that a Mozilla code sheriff has both retriggered the task four more times (you can see four more running tasks on the same push) and backfilled the task on previous pushes. This is to determine whether the regression was introduced on a previous push or whether the failure is due to an intermittent test failure.

Once you select a task and click on the hamburger menu you can get to the backfill action

The difference with the old backfill action is threefold:

  1. The backfilled tasks include -bk in their symbol and group, which also include the revision of the originating task that was backfilled
  2. The backfilled tasks schedule the same set of manifests as the starting task
  3. The backfill action schedules a support action called backfill-task
Modified symbol for backfilled tasks

The modified symbol and group name for backfilled tasks serve to:

  1. Show that it is a backfilled task (rather than one scheduled by normal means) and that it can have a modified set of manifests (see next section)
  2. Show which task it was backfilled from (by including the revision)
  3. Group backfilled tasks together to make it clear that they were not scheduled by normal means

I’ve also landed a change on Treeherder to handle this new naming and to allow filtering for normal tasks plus backfilled tasks.

From this link you can filter out mda tasks plus mda backfilled tasks

Manifest-level backfilling

Point number two from the above list is what changes the most. Soon we will be landing a change on autoland that will schedule some test tasks with a dynamic set of manifests. This means that a task scheduled on push A will have a set of manifests (e.g. set X) and the same task on push B can have a different set of manifests (e.g. set Y).

The new backfill takes this into account by looking at the env variable MOZHARNESS_TEST_PATHS, which contains the list of manifests, and re-using that value on backfilled tasks. This ensures that we’re scheduling the same set of manifests in every backfilled task.
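To illustrate (the exact encoding of the variable is an assumption on my part, so treat this as a sketch rather than the precise format the harness uses), reading the value back out might look something like this:

# Sketch of reading the manifest list back out of the environment; the exact
# encoding (a JSON mapping of suite -> manifest paths) is an assumption here.
import json
import os

test_paths = json.loads(os.environ.get("MOZHARNESS_TEST_PATHS", "{}"))

for suite, manifests in test_paths.items():
    print(f"{suite}: {len(manifests)} manifest(s) to re-run")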

Support actions

You can skip reading this section as this is more of an architectural change. This fixes the issue that backfilled tasks could not be re-run.

Backfilled tasks are now scheduled by a support action called backfill-task. If on Treeherder we filter by backfill tasks you can see both the initial backfill action and the backfill-task support action:

Two backfill actions were triggered on push 48a10a9249b0

The backfill action has scheduled nine backfill-task actions, and those are in charge of scheduling the mda task on their respective pushes.

Thanks for reading. Please file a bug and CC me if you notice anything going wrong with it.

Categorieën: Mozilla-nl planet
