Mozilla Nederland: The Dutch Mozilla community

Planet Mozilla - https://planet.mozilla.org/
Updated: 3 weeks 4 days ago

Hacks.Mozilla.Org: Add-Ons Outage Post-Mortem Result

Fri, 12/07/2019 - 18:08

Editor’s Note: July 12, 1:52pm pt – Updated Balrog update frequency and added some more background.

As I mentioned in my previous post, we’ve been conducting a post-mortem on the add-ons outage. Sorry this took so long to get out; we’d hoped to have this out within a week, but obviously that didn’t happen. There was just a lot more digging to do than we expected. In any case, we’re now ready to share the results. This post provides a high level overview of our findings, with more detail available in Sheila Mooney’s incident report and Matt Miller & Peter Saint-Andre’s technical report.

Root Cause Analysis

The first question that everyone asks is “how did you let this happen?” At a high level, the story seems simple: we let the certificate expire. This seems like a simple failure of planning, but upon further investigation it turns out to be more complicated: the team responsible for the system which generated the signatures knew that the certificate was expiring but thought (incorrectly) that Firefox ignored the expiration dates. Part of the reason for this misunderstanding was that in a previous incident we had disabled end-entity certificate checking, and this led to confusion about the status of intermediate certificate checking. Moreover, the Firefox QA plan didn’t incorporate testing for certificate expiration (or generalized testing of how the browser will behave at future dates) and therefore the problem wasn’t detected. This seems to have been a fundamental oversight in our test plan.
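
To make the missing check concrete, here is a minimal sketch (in JavaScript, purely illustrative and not Mozilla's actual QA code) of the kind of "future date" test the post-mortem says was absent; the chain dates are hypothetical:

// Given the notAfter dates of a certificate chain, check whether the chain
// would still be valid at some pretend future date, e.g. the planned date of
// the next scheduled release.
function chainValidAt(notAfterDates, pretendNow) {
  return notAfterDates.every(notAfter => new Date(notAfter) > pretendNow);
}

// Hypothetical chain: a long-lived leaf plus an intermediate that expires soon.
const chain = ["2020-01-01T00:00:00Z", "2019-05-04T00:00:00Z"];
console.log(chainValidAt(chain, new Date("2019-04-01T00:00:00Z"))); // true  (looks fine today)
console.log(chainValidAt(chain, new Date("2019-06-01T00:00:00Z"))); // false (caught ahead of time)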

The lesson here is that: (1) we need better communication and documentation of these parts of the system and (2) this information needs to get fed back into our engineering and QA work to make sure we’re not missing things. The technical report provides more details.

Code Delivery

As I mentioned previously, once we had a fix, we decided to deliver it via the Studies system (this is one part of a system we internally call “Normandy”). The Studies system isn’t an obvious choice for this kind of deployment because it was intended for deploying experiments, not code fixes. Moreover, because Studies permission is coupled to Telemetry, this meant that some users needed to enable Telemetry in order to get the fix, leading to Mozilla temporarily over-collecting data that we didn’t actually want, which we then had to clean up.

This leads to the natural question: “isn’t there some other way you could have deployed the fix?” to which the answer is “sort of.” Our other main mechanisms for deploying new code to users are dot releases and a system called “Balrog”. Unfortunately, both of these are slower than Normandy: Balrog checks for updates every 12 hours (though there turns out to have been some confusion about whether this number was 12 or 24), whereas Normandy checks every 6. Because we had a lot of users who were affected, getting them fixed was a very high priority, which made Studies the best technical choice.
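
As a back-of-the-envelope illustration (mine, not from the original post) of why the polling interval mattered, assuming clients poll at uniformly random offsets:

// With a fixed polling interval, an affected user waits on average half an
// interval, and at worst a full interval, before picking up a fix.
function expectedWaitHours(pollIntervalHours) {
  return { average: pollIntervalHours / 2, worstCase: pollIntervalHours };
}

console.log(expectedWaitHours(6));  // Normandy/Studies: { average: 3, worstCase: 6 }
console.log(expectedWaitHours(12)); // Balrog:           { average: 6, worstCase: 12 }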

The lesson here is that we need a mechanism that allows fast updates that isn’t coupled to Telemetry and Studies. The property we want is the ability to quickly deploy updates to any user who has automatic updates enabled. This is something our engineers are already working on.

Incomplete Fixes

Over the weeks following the incident, we released a large number of fixes, including eight versions of the system add-on and six dot releases. In some cases this was necessary because older deployment targets needed a separate fix. In other cases it was a result of defects in an earlier fix, which we then had to patch up in subsequent work. Of course, defects in software cannot be completely eliminated, but the technical report found that at least in some cases a high level of urgency combined with a lack of available QA resources (or at least coordination issues around QA) led to testing that was less thorough than we would have liked.

The lesson here is that during incidents of this kind we need to make sure that we not only recruit management, engineering, and operations personnel (which we did) but also ensure that we have QA available to test the inevitable fixes.

Where to Learn More

If you want to learn more about our findings, I would invite you to read the more detailed reports we produced. And as always, if those don’t answer your questions, feel free to email me at ekr-blog@mozilla.com.

The post Add-Ons Outage Post-Mortem Result appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

QMO: Firefox Nightly 70 Testday, July 19th

Thu, 11/07/2019 - 13:29

Hello Mozillians,

We are happy to let you know that Friday, July 19th, we are organizing Firefox Nightly 70 Testday. We’ll be focusing our testing on: Fission. 

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on the #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Categories: Mozilla-nl planet

Mike Hommey: Reproducing the Linux builds of Firefox 68

Thu, 11/07/2019 - 04:31

Starting with Firefox 68, the Linux builds shipped by Mozilla should be reproducible (this is not yet validated automatically for every build, but 68.0 is reproducible). These builds are optimized with Profile Guided Optimization, and the profile data was not kept and published until recently, which is why they weren’t reproducible until now.

The following instructions require running Docker on a Linux host (this may or may not work on a non-Linux host; I don’t know what e.g. Docker for Mac does, or whether the docker support in the mach command works with it). I’ll try to make them generic enough that they may apply to any subsequent release of Firefox.

  • Clone either the mozilla-unified or mozilla-release repository. You can use Mercurial or Git (with git-cinnabar), it doesn’t matter.
  • Checkout the FIREFOX_68_0_RELEASE tag and find out what its Mercurial changeset id is (it is 353628fec415324ca6aa333ab6c47d447ecc128e).
  • Open the Taskcluster index tool in a browser tab.
  • In the input field type or copy/paste gecko.v2.mozilla-release.shippable.revision.353628fec415324ca6aa333ab6c47d447ecc128e.firefox.linux64-opt and press the Enter key. (replace 353628fec415324ca6aa333ab6c47d447ecc128e with the right revision if you’re trying for another release)
  • This will fill the “Indexed Task” pane, where you will find a TaskId. Follow the link there, it will bring you to the corresponding Task Run Logs
  • Switch to the Task Details
  • Scroll down to the “Dependencies” list, and check the task name that begins with “build-docker-image”. For the Firefox 68 build task, it is build-docker-image-debian7-amd64-build.
  • Take that name, remove the “build-docker-image-” prefix, and run the following command, from inside the repository, to download the corresponding docker image:
    $ ./mach taskcluster-load-image debian7-amd64-build

    Obviously, replace debian7-amd64-build with whatever you found in the task dependencies. The image can also be built from the source tree, but this is out of scope for this post.

  • The command output will give you a docker run -ti ... command to try. Run it. It will open a shell in the docker image.
  • From the docker shell, run the following commands:
    $ echo no-api-key > /builds/mozilla-desktop-geoloc-api.key
    $ echo no-api-key > /builds/sb-gapi.data
    $ echo no-api-key > /builds/gls-gapi.data

    Or replace no-api-key with the actual keys if you have them.

  • Back to the Task Details, check the env part of the “Payload”. You’ll need to export all these variables with the corresponding values, e.g.:
    $ export EXTRA_MOZHARNESS_CONFIG='{"update_channel": "release", "mozconfig_variant": "release"}'
    $ export GECKO_BASE_REPOSITORY='https://hg.mozilla.org/mozilla-unified'
    $ export GECKO_HEAD_REPOSITORY='https://hg.mozilla.org/releases/mozilla-release'
    ...
  • Set the missing TASKCLUSTER_ROOT_URL environment variable:
    $ export TASKCLUSTER_ROOT_URL='https://taskcluster.net'
  • Change the value of MOZHARNESS_ACTIONS to:
    $ export MOZHARNESS_ACTIONS='build'

    The original value contains get-secrets, which will try to download from http://taskcluster/, which will fail with a DNS error, and check-test, which runs make check, which is not necessary to get a working Firefox.

  • Take the command part of the “Payload”, and run that in the docker shell:
    $ /builds/worker/bin/run-task --gecko-checkout /builds/worker/workspace/build/src -- /builds/worker/workspace/build/src/taskcluster/scripts/builder/build-linux.sh
  • Once the build is finished, in another terminal, check what the container id of your running docker container is, and extract the build artifact from there:
    $ docker ps
    CONTAINER ID   IMAGE                                                                                  COMMAND   CREATED       STATUS       PORTS   NAMES
    d234383ba9c7   debian7-amd64-build:be96d1b734e1a152a861ce786861fca6e70bcb996bf67347f5af4f146db157ec   "bash"    2 hours ago   Up 2 hours           nifty_hermann
    $ docker cp d234383ba9c7:/builds/worker/artifacts/target.tar.bz2 .

    (replace d234383ba9c7 with your container id)

  • Now you can exit the docker shell. That will remove the container.

After all the above, you can finally compare your target.tar.bz2 to the Linux64 Firefox 68 release. You will find a few inevitable differences:

  • The .chk files will be different, because they are self-signatures for FIPS mode that are generated with one-time throw-away keys.
  • The Firefox 68 release contains .sig files that your build won’t contain. They are signature files, which aren’t reproducible outside Mozilla automation for obvious reasons.
  • Consequently, the precomplete file contains instructions for the .sig files in the Firefox 68 release that won’t be in your build.
  • The omni.ja files are different. If you extract them (they are uncompressed zip files with a few tweaks to the format), you’ll see the only difference is in modules/AppConstants.jsm, for the three API keys you created a file for earlier.

Everything else is identical bit for bit.

All the above is a rather long list of manual steps. Ideally, most of it would be automated. We’re not there yet. We only recently got to the point where the profile data is available to make it possible at all. In other words, this is a starting point. It’s valuable to know that it does work, that it still requires manual steps, and what those steps are.

It is also worth noting that while the above downloads and uses pre-built compilers and other tools, it is also possible to rebuild those, although they likely won’t be bit-for-bit identical. But differences in those shouldn’t result in differences in Firefox. Replacing the pre-built ones with ones you’d build yourself unfortunately currently requires some more manual work.

As for Windows and Mac builds, long story short, they are not reproducible as of writing. Mac builds are not optimized with PGO, but Windows builds are, and their profile data won’t be available until Firefox 69. Both platforms require SDKs that Mozilla can’t redistribute per their license (but are otherwise available for download from Microsoft or Apple, respectively), which makes the setup more complex. And in all likelihood, for both platforms, the toolchains are not deterministic yet (that’s at least true for Mac). Also, binary signatures would need to be stripped off the executables and libraries before any comparison.

Categories: Mozilla-nl planet

Mozilla Security Blog: Grizzly Browser Fuzzing Framework

Thu, 11/07/2019 - 02:54

At Mozilla, we rely heavily on automation to increase our ability to fuzz Firefox and the components from which it is built. Our fuzzing team is constantly developing tools to help integrate new and existing capabilities into our workflow with a heavy emphasis on scaling. Today we would like to share Grizzly – a browser fuzzing framework that has enabled us to quickly and effectively deploy fuzzers at scale.

Grizzly was designed to allow fuzzer developers to focus solely on writing fuzzers and not worry about the overhead of creating tools and scripts to run them. It was created as a platform for our team to run internal and external fuzzers in a common way using shared tools. It is cross-platform and supports running multiple instances in parallel.

Grizzly is responsible for:

  • managing the browser (via Target)
    • launching
    • terminating
    • monitoring logs
    • monitoring resource usage of the browser
    • handling crashes, OOMs, hangs… etc
  • managing the fuzzer/test case generator tool (via Adapter)
    • setup and teardown of tool
    • providing input for the tool (if necessary)
    • creating test cases
  • serving test cases
  • reporting results
    • basic crash deduplication is performed by default
    • FuzzManager support is available (with advanced crash deduplication)

Grizzly is extensible by extending the “Target” or “Adapter” interface. Targets are used to add support for specific browsers. This is where the quirks and complexities of each browser are handled. See puppet_target.py for an example which uses FFPuppet to add support for Firefox. Adapters are used to add support for fuzzers. A basic functional example can be found here. See here for a slightly more advanced example that can be modified to support existing fuzzers.

Grizzly is primarily intended to support blackbox fuzzers. For a feedback driven fuzzing interface please see the libfuzzer fuzzing interface. Grizzly also has a test case reduction mode that can be used on crashes it finds.

For more information please check out the README.md in the repository and the wiki. Feel free to ask questions on IRC in #fuzzing.

The post Grizzly Browser Fuzzing Framework appeared first on Mozilla Security Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Testing Picture-in-Picture for videos in Firefox 69 Beta and Developer Edition

Wed, 10/07/2019 - 14:29

Editor’s Note: We updated this post on July 11, 2019 to mention that the Picture-in-Picture feature is currently only enabled in Firefox 69 Beta and Developer Edition on Windows. We apologize for getting your hopes up if you’re on macOS or Linux, and we hope to have this feature enabled on those platforms once it reaches our quality standards.

Have you ever needed to scan a recipe while also watching a cooking video? Or perhaps you wanted to watch a recording of a lecture while also looking at the course slides. Or maybe you wanted to watch somebody stream themselves playing video games while you work.

We’ve recently shipped a version of Firefox for Windows on our Beta and Developer Edition release channels with an experimental feature that aims to make this easier for you to do!

Picture-in-Picture allows you to pop a video out from where it’s being played into a special kind of window that’s always on top. Then you can move that window around or resize it however you need!

There are two ways to pop out a video into a Picture-in-Picture window:

Via the context menu

If you open the context menu on a <video> element, you’ll sometimes see the media context menu that looks like this:

Showing the default context menu when opened on a video element, with the Picture-in-Picture menu item highlighted.

There’s a Picture-in-Picture menu item in that context menu that you can use to toggle the feature.

Many sites, however, make it difficult to access the context menu for <video> elements. YouTube, for example, overrides the default context menu with their own.
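
For illustration, this is the general technique a site can use to replace the native menu on its player (a sketch, not YouTube's actual code; showCustomPlayerMenu is a hypothetical site-provided function):

function showCustomPlayerMenu(event) {
  // hypothetical: render the site's own menu at the click position
}

const player = document.querySelector('video');
player.addEventListener('contextmenu', event => {
  event.preventDefault();       // suppress the native menu (and its Picture-in-Picture item)
  showCustomPlayerMenu(event);  // show the site's own menu instead
});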

You can get to the default native context menu by either holding Shift while right-clicking, or double right-clicking. We feel, however, that this is not the most obvious gesture for accessing the feature, so that leads us to the other toggling mechanism – the Picture-in-Picture video toggle.

Via the new Picture-in-Picture video toggle

The Picture-in-Picture toggle appears when you hover over videos with the mouse cursor. It is a small blue rectangle that slides out when you hover over it. Clicking on the blue rectangle will open the underlying video in the Picture-in-Picture player window.

Showing the Picture-in-Picture toggle overlaying a video element on YouTube.

Note that the toggle doesn’t appear when hovering over all videos. We only show it for videos that include an audio track and are also of sufficient size and play length.

The advantage of the toggle is that we think we can make this work for most sites out of the box, without making the site authors do anything special!

Using the Picture-in-Picture player window

The Picture-in-Picture window also gives you the ability to quickly play or pause the video — hovering the video with your mouse will expose that control, as well as controls for closing the window and for closing the window while returning you to the tab that the video came from.

Asking for your feedback

We’re still working on hammering out keyboard accessibility, as well as some issues on how the video is displayed at extreme window sizes. We wanted to give Firefox Beta and Developer Edition users on Windows the chance to try the feature out and let us know how it feels. We’ll use the information that we gather to determine whether or not we’ve got the UI right for most users, or need to go back to the drawing board. We’re also hoping to bring this same Picture-in-Picture support to macOS and Linux in the near future.

We’re particularly interested in feedback on the video toggle — there’s a fine balance between discoverability and obtrusiveness, and we want to get a clearer sense of where the blue toggle falls for users on sites out in the wild.

So grab yourself an up-to-date copy of Firefox 69 Beta or Developer Edition for Windows, and give Picture-in-Picture a shot! If you’ve got constructive feedback to share, here’s a form you can use to submit it.

Happy testing!

The post Testing Picture-in-Picture for videos in Firefox 69 Beta and Developer Edition appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Niko Matsakis: AiC: Unbounded queues and lang design

Wed, 10/07/2019 - 06:00

I have been thinking about how language feature development works in Rust [1]. I wanted to write a post about what I see as one of the key problems: too much concurrency in our design process, without any kind of “back-pressure” to help keep the number of “open efforts” under control. This setup does enable us to get a lot of things done sometimes, but I believe it also leads to a number of problems.

Although I don’t make any proposals in this post, I am basically advocating for changes to our process that can help us to stay focused on a few active things at a time. Basically, incorporating a notion of capacity such that, if we want to start something new, we either have to finish up with something or else find a way to grow our capacity.

The feature pipeline

Consider how a typical language feature gets introduced today:

  • Initial design in the form of an RFC. This is done by the lang team.
  • Initial implementation is done. This work is overseen by the compiler team, but often it is done by a volunteer contributor who is not themselves affiliated.
  • Documentation work is done, again often by a contributor, overseen by the docs team.
  • Experimentation in nightly takes places, often leading to changes in the design. (These changes have their own FCP periods.)
  • Finally, at some point, we stabilize the feature. This involves a stabilization report that summarizes what has changed, known bugs, what tests exist, and other details. This decision is made by the lang team.

At any given time, therefore, we have a number of features at each point in the pipeline – some are being designed, some are waiting for an implementor to show up, etc.

Today we have unbounded queues

One of the challenges is that the “links” between these pipeline are effectively unbounded queues. It’s not uncommon that we get an RFC for a piece of design that “seems good”. The RFC gets accepted. But nobody is really driving that work – as a result, it simply languishes. To me, the poster child for this is RFC 66 – a modest change to our rules around the lifetime of temporary values. I still think the RFC is a good idea (although its wording is very imprecise and it needs to be rewritten to be made precise). But it’s been sitting around unimplemented since June of 2014. At this point, is the original decision approving the RFC even still valid? (I sort of think no, but we don’t have a formal rule about that.)

How can an RFC sit around for 5 years?

Why did this happen? I think the reason is pretty clear: the idea was good, but it didn’t align with any particular priority. We didn’t have resources lined up behind it. It needed somebody from the lang team (probably me) to rewrite its text to be actionable and precise [2]. It needed somebody from the compiler team (maybe me again) to either write a PR or mentor somebody through it. And all those people were busy doing other things. So why did we accept the PR in the first place? Well, why wouldn’t we? Nothing in the process states that we should consider available resources when making an RFC decision.

Unbounded queues lead to confusion for users

So why does it matter when things sit around? I think it has a number of negative effects. The most obvious is that it sends really confusing signals to people trying to follow along with Rust’s development. It’s really hard to tell what the current priorities are; it’s hard to tell when a given feature might actually appear. Some of this we can help resolve just by better labeling and documentation.

Unbounded queues make it harder for teams

But there are other, more subtle effects. Overall, it makes it much harder for the team itself to stay organized and focused and that in turn can create a lot of stress. Stress in turn magnifies all other problems.

How does it make it harder to stay organized? Under the current setup, people can add new entries into any of these queues at basically any time. This can come in many forms, such as new RFCs (new design work and discussion), proposed changes to an existing design (new design or implementation work), etc.

Just having a large number of existing issues means that, in a very practical sense, it becomes challenging to follow GitHub notifications or stay on top of all the things going on. I’ve lost count of the number of attempts I’ve made at this personally.

Finally, the fact that design work stretches over such long periods (frequently years!) makes it harder to form stable communities of people that can dig deeply into an issue, develop a rapport, and reach a consensus.

Leaving room for serendipity?

Still, there’s a reason that we setup the system the way we did. This setup can really be a great fit for an open source project. After all, in an open source project, it can be really hard for us to figure out how many resources we actually have. It’s certainly more than the number of folks on the teams. It happens pretty regularly that people appear out of the blue with an amazing PR implementing some feature or other – and we had no idea they were working on it!

In the 2018 RustConf keynote, we talked about the contrast between OSS by serendipity and OSS on purpose. We were highlighting exactly this tension: on the one hand, Rust is a product, and like any product it needs direction. But at the same time, we want to enable people to contribute as much as we can.

Reviewing as the limited resource

Still, while the existing setup helps ensure that there are many opportunities for people to get involved, it also means that people who come with a new idea, PR, or whatever may wind up waiting a long time to get a response. Often the people who are supposed to answer are just busy doing other things. Sometimes, there is a (often unspoken) understanding that a given issue is just not high enough priority to worry about.

In an OSS project, therefore, I think that the right way to measure capacity is in terms of reviewer bandwidth. Here I mean “reviewer” in a pretty general way. It might be someone who reviews a PR, but it might also be a lang team member who is helping to drive a particular design forward.

Leaving room for new ideas?

One other thing I’ve noticed that’s worth highlighting is that, sometimes, hard ideas just need time to bake. Trying to rush something through the design process can be a bad idea.

Consider specialization: On the one hand, this feature was first proposed in July of 2015. We had a lot of really important debate at the time about the importance of parametricity and so forth. We have an initial implementation. But there was one key issue that never got satisfactorily resolved, a technical soundness concern around lifetimes and traits. As such, the issue has sat around – it would get periodically discussed but we never came to a satisfactory conclusion. Then, in Feb of 2018, I had an idea which aturon then extended in April. It seems like these ideas have basically solved the problem, but we’ve been busy in the meantime and haven’t had time to follow up.

This is a tricky case: maybe if we had tried to push specialization all the way to stabilization, we would have had these same ideas. But maybe we wouldn’t have. Overall, I think that deciding to wait has worked out reasonably well for us, but probably not optimally. I think in an ideal world we would have found some useful subset of specialization that we could stabilize, while deferring the tricky questions.

Tabling as an explicit action

Thinking about specialization leads to an observation: one of the things we’re going to have to figure out is how to draw good boundaries so that we can push out a useful subset of a feature (an “MVP”, if you will) and then leave the rest for later. Unlike today, though, I think this should be an explicit process, where we take the time to document the problems we still see and our current understanding of the space, and then explicitly “table” the remainder of the work for another time.

People need help to set limits

One of the things I think we should put into our system is some kind of hard cap on the number of things you can do at any given time. I’d like this cap to be pretty small, like one or two. This will be frustrating. It will be tempting to say “sure I’m working on X, but I can make a little time for Y too”. It will also slow us down a bit.

But I think that’s ok. We can afford to do a few less things. Or, if it seems like we can’t, that’s probably a sign that we need to grow that capacity: find more people we trust to do the reviews and lead the process. If we can’t do that, then we have to adjust our ambitions.

In other words, in the absence of a cap, it is very easy to “stretch” to achieve our goals. That’s what we’ve done often in the past. But you can only stretch so far and for so long.

Conclusion

As I wrote in the beginning, I’m not making any proposals in this post, just sharing my current thoughts. I’d like to hear if you think I’m onto something here, or heading in the wrong direction. Here is a link to the Adventures in Consensus thread on internals.

One thing that has been pointed out to me is that these ideas resemble a number of management philosophies, most notably kanban. I don’t have much experience with that personally but it makes sense to me that others would have tried to tackle similar issues.

Footnotes
  1. I’m coming at this from the perspective of the lang team, but I think a lot of this applies more generally. 

  2. For that matter, it would be helpful if there were a spec of the current behavior for it to build off of. 

Categories: Mozilla-nl planet

Mozilla Addons Blog: Changes in Firefox 68

Tue, 09/07/2019 - 17:00

Firefox 68 is coming out today, and we wanted to highlight a few of the changes coming to add-ons. We’ve updated addons.mozilla.org (AMO) and the Add-ons Manager (about:addons) in Firefox to help people find high-quality, secure extensions more easily. We’re also making it easier to manage installed add-ons and report potentially harmful extensions and themes directly from the Add-ons Manager.

Recommended Extensions

In April, we previewed the Recommended Extensions program as one of the ways we plan to make add-ons safer. This program will make it easier for users to discover extensions that have been reviewed for security, functionality, and user experience.

In Firefox 68, you may begin to notice the first small batch of these recommendations in the Add-ons Manager. Recommendations will include star ratings and the number of users that currently have the extension installed. All extensions recommended in the Add-ons Manager are vetted through the Recommended Extensions program.

As the first iteration of a new design, you can expect some clean-up in upcoming releases as we refine it and incorporate feedback.


On AMO starting July 15, Recommended extensions will receive special badging to indicate their inclusion in the program. Additionally, the AMO homepage will be updated to only display Recommended content, and AMO search results will place more emphasis on Recommended extensions.

Note: We previously stated that modifications to AMO would occur on July 11. This has been changed to July 15.

AMO recommended extension badge

As the Recommended Extensions program continues to evolve, more extensions will be added to the curated list.

Add-ons management and abuse reporting

In alignment with design changes in Firefox, we’ve refreshed the Add-ons Manager to deliver a cleaner user experience. As a result, an ellipsis (3-dot) icon has been introduced to keep options organized and easy to find. You can find all the available controls, including the option to report an extension or theme to Mozilla, in one place.


The new reporting feature allows users to provide us with a better understanding of the issue they’re experiencing. This new process can be used to report any installed extension, whether it was installed from AMO or somewhere else.

Select issue type when reporting an extension

Users can also report an extension or theme when they uninstall an add-on. More information about the new abuse reporting process is available here.

Permissions

It’s easy to forget about the permissions that were previously granted to an extension. While most extensions are created by trustworthy third-party developers, we recommend periodically checking what you have installed, what permissions you’ve granted, and making sure you only keep the ones you really want.

Starting in Firefox 68, you can view the permissions of installed extensions directly in the Add-ons Manager, making it easier to perform these periodic checks. Here’s a summary of all extension permissions, so you can review them for yourself when deciding which extensions to keep installed.


In upcoming releases, we will be adjusting and refining changes to the Add-ons Manager to continue aligning the design with the rest of Firefox and incorporating feedback we receive. We’re also developing a Recommended Extensions Community Board for contributors to assist with extension recommendations—we’ll have more information soon.

The post Changes in Firefox 68 appeared first on Mozilla Add-ons Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 68: BigInts, Contrast Checks, and the QuantumBar

Tue, 09/07/2019 - 16:35

Firefox 68 is available today, featuring support for big integers, whole-page contrast checks, and a completely new implementation of a core Firefox feature: the URL bar.

These are just the highlights. For complete information, see:

BigInts for JavaScript

Firefox 68 now supports JavaScript’s new BigInt numeric type.

Screenshot of the DevTools Console showing 2**64 with normal numbers and with BigInts. The screenshot shows how floating point numbers lose precision as they grow larger.

Since its introduction, JavaScript has only had a single numeric type: Number. By definition, Numbers in JavaScript are floating point numbers, which means they can represent both integers (like 22 or 451) and decimal fractions (like 6.28 or 0.30000000000000004). However, this flexibility comes at a cost: 64-bit floats cannot reliably represent integers larger than 2 ** 53.

» 2 ** 53
9007199254740992
» (2 ** 53) + 1
9007199254740992  // <- Shouldn't that end in 3?
» (2 ** 53) + 2
9007199254740994

This limitation makes it difficult to work with very large numbers. For example, it’s why Twitter’s JSON API returns Tweet IDs as strings instead of literal numbers.

BigInt makes it possible to represent arbitrarily large integers.

» 2n ** 53n  // <-- the "n" means BigInt
9007199254740992n
» (2n ** 53n) + 1n
9007199254740993n  // <- It ends in 3!
» (2n ** 53n) + 2n
9007199254740994n

JavaScript does not automatically convert between BigInts and Numbers, so you can’t mix and match them in the same expression, nor can you serialize them to JSON.

» 1n + 2
TypeError: can't convert BigInt to number
» JSON.stringify(2n)
TypeError: BigInt value can't be serialized in JSON

You can, however, losslessly convert BigInt values to and from strings:

» BigInt("994633657141813248")
994633657141813248n
» String(994633657141813248n)
"994633657141813248"  // <-- The "n" goes away

The same is not true for Numbers — they can lose precision when being parsed from a string:

» Number("994633657141813248")
994633657141813200  // <-- Off by 48!
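
Since BigInts convert to strings losslessly, one common workaround for the JSON limitation (shown here as an illustrative sketch, not part of the original article) is to pass a replacer function to JSON.stringify:

// Serialize BigInt values as strings, because JSON.stringify refuses them directly.
const data = { id: 994633657141813248n };
const json = JSON.stringify(data, (key, value) =>
  typeof value === 'bigint' ? value.toString() : value
);
console.log(json); // {"id":"994633657141813248"}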

MDN has much more information on BigInt.

Accessibility Checks in DevTools

Each release of Firefox brings improved DevTools, but Firefox 68 marks the debut of a brand new capability: checking for basic accessibility issues.

Screenshot of the Accessibility panel in the Firefox DevTools, showing the results of a text contrast check. The page being checked is a Wired article from 2016, "How the Web Became Unreadable." Ironically, its header fails the contrast check.

With Firefox 68, the Accessibility panel can now report any color contrast issues with text on a page. More checks are planned for the future.

We’ve also:

  • Included a button in the Inspector that enables “print media emulation,” making it easy to see what elements of a page would be visible when printed. (Try it on Wikipedia!)
  • Improved CSS warnings in the console to show more information and include a link to related nodes.
    Screenshot of the Firefox DevTools showing a CSS warning in the Console of the form "unknown property 'foo', declaration dropped." The warning shows a list of nodes matching the selector with the erroneous rule.
  • Added support for adjusting letter spacing in the Font Editor.
  • Implemented RegEx-based filtering in the DevTools Console: just enclose your query in slashes, like /(foo|bar)/.
  • Made it possible to block specific requests by right-clicking on them in the Network panel.

Firefox 68 also includes refinements to the smarter debugging features we wrote about a few weeks ago.

Web Compatibility

Keeping the Web open is hard work. Sometimes browsers disagree on how to interpret web standards. Other times, browsers implement and ship their own ideas without going through the standards process. Even worse, some developers intentionally block certain browsers from their sites, regardless of whether or not those browsers would have worked.

Screenshot of a blank webpage telling the visitor that the site is "currently not supporting your browser."

At Mozilla, we call these “Web Compatibility” problems, or “webcompat” for short.

Each release of Firefox contains fixes for webcompat issues. For example, Firefox 68 implements:

In the latter case, even with a standard line-clamp property in the works, we have to support the -webkit- version to ensure that existing sites work in Firefox.

Unfortunately, not all webcompat issues are as simple as implementing non-standard APIs from other browsers. Some problems can only be fixed by modifying how Firefox works on a specific site, or even telling Firefox to pretend to be something else in order to evade browser sniffing.

Screenshot of a webpage that blocks Firefox users, but which works perfectly with a webcompat intervention.

We deliver these targeted fixes as part of the webcompat system add-on that’s bundled with Firefox. This makes it easier to update our webcompat interventions as sites change, without needing to bake those fixes directly into Firefox itself. And as of Firefox 68, you can view (and disable) these interventions by visiting about:compat and toggling the relevant switches.

Our first preference is always to help developers ensure their sites work on all modern browsers, but we can only address the problems that we’re aware of. If you run into a web compatibility issue, please report it at webcompat.com.

CSS: Scroll Snapping and Marker Styling

Firefox 68 supports the latest syntax for CSS scroll snapping, which provides a standardized way to control the behavior of scrolling inside containers. You can find out more in Rachel Andrew’s article, CSS Scroll Snap Updated in Firefox 68.

https://hacks.mozilla.org/files/2019/07/scroll-snap.mp4

As shown in the video above, scroll snapping allows you to start scrolling a container so that, when a certain threshold is reached, letting go will neatly finish scrolling to the next available snap point. It is easier to understand this if you try it yourself, so download Firefox 68 and try it out on some of the examples in the MDN Scroll Snapping docs.

And if you are wondering where this leaves the now old-and-deprecated Scroll Snap Points spec, read Browser compatibility and Scroll Snap.

Today’s release of Firefox also adds support for the ::marker pseudo-element. This makes it possible to style the bullets or counters that appear to the side of list items and summary elements.

Last but not least, CSS transforms now work on SVG elements like mask, marker, pattern and clipPath, which are indirectly rendered.

We have an entire article in the works diving into these and other CSS changes in Firefox 68; look for it later this month.

Browser: WebRender and QuantumBar Updates

Two months ago, Firefox 67 became the first Firefox release with WebRender enabled by default, though limited to users with NVIDIA GPUs on Windows 10. Firefox 68 expands that audience to include people with AMD GPUs on Windows 10, with more platforms on the way.

We’ve also been hard at work in other areas of Firefox’s foundation. The URL bar (affectionately known as the “AwesomeBar”) has been completely reimplemented using web technologies: HTML, CSS, and JavaScript. This new ”QuantumBar” should be indistinguishable from the previous AwesomeBar, but its architecture makes it easier to maintain and extend in the future. We move one step closer to the eventual elimination of our legacy XUL/XBL toolkit with this overhaul.

DOM APIs

Firefox 68 brings several changes to existing DOM APIs, notably:

  • Access to cameras, microphones, and other media devices is no longer allowed in insecure contexts like plain HTTP.
  • You can now pass the noreferrer option to window.open() to avoid leaking referrer information upon opening a link in a new window.
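
As a quick illustration of the second item (a sketch, not code from the article):

// Open a link in a new window without sending a Referer header. "noreferrer"
// also implies "noopener", so the new window cannot reach back via window.opener.
window.open('https://example.com/', '_blank', 'noreferrer');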

We’ve also added a few new APIs, including support for the Visual Viewport API on Android, which returns the viewport taking into consideration things like on-screen keyboards or pinch-zooming. These may result in a smaller visible area than the overall layout viewport.

It is also now possible to use the .decode() method on HTMLImageElement to download and decode elements before adding them to the DOM. For example, this API simplifies replacing low-resolution placeholders with higher resolution images: it provides a way to know that a new image can be immediately displayed upon insertion into the page.
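
Here is a minimal sketch of that pattern (illustrative only; the placeholder element and image URL are hypothetical):

// A low-resolution <img> assumed to already be in the page:
const placeholder = document.querySelector('#photo-placeholder');

const fullImage = new Image();
fullImage.src = 'photo-large.jpg'; // hypothetical URL for the high-resolution image
fullImage.decode()
  .then(() => placeholder.replaceWith(fullImage)) // safe to insert: it can paint immediately
  .catch(() => { /* keep the low-resolution placeholder if decoding fails */ });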

More Inside

These highlights just scratch the surface. In addition to these changes in Firefox, the last month has seen us release Lockwise, a password manager that lets you take your saved credentials with you on mobile. We’ve also released a brand new Firefox Preview on Android, and more.

From all of us at your favorite nominee for Internet Villain of the Year, thank you for choosing Firefox.

The post Firefox 68: BigInts, Contrast Checks, and the QuantumBar appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

The Servo Blog: Media stack Mid-Year review

Tue, 09/07/2019 - 02:00

We recently closed the first half of 2019 and with that it is time to look back and do a quick summary of what the media team has achieved during this 6 months period.

Looking at some stats, we merged 87 Pull Requests, we opened 56 issues, we closed 42 issues and we welcomed 13 new amazing contributors to the media stack.

A/V playback

These are some of the selected A/V playback related accomplishments from the first half of the year (H1):

Media cache and improved seeking

We significantly improved the seeking experience of audio and video files by implementing preloading and buffering support and a media cache.

Basic media controls

After a few months of work we got partial support for the Shadow DOM API, which gave us the opportunity to implement our first basic set of media controls.

media controls

The UI is not perfect, among other things, because we still have no way to render a progress or volume bar properly, as that depends on the <input type="range"> layout, which so far is rendered as a simple text box instead of the usual slider with a thumb.

GStreamer backend for MagicLeap

Another great achievement by Xavier Claessens from Collabora has been the GStreamer backend for Magic Leap. The work is not completely done yet, but as you can see on the animation below, he already managed to paint a full screen video on the Magic Leap device.

magic leap video

Hardware accelerated decoding

One of the most wanted features that we have been working on for almost a year and that has recently landed is hardware accelerated decoding.

Thanks to the excellent and constant work from the Igalian Víctor Jáquez, Servo recently gained support for hardware-accelerated media playback, which means lower CPU usage, better battery life and better thermal behaviour, among other goodies.

We only have support on Linux and Android (EGL and Wayland) so far. Support for other platforms is on the roadmap.

The numbers we are getting are already pretty nice. You might not be able to see it clearly on the video, but the renderer CPU time for the non hardware accelerated playback is ~8ms, compared to the ~1ms of CPU time that we get with the accelerated version.

Improved web compatibility of our media elements implementation

We also got a bunch of other smaller features that significantly improved the web compatibility of our media elements.

WebAudio

We also got a few additions in WebAudio land.

WebRTC

Thanks to jdm’s and Manishearth’s work, Servo now has the foundations of a WebRTC implementation and is able to perform 2-way calling with audio and video playback coming from the getUserMedia API.

Next steps

That’s not all folks! We have exciting plans for the second half of 2019.

A/V playback

On the A/V playback land, we want to:

  • Focus on adding hardware accelerated playback on Windows and OSX.
  • Add support for fullscreen playback.
  • Add support for 360 video.
  • Improve the existing media controls by, for instance, implementing a nicer layout for the <input type="range"> element, with a proper slider and a thumb, so we can have progress and volume bars.
WebAudio

For WebAudio there are plans to make some architectural improvements related to the timeline and the graph traversals.

We would also love to work on the MediaElementAudioSourceNode implementation.

WebRTC

For WebRTC, data channels are on the roadmap for the second half.

We currently support the playback of a single stream of audio and video simultaneously, so allowing the playback of multiple simultaneous streams of each type is also something that we would like to get during the following months.

Others

There were also plans to implement support for a global mute feature, and I am happy to say that khodza already got this done right at the start of the second half.

Finally, we have been trying to get Youtube to work on Servo, but it turned out to be a difficult task because of non-media related issues (i.e. layout or web compatibility issues), so we decided to adjust the goal and focus on embedded Youtube support instead.

Categories: Mozilla-nl planet

Wladimir Palant: Various RememBear security issues

Mon, 08/07/2019 - 11:05

Whenever I write about security issues in some password manager, people will ask what I’m thinking about their tool of choice. And occasionally I’ll take a closer look at the tool, which is what I did with the RememBear password manager in April. Technically, it is very similar to its competitor 1Password, to the point that the developers are being accused of plagiarism. Security-wise the tool doesn’t appear to be as advanced however, and I quickly found six issues (severity varies) which have all been fixed since. I also couldn’t fail noticing a bogus security mechanism, something that I already wrote about.

Stealing remembear.com login tokens

Password managers will often give special powers to “their” website. This is generally an issue, because compromising this website (e.g. via an all too common XSS vulnerability) will give attackers access to this functionality. In case of RememBear, things turned out to be easier however. The following function was responsible for recognizing privileged websites:

isRememBearWebsite() {
    let remembearSites = this.getRememBearWebsites();
    let url = window.getOriginUrl();
    let foundSite = remembearSites.firstOrDefault(allowed => url.indexOf(allowed) === 0, undefined);
    if (foundSite) {
        return true;
    }
    return false;
}

We’ll get back to window.getOriginUrl() later, it not actually producing the expected result. But the important detail here: the resulting URL is being compared against some whitelisted origins by checking whether it starts with an origin like https://remembear.com. No, I didn’t forget the slash at the end here, there really is none. So this code will accept https://remembear.com.malicious.info/ as a trusted website!
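
A small sketch (mine, for illustration) of why the missing slash matters, and what a stricter origin comparison looks like:

const allowed = "https://remembear.com";

// The prefix check accepts a lookalike host:
console.log("https://remembear.com/account".indexOf(allowed) === 0);         // true (intended)
console.log("https://remembear.com.malicious.info/".indexOf(allowed) === 0); // true (not intended!)

// Comparing the parsed origin exactly does not:
console.log(new URL("https://remembear.com.malicious.info/").origin === allowed); // false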

Luckily, the consequences aren’t as severe as with similar LastPass issues for example. This would only give attacker’s website access to the RememBear login token. That token will automatically log you into the user’s RememBear account, which cannot be used to access passwords data however. It will “merely” allow the attacker to manage user’s subscription, with the most drastic available action being deleting the account along with all passwords data.

Messing with AutoFill functionality

AutoFill functionality of password managers is another typical area where security issues are found. RememBear requires a user action to activate AutoFill which is an important preventive measure. Also, AutoFill user interface will be displayed by the native RememBear application, so websites won’t have any way of messing with it. I found multiple other aspects of this functionality to be exploitable however.

Most importantly, RememBear would not verify that it filled in credentials on the right website (a recent regression according to the developers). Given that considerable time can pass between the user clicking the bear icon to display AutoFill user interface and the user actually selecting a password to be filled in, one cannot really expect that the browser tab is still displaying the same website. RememBear will happily continue filling in the password however, not recognizing that it doesn’t belong to the current website.

Worse yet, RememBear will try to fill out passwords in all frames of a tab. So if https://malicious.com embeds a frame from https://mybank.com and the user triggers AutoFill on the latter, https://malicious.com will potentially receive the password as well (e.g. via a hidden form). Or even less obvious: if you go to https://shop.com and that site has third-party frames e.g. for advertising, these frames will be able to intercept any of your filled in passwords.

Public Suffix List implementation issues

One point on my list of common AutoFill issues is: Domain name is not “the last two parts of a host name.” On the first glance, RememBear appears to have this done correctly by using Mozilla’s Public Suffix List. So it knows in particular that the relevant part of foo.bar.example.co.uk is example.co.uk and not co.uk. On a closer glance, there are considerable issues in the C# based implementation however.

For example, there is some rather bogus logic in the CheckPublicTLDs() function and I’m not even sure what this code is trying to accomplish. You will only get into this function for multi-part public suffixes where one of the parts has more than 3 characters – meaning pilot.aero for example. The code will correctly recognize example.pilot.aero as being the relevant part of the foo.bar.example.pilot.aero host name, but it will come to the same conclusion for pilot.aeroexample.pilot.aero as well. Since domains are being registered under the pilot.aero namespace, the two host names here actually belong to unrelated domains, so the bug here allows one of them to steal credentials for the other.

The other issue is that the syntax of the Public Suffix List is processed incorrectly. This results for example in the algorithm assuming that example.asia.np and malicious.asia.np belong to the same domain, so that credentials will be shared between the two. With asia.np being the public suffix here, these host names are unrelated however.
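
To illustrate the rule the C# code gets wrong, here is a toy sketch (not RememBear's code) of the "longest matching public suffix plus one label" logic, using a tiny hard-coded subset of the list and ignoring wildcard and exception rules:

const suffixes = ["com", "co.uk", "pilot.aero", "asia.np"];

function registrableDomain(host) {
  const labels = host.split(".");
  for (let i = 0; i < labels.length; i++) {
    // Candidates shrink from the left, so the first match is the longest suffix.
    const candidate = labels.slice(i).join(".");
    if (suffixes.includes(candidate)) {
      // The registrable domain is the matched public suffix plus one more label.
      return labels.slice(Math.max(i - 1, 0)).join(".");
    }
  }
  return host;
}

console.log(registrableDomain("foo.bar.example.pilot.aero"));   // "example.pilot.aero"
console.log(registrableDomain("pilot.aeroexample.pilot.aero")); // "aeroexample.pilot.aero" (a different domain)
console.log(registrableDomain("example.asia.np"));              // "example.asia.np"
console.log(registrableDomain("malicious.asia.np"));            // "malicious.asia.np" (unrelated to the above)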

Issues saving passwords

When you enter a password on some site, RememBear will offer you to save it – fairly common functionality. However, this will fail spectacularly under some circumstances, and that’s partially due to the already mentioned window.getOriginUrl() function which is implemented as follows:

if (window.location.ancestorOrigins != undefined && window.location.ancestorOrigins.length > 0) {
    return window.location.ancestorOrigins[0];
} else {
    return window.location.href;
}

Don’t know what window.location.ancestorOrigins does? I didn’t know either, it being a barely documented Chrome/Safari feature which undermines referrer policy protection. It contains the list of origins for parent frames, so this function will return the origin of the parent frame if there is any – the URL of the current document is completely ignored.

While AutoFill doesn’t use window.getOriginUrl(), saving passwords does. So if in Chrome https://evil.com embeds a frame from https://mybank.com and the user logs into the latter, RememBear will offer to save the password. But instead of saving that password for https://mybank.com it will store it for https://evil.com. And https://evil.com will be able to retrieve the password later if the user triggers AutoFill functionality on their site. But at least there will be some warning flags for the user along the way…

There was one more issue: the function hostFromString() used to extract host name from URL when saving passwords was using a custom URL parser. It wouldn’t know how to deal with “unusual” URL schemes, so for data:text/html,foo/example.com:// or about:blank#://example.com it would return example.com as the host name. Luckily for RememBear, its content scripts wouldn’t run on any of these URLs, at least in Chrome. In their old (and already phased out) Safari extension this likely was an issue and would have allowed websites to save passwords under an arbitrary website name.
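
For comparison, the standard URL parser handles these cases correctly (an illustrative sketch, not the extension's code):

console.log(new URL("data:text/html,foo/example.com://").hostname); // "" (data: URLs have no host)
console.log(new URL("about:blank#://example.com").hostname);        // ""
console.log(new URL("https://example.com/path").hostname);          // "example.com"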

Timeline
  • 2019-04-09: After discovering the first security vulnerability I am attempting to find a security contact. There is none, so I ask on Twitter. I get a response on the same day, suggesting to invite me to a private bug bounty program. This route fails (I’ve been invited to that program previously and rejected), so we settle on using the support contact as fallback.
  • 2019-04-10: Reported issue: “RememBear extensions leak remembear.com token.”
  • 2019-04-10: RememBear fixes “RememBear extensions leak remembear.com token” issue and updates their Firefox and Chrome extensions.
  • 2019-04-11: Reported issue “No protection against logins being filled in on wrong websites.”
  • 2019-04-12: Reported issues: “Unrelated websites can share logins”, “Wrong interpretation of Mozilla’s Public Suffix list”, “Login saved for wrong site (frames in Chrome)”, “Websites can save logins for arbitrary site (Safari).”
  • 2019-04-23: RememBear fixes parts of the “No protection against logins being filled in on wrong websites” issue in the Chrome extension.
  • 2019-04-24: RememBear confirms that “Websites can save logins for arbitrary site (Safari)” issue doesn’t affect any current products but they intend to remove hostFromString() function regardless.
  • 2019-05-27: RememBear reports having fixed all outstanding issues in the Windows application and Chrome extension. macOS application is supposed to follow a week later.
  • 2019-06-12: RememBear updates Firefox extension as well.
  • 2019-07-08: Coordinated disclosure.
Categories: Mozilla-nl planet

Niko Matsakis: Async-await status report #2

Mon, 08/07/2019 - 06:00

I wanted to give an update on the status of the “async-await foundations” working group. This post aims to cover three things:

  • the “async await MVP” that we are currently targeting;
  • how that fits into the bigger picture;
  • and how you can help, if you’re so inclined;
Current target: async-await MVP

We are currently working on stabilizing what we call the async-await MVP – as in, “minimal viable product”. As the name suggests, the work we’re doing now is basically the minimum that is needed to “unlock” async-await. After this work is done, it will be easier to build async I/O based applications in Rust, though a number of rough edges remain.

The MVP consists of the following pieces:

  • the Future trait;
  • basic async-await syntax;
  • the async book.

The future trait

The first of these bullets, the future trait, was stabilized in the 1.36.0 release. This is important because the Future trait is the core building block for the whole Async I/O ecosystem. Having a stable future trait means that we can begin the process of consolidating the ecosystem around it.

Basic async-await syntax

Now that the future trait is stable, the next step is to stabilize the basic “async-await” syntax. We are presently shooting to stabilize this in 1.38. We’ve finished the largest work items, but there are still a number of things left to get done before that date – if you’re interested in helping out, see the “how you can help” section at the end of this post!

The current support we are aiming to stabilize permits async fn, but only outside of traits and trait implementations. This means that you can write free functions like this one [1]:

// When invoked, returns a future that (once awaited) will yield back a result:
async fn process(data: TcpStream) -> Result<(), Box<dyn Error>> {
    let mut buf = vec![0u8; 1024];

    // Await data from the stream:
    let len = reader.read(&mut buf).await?;
    ...
}

or inherent methods:

impl MyType {
    // Same as above, but defined as a method on `MyType`:
    async fn process(data: TcpStream) -> Result<(), Box<dyn Error>> { .. }
}

You can also write async blocks, which generate a future “in place” without defining a separate function. These are particularly useful to pass as arguments to helpers like runtime::spawn:

let data: TcpStream;

runtime::spawn(async move {
    let mut buf = vec![0u8; 1024];
    let len = reader.read(&mut buf).await?;
    ...
})

Eventually, we plan to permit async fn in other places, but there are some complications to be resolved first, as will be discussed shortly.

The async book

One of the goals of this stabilization is that, once async-await syntax becomes available, there should be really strong documentation to help people get started. To that end, we’re rejuvenating the “async Rust” book. This book covers the nuts and bolts of Async I/O in Rust, ranging from simple examples with async fn all the way down to the details of how the future trait works, writing your own executors, and so forth. Take a look!

(Eventually, I expect some of this material may make its way into more standard books like The Rust Programming Language, but in the meantime we’re evolving it separately.)

Future work: the bigger picture

The current stabilization push, as I mentioned above, is aimed at getting an MVP stabilized – just enough to enable people to run off and start to build things. So you’re probably wondering, what are some of the things that come next? Here is a (incomplete) list of possible future work:

  • A core set of async traits and combinators. Basically a 1.0 version of the futures-rs repository, offering key interfaces like AsyncRead.
  • Better stream support. The futures-rs repository contains a Stream trait, but some work remains to support it well. This may include some form of for-await syntax (although that is not a given).
  • Generators and async generators. The same core compiler transform that enables async-await should enable us to support Python- or JS-like generators as a way to write iterators. Those same generators can then be made asynchronous to produce streams of data.
  • Async fn in traits and trait impls. Writing generic crates and interfaces that work with async fn is possible in the MVP, but not as clean or elegant as it could be (see the sketch after this list). Supporting async fn in traits is an obvious extension to make that nicer, though we have to figure out all of the interactions with the rest of the trait system.
  • Async closures. We would like to support the obvious async || syntax that would generate a closure. This may require tinkering with the Fn trait hierarchy.
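To make that contrast concrete, here is a minimal sketch of what the MVP requires today for a trait whose method returns a future. The Fetch trait, Client type, and fetch method are hypothetical names used only for illustration:

use std::future::Future;
use std::pin::Pin;

// In the MVP, a trait method that returns a future must spell out the
// return type manually, for example as a boxed, pinned future:
trait Fetch {
    fn fetch<'a>(&'a self, url: &'a str) -> Pin<Box<dyn Future<Output = String> + 'a>>;
}

struct Client;

impl Fetch for Client {
    fn fetch<'a>(&'a self, url: &'a str) -> Pin<Box<dyn Future<Output = String> + 'a>> {
        // An async block builds the future "in place"; Box::pin boxes and pins it.
        Box::pin(async move { format!("fetched {}", url) })
    }
}

// With async fn in traits, the same interface could eventually be written as:
//
// trait Fetch {
//     async fn fetch(&self, url: &str) -> String;
// }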
How you can get involved

There’s been a lot of great work on the async fn implementation since my first post – we’ve closed over 40 blocker issues! I want to give a special shout out to the folks who worked on those issues:2

  • davidtwco reworked the desugaring so that the drop order for parameters in an async fn and fn is analogous, and then heroically fixed a number of minor bugs that were filed as fallout from this change.
  • tmandry dramatically reduced the size of futures at runtime.
  • gilescope improved a number of error messages and helped to reduce errors.
  • matthewjasper reworked some details of the compiler transform to solve a large number of ICEs.
  • doctorn fixed an ICE when await was used in inappropriate places.
  • centril has been helping to enumerate tests and generally working on triage.
  • cramertj implemented the await syntax, wrote a bunch of tests, and, of course, did all of the initial implementation work.
  • and hey, I extended the region inferencer to support multiple lifetime parameters. I guess I get some credit too. =)

If you’d like to help push async fn over the finish line, take a look at our list of blocking issues. Anything that is not assigned is fair game! Just find an issue you like that is not assigned and use @rustbot claim to claim it. You can find out more about how our working group works on the async-await working group page. In particular, that page includes a link to the calendar event for our weekly meeting, which takes place in the #wg-async-foundations channel on the rust-lang Zulip – the next meeting is tomorrow (Tuesday)! But feel free to drop in any time with questions.

Footnotes
  1. Sadly, it seems like rouge hasn’t been updated yet to highlight the async or await keywords. Or maybe I just don’t understand how to upgrade it. =) 

  2. I culled this list by browsing the closed issues and who they were assigned to. I’m sorry if I forgot someone or minimized your role! Let me know and I’ll edit the post. <3 

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR15 available

za, 06/07/2019 - 22:01
TenFourFox Feature Parity Release 15 final is now available for testing (downloads, hashes, release notes). There are no changes from the beta other than outstanding security fixes. Assuming all goes well, it will go live Monday evening Pacific as usual.

Also, we now have Korean and Turkish language packs available for testing. If you want to give these a spin, download them here; the plan is to have them go live at the same time as FPR15. Thanks again to new contributor Tae-Woong Se and, of course, to Chris Trusch as always for organizing localizations and doing the grunt work of turning them into installers.

Not much work will occur on the browser for the next week or so due to family commitments and a couple of out-of-town trips, but I'll be looking at a few new things for FPR16, including some minor potential performance improvements and a font subsystem upgrade. There's still the issue of our outstanding JavaScript deficiencies as well, of course. More about that later.

Categorieën: Mozilla-nl planet

About:Community: Firefox 68 new contributors

vr, 05/07/2019 - 21:46

With the release of Firefox 68, we are pleased to welcome the 55 developers who contributed their first code change to Firefox in this release, 49 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet

The Mozilla Blog: Mozilla’s Latest Research Grants: Prioritizing Research for the Internet

vr, 05/07/2019 - 20:43

We are very happy to announce the results of our Mozilla Research Grants for the first half of 2019. This was an extremely competitive process, and we selected proposals which address twelve strategic priorities for the internet and for Mozilla. This includes researching better support for integrating Tor in the browser, improving scientific notebooks, using speech on mobile phones in India, and alternatives to advertising for funding the internet. The Mozilla Research Grants program is part of our commitment to being a world-class example of using inclusive innovation to impact culture, and reflects Mozilla’s commitment to open innovation.

We will open a new round of grants in Fall of 2019. See our Research Grant webpage for more details and to sign up to be notified when applications open.

Lead researchers, institutions, and project titles:

  • Valentin Churavy (MIT): Bringing Julia to the Browser
  • Jessica Outlaw (Concordia University of Portland): Studying the Unique Social and Spatial affordances of Hubs by Mozilla for Remote Participation in Live Events
  • Neha Kumar (Georgia Tech): Missing Data: Health on the Internet for Internet Health
  • Piotr Sapiezynski, Alan Mislove, & Aleksandra Korolova (Northeastern University & University of Southern California): Understanding the impact of ad preference controls
  • Sumandro Chattapadhyay (The Centre for Internet and Society (CIS), India): Making Voices Heard: Privacy, Inclusivity, and Accessibility of Voice Interfaces in India
  • Weihang Wang (State University of New York): Designing Access Control Interfaces for Wasmtime
  • Bernease Herman (University of Washington): Toward generalizable methods for measuring bias in crowdsourced speech datasets and validation processes
  • David Karger (MIT): Tipsy: A Decentralized Open Standard for a Microdonation-Supported Web
  • Linhai Song (Pennsylvania State University): Benchmarking Generic Functions in Rust
  • Leigh Clark (University College Dublin): Creating a trustworthy model for always-listening voice interfaces
  • Steven Wu (University of Minnesota): DP-Fathom: Private, Accurate, and Communication-Efficient
  • Nikita Borisov (University of Illinois, Urbana-Champaign): Performance and Anonymity of HTTP/2 and HTTP/3 in Tor

Many thanks to everyone who submitted a proposal and helped us publicize these grants.

The post Mozilla’s Latest Research Grants: Prioritizing Research for the Internet appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla GFX: moz://gfx newsletter #46

vr, 05/07/2019 - 16:26

Hi there! As previously announced WebRender has made it to the stable channel and a couple of million users are now using it without having opted into it manually. With this important milestone behind us, now is a good time to widen the scope of the newsletter and give credit to other projects being worked on by members of the graphics team.

The WebRender newsletter therefore becomes the gfx newsletter. This is still far from an exhaustive list of the work done by the team, just a few highlights in WebRender and graphics in general. I am hoping to keep the pace around a post per month, we’ll see where things go from there.

What’s new in gfx

Async zoom for desktop Firefox

Botond has been working on desktop zooming:

  • The work is currently focused on the ability to use pinch gestures to zoom (scaling only, no reflow, like on mobile) on desktop platforms.
  • The initial focus is on touchscreens, with support for touchpads to follow.
  • We hope to have this ready for some early adopters to try out in the coming weeks.
WebGL power usage

Jeff Gilbert has been working on power preference options for WebGL.
WebGL has three power preference options available during canvas context creation:

  • default
  • low-power
  • high-performance

The vast majority of web content implicitly requests “default”. Since we don’t want to regress web content performance, we usually treat “default” like “high-performance”. On macOS with multiple GPUs (MacBook Pro), this means activating the power-hungry dedicated GPU for almost all WebGL contexts. While this keeps our performance high, it also means every last one-off or transient WebGL context from ads, trackers, and fingerprinters will keep the high-power GPU running until they are garbage-collected.
In bug 1562812, Jeff added a ramp-up/ramp-down behavior for “default”: For the first couple seconds after WebGL context creation, things stay on the low-power GPU. After that grace period, if the context continues to render frames for presentation, we migrate to the high-power GPU. Then, if the context stops producing frames for a couple seconds, we release our lock of the high-power GPU, to try to migrate back to the low-power GPU.
What this means is that active WebGL content should fairly quickly end up ramped-up onto the high-power GPU, but inactive and orphaned WebGL contexts won’t keep the browser locked on to the high-power GPU anymore, which means better battery life for users on these machines as they wander the web.

DisplayList building optimization

Miko, Matt, Timothy and Dan have worked on improving display list build times

  • The two main areas of improvement have been avoiding unnecessary work during display list merging, and improving the memory access patterns during display list building.
  • The improved display list merging algorithm utilizes the invalidation assumptions of the frame tree, and avoids preprocessing sub display lists that cannot have changed. (bug 1544948)
  • Some commonly used display items have drastically shrunk in size, which has reduced memory usage and allocations. For example, the size of the transform display item went down from 1024 bytes to 512 bytes. (bug 1502049, bug 1526941)
  • The display item size improvements have also tangentially helped with caching and prefetching performance. For example, the base display item state booleans were collapsed into a bit field and moved to the first cache line. (bug 1526972, bug 1540785)
  • Retained display lists were enabled for Android devices and for the parent process. (bugs 1413567 and 1413546)
  • Telemetry probes show that since the Orlando All Hands, the mean display list build time has gone down by 40%, from ~1.8ms to ~1.1ms. The 95th percentile has gone down by 30%, from ~6.2ms to ~4.4ms.
  • While these numbers might seem low, they are still a considerable proportion of the target 16ms frame budget. There is more promising follow-up work scheduled in bugs 1539597 and 1554503.
What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Firefox‘s rendering engine as well as the research web browser Servo.

Software backend investigations

Glenn, Jeff Muizelaar and Jeff Gilbert are investigating running WebRender on top of SwiftShader or llvmpipe when the available GPU feature set is too small or too buggy. The hope is that these emulation layers will help us quickly migrate users that can’t have hardware acceleration to WebRender with a minimal amount of specific code (most likely some simple specialized blitting routines in a few hot spots where the emulation layer is unlikely to provide optimal speed).
It’s too early to tell at this point whether this experiment will pan out. We can see some (expected) regressions compared to the non-WebRender software backend, but with some amount of optimization it’s probable that performance will get close enough to provide an acceptable user experience.
This investigation is important since it will determine how much code we have to maintain specifically for non-accelerated users, a proportion of users that will decrease over time but will probably never quite get to zero.

Pathfinder 3 investigations

Nical spent some time in the last quarter familiarizing himself with Pathfinder’s internals and investigating its viability for rendering SVG paths in WebRender/Firefox. He wrote a blog post explaining in detail how Pathfinder fills paths on the GPU.
The outcome of this investigation is that we are optimistic about pathfinder’s approach and think it’s a good fit for integration in Firefox. In particular the approach is flexible and robust, and will let us improve or replace parts in the longer run.
Nical is also experimenting with a different tiling algorithm, aiming at reducing the CPU overhead.

The next steps are to prototype an integration of pathfinder 3 in WebRender, start using it for simple primitives such as CSS clip paths and gradually use it with more primitives.

Picture caching improvements

Glenn continues his work on picture caching. At the moment WebRender only triggers the optimization for a single scroll root. Glenn has landed a lot of infrastructure work to be able to benefit from picture caching at a more granular level (for example one per scroll-root), and pave the way for rendering picture cached slices directly into OS compositor surfaces.

The next steps are, roughly:

  • Add a couple follow up optimizations such as exact dirty rectangles for smaller updates / multi-resolution tiles, detect tiles that are solid colors.
  • Enable caching on content + UI explicitly (as an intermediate step, to give caching on the UI).
  • Implement the right support for multiple cached slices that have transparency and subpixel text anti-aliasing.
  • Enable fine grained caching on multiple slices.
  • Expose those multiple cached slices to OS compositor for power savings
    and better scrolling performance where supported.
WebRender on Android

Thanks to the continuous efforts of Jamie and Kats, WebRender is starting to be pretty solid on Android. GeckoView is required to enable WebRender, so it isn’t possible to enable it now in Firefox for Android, but we plan to make the option available in the Firefox Preview browser (which is powered by GeckoView) once it has nightly builds.

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Rep of the Month – June 2019

vr, 05/07/2019 - 13:37

Please join us in congratulating Pranshu Khanna, Rep of the Month for June 2019!

Pranshu is from Surat, Gujarat, India. His journey started with a Connected Devices workshop in 2016; since then he’s been a super active contributor and a proud Mozillian. He joined the Reps Program in March 2019 and has been instrumental ever since.


In addition to that, he’s been one of the most active Reps in his region since he joined the program. He has worked to get his local community, Mozilla Gujarat, to meet very regularly and contribute to Common Voice, BugHunter, Localization, SUMO, A-Frame, Add-ons, and Open Source Contribution. He’s an active contributor and a maintainer of the OpenDesign Repository and frequently contributes to the Mozilla India & Mozilla Gujarat Social Channels.

Congratulations and keep rocking the open web!

To congratulate Pranshu, please head over to Discourse!

Categorieën: Mozilla-nl planet

Mozilla Localization (L10N): L10n report: July edition

do, 04/07/2019 - 17:32

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added
  • Manx was added to Pontoon and they’re starting to work on localizing Firefox.
New content and projects

What’s new or coming up in Firefox desktop

As usual, let’s start with some important dates:

  • Firefox 68 will be released on July 9. At that point, Firefox 69 will be in beta, while Nightly will have Firefox 70.
  • The deadline to ship localization updates in Firefox 69 is August 20.

Firefox 69 has been a lightweight release in terms of new strings, with most of the changes coming from Developer Tools. Expect a lot more content in Firefox 70, with particular focus on the Password Manager – integrating most of the Lockwise add-on features directly into Firefox – and work around Tracking Protection and Security, with brand new about pages.

What’s new or coming up in mobile

Since our last report, we’ve shipped the first release of Firefox Preview (Fenix) in 11 languages (including en-US). The next upcoming step will be to open up the project to more locales. If you are interested, make sure to follow closely the dev.l10n mailing list this week. And congratulations to the teams that helped make this a successful localized first release!

On another note, the Firefox for iOS deadline for l10n work on v18 just passed this week, and we’re adding two new locales: Finnish and Gujarati. Congratulations to the teams!

What’s new or coming up in web projects

Keep brand and product names in English

The current policy from the branding and legal teams is that trademarked names should be left unchanged. Unless indicated otherwise in the comments, brand names should be left in English; they cannot be transliterated or declined. If you have any doubt, ask through the available communication channels.

Product names such as Lockwise, Monitor, Send, and Screenshot, to name a few, used alone or with Firefox, should be left unchanged. Common Voice is also a product name and should remain unchanged. Pocket is a trademark and must remain as is too. Locale managers, please inform the rest of your communities about this policy, especially new contributors. Search in Pontoon or Transvision for these names and revert them to English.

Mozilla.org

The recently added pages contain marketing language to promote all the Firefox products. Localize them and start spreading the word on social media and other platforms. The deadline is indicative.

  • New: firefox/best-browser.lang, firefox/campaign-trailhead.lang, firefox/all-unified.lang;
  • Update: firefox/accounts-2019.lang;
  • Obsolete: firefox/accounts.lang will be removed soon. Stop working on this page if it is incomplete.
What’s new or coming up in SuMo

Newly published localizer-facing documentation

The Firefox marketing guides are living documents and will be updated as needed.

  • The guide in German was written by a Mozilla staff copywriter. It sets the tone for marketing content localization, including the mozilla.org site. With this guide, the hope is that the site has a more unified tone from page to page, regardless of who contributed to the localization: the community or staff.
  • The guide in English was derived from the guide written in German and is meant to be a template for any communities who want to create a marketing content localization guide. This is in addition to the style guide a community has already created.
  • The guide in French will be authored by a Mozilla staff copywriter. We will update you in a future report.
Events
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report).
Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.36.0

do, 04/07/2019 - 02:00

The Rust team is happy to announce a new version of Rust, 1.36.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.36.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.36.0 on GitHub.

What's in 1.36.0 stable

This release brings many changes, including the stabilization of the Future trait, the alloc crate, the MaybeUninit<T> type, NLL for Rust 2015, a new HashMap<K, V> implementation, and --offline support in Cargo. Read on for a few highlights, or see the detailed release notes for additional information.

The Future is here!

In Rust 1.36.0 the long awaited Future trait has been stabilized!

With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future.
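For reference, the stabilized trait in std::future looks roughly like this (Pin comes from std::pin, and Context and Poll from std::task):

pub trait Future {
    type Output;

    // Attempt to drive the future to completion. Returns Poll::Ready(output)
    // when done, or Poll::Pending if it needs to be polled again later
    // (after the Waker stored in the Context has been notified).
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}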

The alloc crate is stable

Before 1.36.0, the standard library consisted of the crates std, core, and proc_macro. The core crate provided core functionality such as Iterator and Copy and could be used in #![no_std] environments since it did not impose any requirements. Meanwhile, the std crate provided types like Box<T> and OS functionality but required a global allocator and other OS capabilities in return.

Starting with Rust 1.36.0, the parts of std that depend on a global allocator, e.g. Vec<T>, are now available in the alloc crate. The std crate then re-exports these parts. While #![no_std] binaries using alloc still require nightly Rust, #![no_std] library crates can use the alloc crate in stable Rust. Meanwhile, normal binaries, without #![no_std], can depend on such library crates. We hope this will facilitate the development of a #![no_std] compatible ecosystem of libraries prior to stabilizing support for #![no_std] binaries using alloc.

If you are the maintainer of a library that only relies on some allocation primitives to function, consider making your library #[no_std] compatible by using the following at the top of your lib.rs file:

#![no_std]

extern crate alloc;

use alloc::vec::Vec;

MaybeUninit<T> instead of mem::uninitialized

In previous releases of Rust, the mem::uninitialized function has allowed you to bypass Rust's initialization checks by pretending that you've initialized a value at type T without doing anything. One of the main uses of this function has been to lazily allocate arrays.

However, mem::uninitialized is an incredibly dangerous operation that essentially cannot be used correctly as the Rust compiler assumes that values are properly initialized. For example, calling mem::uninitialized::<bool>() causes instantaneous undefined behavior as, from Rust's point of view, the uninitialized bits are neither 0 (for false) nor 1 (for true) - the only two allowed bit patterns for bool.

To remedy this situation, in Rust 1.36.0, the type MaybeUninit<T> has been stabilized. The Rust compiler will understand that it should not assume that a MaybeUninit<T> is a properly initialized T. Therefore, you can do gradual initialization more safely and eventually use .assume_init() once you are certain that maybe_t: MaybeUninit<T> contains an initialized T.
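As a minimal sketch of that gradual-initialization pattern (using a single integer rather than an array, purely for illustration):

use std::mem::MaybeUninit;

// Start with uninitialized storage; nothing reads from it yet.
let mut maybe_t: MaybeUninit<u32> = MaybeUninit::uninit();

// Initialize the value by writing through the raw pointer.
unsafe { maybe_t.as_mut_ptr().write(42) };

// Only once we are certain the value is initialized do we call assume_init().
let t: u32 = unsafe { maybe_t.assume_init() };
assert_eq!(t, 42);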

As MaybeUninit<T> is the safer alternative, starting with Rust 1.39, the function mem::uninitialized will be deprecated.

To find out more about uninitialized memory, mem::uninitialized, and MaybeUninit<T>, read Alexis Beingessner's blog post. The standard library also contains extensive documentation about MaybeUninit<T>.

NLL for Rust 2015

In the announcement for Rust 1.31.0, we told you about NLL (Non-Lexical Lifetimes), an improvement to the language that makes the borrow checker smarter and more user friendly. For example, you may now write:

fn main() {
    let mut x = 5;
    let y = &x;
    let z = &mut x; // This was not allowed before 1.31.0.
}

In 1.31.0 NLL was stabilized only for Rust 2018, with a promise that we would backport it to Rust 2015 as well. With Rust 1.36.0, we are happy to announce that we have done so! NLL is now available for Rust 2015.

With NLL on both editions, we are closer to removing the old borrow checker. However, the old borrow checker unfortunately accepted some unsound code it should not have. As a result, NLL is currently in a "migration mode" wherein we will emit warnings instead of errors if the NLL borrow checker rejects code the old AST borrow checker would accept. Please see this list of public crates that are affected.

To find out more about NLL, MIR, the story around fixing soundness holes, and what you can do about the warnings if you have them, read Felix Klock's blog post.

A new HashMap<K, V> implementation

In Rust 1.36.0, the HashMap<K, V> implementation has been replaced with the one in the hashbrown crate which is based on the SwissTable design. While the interface is the same, the HashMap<K, V> implementation is now faster on average and has lower memory overhead. Note that unlike the hashbrown crate, the implementation in std still defaults to the SipHash 1-3 hashing algorithm.
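Since the interface is unchanged, existing code keeps working exactly as before; for example:

use std::collections::HashMap;

fn main() {
    // Same HashMap API as before; only the internal implementation changed.
    let mut map: HashMap<&str, u32> = HashMap::new();
    map.insert("answer", 42);
    assert_eq!(map.get("answer"), Some(&42));
}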

--offline support in Cargo

During most builds, Cargo doesn't interact with the network. Sometimes, however, Cargo has to. Such is the case when a dependency is added and the latest compatible version needs to be downloaded. At times, network access is not an option though, for example on an airplane or in isolated build environments.

In Rust 1.36, a new Cargo flag has been stabilized: --offline. The flag alters Cargo's dependency resolution algorithm to only use locally cached dependencies. When the required crates are not available offline, and a network access would be required, Cargo will exit with an error. To prepopulate the local cache in preparation for going offline, use the cargo fetch command, which downloads all the required dependencies for a project.
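For example, a typical workflow might look like this:

$ cargo fetch             # while online: download and cache all dependencies
$ cargo build --offline   # later: build using only the local cache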

To find out more about --offline and cargo fetch, read Nick Cameron's blog post.

For information on other changes to Cargo, see the detailed release notes.

Library changes

The dbg! macro now supports multiple arguments.
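For example:

fn main() {
    let a = 2;
    let b = 3;
    // Prints the file, line, and value of each argument to stderr.
    dbg!(a, b);
}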

Additionally, a number of APIs have been made const:

New APIs have become stable, including:

Other library changes are available in the detailed release notes.

Other changes

Detailed 1.36.0 release notes are available for Rust, Cargo, and Clippy.

Contributors to 1.36.0

Many people came together to create Rust 1.36.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Crash pings (Telemetry) and crash reports (Socorro/Crash Stats)

wo, 03/07/2019 - 21:00

I keep getting asked questions that stem from confusion about crash pings and crash reports, the details of where they come from, differences between the two data sets, what each is currently good for, and possible future directions for work on both. I figured I'd write it all down.

This is a brain dump and sort of a blog post and possibly not a good version of either. I desperately wished it was more formal and mind-blowing like something written by Chutten or Alessio.

It's likely that this is 90% true today but as time goes on, things will change and it may be horribly wrong depending on how far in the future you're reading this. As I find out things are wrong, I'll keep notes. Any errors are my own.

Table of contents because this is long

Summary

We (Mozilla) have two different data sets for crashes: crash pings in Telemetry and crash reports in Socorro/Crash Stats. When Firefox crashes, the crash reporter collects information about the crash and this results in crash ping and crash report data. From there, the two different data things travel two different paths and end up in two different systems.

This blog post covers these two different journeys, their destinations, and the resulting properties of both data sets.

This blog post specifically talks about Firefox and not other products which have different crash reporting stories.

CRASH!

Firefox crashes. It happens.

The crash reporter kicks in. It uses the Breakpad library to collect data about the crashed process, package it up into a minidump. The minidump has information about the registers, what's in memory, the stack of the crashing thread, stacks of other threads, what modules are in memory, and so on.

Additionally, the crash reporter collects a set of annotations for the crash. Annotations like ProductName, Version, ReleaseChannel, BuildID and others help us group crashes for the same product and build.

The crash reporter assembles the portions of the crash report that don't have personally identifiable information (PII) in them into a crash ping. It uses minidump-analyzer to unwind the stack. The crash ping with this stack is sent via the crash ping sender to Telemetry.

If Telemetry is disabled, then the crash ping will not get sent to Telemetry.

The crash reporter will show a crash report dialog informing the user that Firefox crashed. The crash report dialog includes a comments field for additional data about the crash and an email field. The user can choose to send the crash report or not.

If the user chooses to send the crash report, then the crash report is sent via HTTP POST to the collector for the crash ingestion pipeline. The entire crash ingestion pipeline is called Socorro. The website part is called Crash Stats.

If the user chooses not to send the crash report, then the crash report never goes to Socorro.

If Firefox is unable to send the crash report, it keeps it on disk. It might ask the user to try to send it again later. The user can access about:crashes and send it explicitly.

Relevant backstory What is symbolication?

Before we get too much further, let's talk about symbolication.

minidump-whatever will walk the stack starting with the top-most frame. It uses frame information to find the caller frame and works backwards to produce a list of frames. It also includes a list of modules that are in memory.

For example, part of the crash ping might look like this:

"modules": [ ... { "debug_file": "xul.pdb", "base_addr": "0x7fecca50000", "version": "69.0.0.7091", "debug_id": "4E1555BE725E9E5C4C4C44205044422E1", "filename": "xul.dll", "end_addr": "0x7fed32a9000", "code_id": "5CF2591C6859000" }, ... ], "threads": [ { "frames": [ { "trust": "context", "module_index": 8, "ip": "0x7feccfc3337" }, { "trust": "cfi", "module_index": 8, "ip": "0x7feccfb0c8f" }, { "trust": "cfi", "module_index": 8, "ip": "0x7feccfae0af" }, { "trust": "cfi", "module_index": 8, "ip": "0x7feccfae1be" }, ...

The "ip" is an instruction pointer.

The "module_index" refers to another list of modules that were all in memory at the time.

The "trust" refers to how the stack unwinding figured out how to unwind that frame. Sometimes it doesn't have enough information and it does an educated guess.

Symbolication takes the module name, the module debug id, and the offset and looks it up with the symbols it knows about. So for the first frame, it'd do this:

  1. module index 8 is xul.dll
  2. get the symbols for xul.pdb debug id 4E1555BE725E9E5C4C4C44205044422E1 which is at https://symbols.mozilla.org/xul.pdb/4E1555BE725E9E5C4C4C44205044422E1/xul.sym
  3. figure out that 0x7feccfc3337 (ip) - 0x7fecca50000 (base addr for xul.pdb module) is 0x573337
  4. look up 0x573337 in the SYM file and I think that's nsTimerImpl::InitCommon(mozilla::BaseTimeDuration<mozilla::TimeDurationValueCalculator> const &,unsigned int,nsTimerImpl::Callback &&)

Symbolication does that for every frame and then we end up with a helpful symbolicated stack.
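As an illustration only (this is not Socorro or Tecken code), the per-frame lookup key from step 3 is just the instruction pointer minus the module's base address:

// Illustration only: the module-relative offset that gets looked up in the SYM file.
fn module_offset(ip: u64, module_base: u64) -> u64 {
    ip - module_base
}

fn main() {
    let ip = 0x7feccfc3337_u64;    // "ip" from the first frame above
    let base = 0x7fecca50000_u64;  // "base_addr" of the xul.pdb module
    assert_eq!(module_offset(ip, base), 0x573337);
}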

Tecken has a symbolication API which takes the module and stack information in a minimal form and symbolicates using symbols it manages.

It takes a form like this:

{ "memoryMap": [ [ "xul.pdb", "44E4EC8C2F41492B9369D6B9A059577C2" ], [ "wntdll.pdb", "D74F79EB1F8D4A45ABCD2F476CCABACC2" ] ], "stacks": [ [ [0, 11723767], [1, 65802] ] ] }

This has two data structures. The first is a list of (module name, module debug id) tuples. The second is a list of (module id, memory offset) tuples.

What is Socorro-style signature generation?

Socorro has a signature generator that goes through the stack, normalizes the frames so that frames look the same across platforms, and then uses that to generate a "signature" for the crash that suggests a common cause for all the crash reports with that signature.

It's a fragile and finicky process. It works well for some things and poorly for others. There are other ways to generate signatures. This is the one that Socorro currently uses. We're constantly honing it.

I export Socorro's signature generation system as a Python library called siggen.

For examples of stacks -> signatures, look at crash reports on Crash Stats.

What is it and where does it go? Crash pings in Telemetry

Crash pings are only sent if Telemetry is enabled in Firefox.

The crash ping contains the stack for the crash, but little else about the crashed process. No register data, no memory data, nothing found on the heap.

The stack is unwound by minidump-analyzer in the client on the user's machine. Because of that, driver information can be used during unwinding, so for some kinds of crashes we may get a better stack than the crash report in Socorro does.

Stacks in crash pings are not symbolicated.

There's an error aggregates data set generated from the crash pings which is used by Mission Control.

Crash reports in Socorro

Socorro does not get crash reports if the user chooses not to send a crash report.

Socorro collector discards crash reports for unsupported products.

Socorro collector throttles incoming crash reports for Firefox release channel--it accepts 10% of those for processing and rejects the other 90%.

The Socorro processor runs minidump-stackwalk on the minidump which unwinds the stack. Then it symbolicates the stack using symbols uploaded during the build process to symbols.mozilla.org.

If we don't have symbols for modules, minidump-stackwalk will guess at the unwinding. This can work poorly for crashes that involve drivers and system libraries we don't have symbols for.

Crash pings vs. Crash reports

Because of the above, there are big differences in collection of crash data between the two systems and what you can do with it.

Representative of the real world

Because crash ping data doesn't require explicit consent by users on a crash-by-crash basis and crash pings are sent using the Telemetry infrastructure which is pretty resilient to network issues and other problems, crash ping data in Telemetry is likely more representative of crashes happening for our users.

Crash report data in Socorro is limited to what users explicitly send us. Further, there are cases where Firefox isn't able to run the crash reporter dialog to ask the user.

For example, on Friday, June 28th, 2019 for Firefox release channel:

  • Telemetry got 1,706,041 crash pings
  • Socorro processed 42,939 crash reports, so figure it got around 420,000 crash reports
Stack quality

A crash report can have a different stack in the crash ping than in the crash report.

Crash ping data in Telemetry is unwound in the client. On Windows, minidump-analyzer can access CFI unwinding data, so the stacks can be better especially in cases where the stack contains system libraries and drivers.

We haven't implemented this yet on non-Windows platforms.

Crash report data in Socorro is unwound by the Socorro processor and is heavily dependent on what symbols we have available. It doesn't do a good job with unwinding through drivers and we often don't have symbols for Linux system libraries.

Gabriele says crash report stacks are sometimes unwound better on macOS and Linux than what the crash ping contains.

Symbolication and signatures

Crash ping data is not symbolicated and we're not generating Socorro-style signatures, so it's hard to bucket them and see change in crash rates for specific crashes.

There's an fx-crash-sig Python library which has code to symbolicate crash ping stacks and generate a Socorro-style signature from that stack. This is helpful for one-off analysis but this is not a long-term solution.

Crash report data in Socorro is symbolicated and has Socorro-style signatures.

The consequence of this is that in Telemetry, we can look at crash rates for builds, but can't look at crash rates for specific kinds of crashes as bucketed by signatures.

The Signature Report and Top Crashers Report in Crash Stats can't be implemented in Telemetry (yet).

Tooling

Telemetry has better tooling for analyzing crash ping data.

Crash ping data drives Mission Control.

Socorro's tooling is limited to Supersearch web ui and API which is ok at some things and not great at others. I've heard some people really like the Supersearch web ui.

There are parts of the crash report which are not searchable. For example, it's not possible to search for crash reports where a certain module is in the stack. Socorro has a signature report and a topcrashers page which help, but they're not flexible for answering questions outside of what we've explicitly coded them for.

Socorro sends a copy of processed crash reports to Telemetry and this is in the "socorro_crash" dataset.

PII and data access

Telemetry crash ping data does not contain PII. It is not publicly available, except in aggregate via Mission Control.

Socorro crash report data contains PII. A subset of the crash data is available to view and search by anyone. The PII data is restricted to users explicitly granted access to it. PII data includes user email addresses, user-provided comments, CPU register data, what else was in memory, and other things.

Data expiration

Telemetry crash ping data isn't expired, but I think that's changing at some point.

Socorro crash report data is kept for 6 months.

Data latency

Socorro data is near real-time. Crash reports get collected and processed and are available in searches and reports within a few minutes.

Crash ping data gets to Telemetry almost immediately.

Main ping data has some latency between when it's generated and when it is collected. This affects normalization numbers if you were looking at crash rates from crash ping data.

Derived data sets may have some latency depending on how they're generated.

Conclusions and future plans Socorro

Socorro is still good for deep dives into specific crash reports since it contains the full minidump and sometimes a user email address and user comments.

Socorro has Socorro-style signatures which make it possible to aggregate crash reports into signature buckets. Signatures are kind of fickle and we adjust how they're generated over time as compilers, symbols extraction, and other things change. We can build Signature Reports and Top Crasher reports and those are ok, but they use total counts and not rates.

I want to tackle switching from Socorro's minidump-stackwalk to minidump-analyzer so we're using the same stack walker in both places. I don't know when that will happen.

Socorro is going to GCP which means there will be different tools available for data analysis. Further, we may switch to BigQuery or some other data store that lets us search the stack. That'd be a big win.

Telemetry

Telemetry crash ping data is more representative of the real world, but stacks aren't symbolicated and there's no signature generation, so you can't look at aggregates by cause.

Symbolication and signature generation of crash pings will get figured out at some point.

Work continues on Mission Control 2.0.

Telemetry is going to GCP which means there will be different tools available for data analysis.

Together

At the All Hands, I had a few conversations about fixing tooling for both crash reports and crash pings so the resulting data sets were more similar and you could move from one to the other. For example, if you notice a troubling trend in the crash ping data, can you then go to Crash Stats and find crash reports to deep dive into?

I also had conversations around use cases. Which data set is better for answering certain questions?

We think having a guide that covers which data set is good for what kinds of analysis, tools to use, and how to move between the data sets would be really helpful.

Thanks!

Many thanks to everyone who helped with this: Will Lachance, W Chris Beard, Gabriele Svelto, Nathan Froyd, and Chutten.

Also, many thanks to Chutten and Alessio who write fantastic blog posts about Telemetry things. Those are gold.

Updates
  • 2019-07-04: Crash ping data is not publicly available. Blog post updated accordingly. Thanks, Chutten!
Categorieën: Mozilla-nl planet

Mozilla Reps Community: 8 Years of Reps Program, Celebrating Community Successes!

wo, 03/07/2019 - 14:23

The idea for the Reps program was started in 2010 by William Quiviger and Pierros Papadeas, and the program officially launched and welcomed volunteers on board as Mozilla Reps in 2011. The Mozilla Reps program aims to empower and support volunteer Mozillians who want to be official representatives of Mozilla in their region/locale/country. The program provides a framework and a specific set of tools to help Mozillians to organize and/or attend events, recruit and mentor new contributors, document and share activities, and support their local communities better. The Reps program was created to help communities around the world. Community is the backbone of the Mozilla project. As the Mozilla project grows in scope and scale, community needs to be strengthened and empowered accordingly. This is the central aim of the Mozilla Reps program: to empower and to help push responsibility to the edges, in order to help the Mozilla contributor base grow. Nowadays, the Reps are taking on a stronger role by becoming Community Coordinators.

You can learn more about the program here.

Success Stories

The Mozilla Reps program has proven successful at identifying talent in local communities to contribute to Mozilla projects. Reps also help run local and international events and help campaigns take place. Some of the campaigns that happened in the past were collaborations with other Mozilla teams. Below is a list of some activities and campaigns where the Reps program collaborated with other Mozilla teams. To date, we have a total of 7,623 events, 261 active Reps, and 51,635 activities reported on the Reps portal.

Historically, the Reps have supported major Mozilla products throughout the program's existence, from the Firefox OS launch to the latest campaigns: events that helped promote the launch of new versions of Firefox, such as Firefox 4.0, Firefox Quantum, and every major Firefox release update; events that care about users and community, such as Privacy Month, Aadhar, The Dark Funnel, Techspeakers, the Web Compatibility Sprint, and Maker Party; events related to new product releases, such as Firefox Rocket/Lite and Screenshot Go for the South East Asia and India markets; events related to localization, such as the Add-on Localization campaign; and events related to Mozilla products such as Rust, Webmaker, and Firefox OS.

Do You Have More Ideas?

With so many past success stories of engaging local communities and supporting many different campaigns, Mozilla Reps is still looking forward to many more activities and campaigns in the future. So, if you have ideas for campaigns or for engaging local communities, or want to collaborate with the Mozilla Reps program to get in touch with local communities, let's do it together!

Categorieën: Mozilla-nl planet
