
Mozilla Nederland
The Dutch Mozilla community

Air Mozilla: Reps Webinar

Mozilla planet - Tue, 04/04/2017 - 15:30

Reps Webinar: Onboarding for new Reps

Categories: Mozilla-nl planet

Armen Zambrano: Screencast: How to green up Firefox test jobs on new infrastructure

Mozilla planet - Tue, 04/04/2017 - 15:24
In this blog post I go over the basics of investigating whether a new platform on the continuous integration system is ready to run Firefox test jobs.

In this case we look at Windows 7 and Windows 10 jobs on TaskCluster.
Some issues are on the actual machines (black screenshots; audio setup) and others are tests that need developer investigation.

You need about 30 minutes to watch these.


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Nicholas Nethercote: Improving the Gecko Profiler

Mozilla planet - Tue, 04/04/2017 - 07:52

Over the last three months I landed 167 patches, via 41 Bugzilla bugs, for the Gecko Profiler. These included crash fixes, assertion failure fixes, data race fixes, optimization fixes, and a great many refactorings.

Background

The Gecko Profiler is a profiler built into Firefox. It can be used with the Gecko Profiler Addon to profile Firefox. It also provides the core profiling mechanism that is used by Firefox’s devtools to profile JavaScript content code.

It’s a crucial component.  It was originally written 5 years ago in something of a hurry, because we desperately needed a built-in profiler, one that could give more detailed, custom information than is possible with an external profiler. As I understand it, part of it was imported from V8, so it was a mix of previously existing code and new code. And in the years since it has been extended by multiple people, in a variety of ways that didn’t necessarily mesh well.

As a result, at the start of Q1 it was in pretty bad shape. Crashes and assertion failures were frequent, and the code itself was hard to read and maintain. Markus Stange had recently taken over ownership, and had made some great improvements to the Addon, but the core code needed work as well. So I started digging in to see what I could find and improve. There was a lot!

Fixes

Bug 1331571. The profiler had code for incorporating power consumption estimates from the Intel Power Gadget. Unfortunately, this integration had major flaws: Intel Power Gadget only gives very coarse power consumption estimates; the profiler samples at 1000Hz and is CPU intensive and so is likely to skew the power consumption estimates significantly; and nobody had ever used it meaningfully. So I removed it.

Bug 1317771. The profiler had a “standalone” configuration that allowed it to be used in programs other than Firefox. But it was complex (lots of #ifdef statements) and broken and unlikely to be of use. So I removed it.

Bug 1328369, Bug 1328373: Dead code removal.

Bug 1332577. The public API for the profiler was a mess. It was split across several header files, and most API functions had an “outer” version with a profiler_ prefix that immediately called into an inner version with a mozilla_sampler_ prefix (though there were some inconsistencies). So I combined the various header files into a single file, GeckoProfiler.h, and simplified the API functions into a single level, all consistently named with a profiler_ prefix.

Bug 1333296. Even the name of the profiler was a source of confusion. It was originally known as “SPS”, which I believe is short for “simple profiling system”. At some point that changed to “the Gecko Profiler”, although it was also occasionally referred to as “the built-in profiler”! Because of this history, the code was littered with references to SPS. In this bug I updated them all to refer to the Gecko Profiler. (I updated the MDN docs, too. The page name still uses “Built-in Profiler” because I don’t know how to change MDN page names.)

Bug 1329684. I removed some mutex wrapper classes that I think were necessary at one point for the “standalone” configuration.

Bug 1328365. Thread-local storage was being used to store a pointer to an object that was only accessed on the main thread, so I changed it to be a global variable. I also renamed some variables whose names referred to a type that had been renamed a long time ago.

Bug 1333655. The profiler had a cross-platform thread abstraction that was clumsy and over-engineered, so I streamlined it.

Bug 1334466. The profiler had a class called Sampler, which I think was imported from V8, and a subclass called GeckoSampler. Both classes were fairly complex, and we only ever instantiated the subclass. The separation merely obscured things, so I merged the subclass into Sampler. Having all that code in a single class and a single module made it much easier to see exactly what it was doing.

Bug 1335595. Two classes, ThreadInfo and ThreadProfile, were used for per-thread data. They were hopelessly intertwined: each one had a pointer to the other, and multiple fields were present (i.e. duplicated) in both of them. So I just merged them.

Bug 1336326. Three minor clean-ups.

Bug 816598. I implemented a memory reporter for the profiler. This was first requested in 2012, and five duplicate bugs had been filed in the interim!

Bug 1126576. I removed some very grotty manual refcounting from the PseudoStack class, which simplified things. A little too much, in fact… I misunderstood how things worked, causing a crash, which I subsequently fixed in bug 1340161.

Bug 1337189. The aforementioned Sampler class was still over-engineered. It only ever had 0 or 1 instantiations, and was basically an unnecessary level of abstraction. In this bug I got rid of it by merging it into another file. Which took 27 patches! (One of these patches introduced a regression, which I later fixed in bug 1340327.) At this point a lot of the core code that had previously been spread across multiple files and classes was now in a single file, tools/profiler/core/platform.cpp, and it was becoming increasingly obvious that there was a lot of global state being accessed from multiple threads with insufficient thread synchronization.

Bug 1338957. The profiler tracks which threads are “sleeping” (i.e. blocked on some operation), to support an optimization where it samples sleeping threads in a cheaper-than-normal fashion. It was using two counters and a boolean to track the sleep state of each thread. These three values were accessed from multiple threads; two of them were atomic, and one wasn’t, so the whole setup was very racy. I managed to condense the three values into a single atomic tri-state value, which made things simpler and thread-safe.

Bug 1339327. I landed eight refactoring patches with no particular common theme, mostly involving renaming things and removing unnecessary stuff.

Bug 1339435. I removed two erroneous assertions that I had added in an earlier patch — two functions that I thought only ran on the main thread turned out to run off the main thread as well.

Bug 1339695. The profiler has a lot of code that is specific to a particular architecture (e.g. x86), OS (e.g. Windows), or platform (e.g. x86/Windows). The #ifdef statements used to select these were massively inconsistent — turns out there are many ways to detect this stuff — so I fixed this up. Among other things, this involved using the nice constants in tools/profiler/core/PlatformMacros.h consistently throughout the profiler’s code. (I fixed a regression — caused by mistyping one of the #ifdef conditions, alas! — from this change in bug 1350211. And another one involving --disable-profiling in bug 1348776.) I also renamed some files that had .cc extensions instead of the usual .cpp because they had (I think) been imported from V8.

Bug 1340928. At this point I had started working on a major change to the handling of the profiler’s core global state. It would inevitably be a big patch, but I wanted it to be as small as possible, so I started aggressively carving off small changes that could be landed separately. This bug featured 16 of them.

Bug 1328378. The profiler has two kinds of samples: periodic, which are taken by a separate thread in response to a timer firing, and synchronous, which a thread takes itself in response to a request via the profiler API. There are a lot of similarities between the two, but also some important differences. This bug took some steps to simplify the messy handling of synchronous samples.

Bug 1344118. I mentioned earlier that the profiler tracks which threads are “sleeping” to support an optimization: when a thread is asleep, we can mostly duplicate its last sample without unwinding its stack. But the optimization was buggy and would become a catastrophic pessimization in certain circumstances, due to what should have been a short O(1)-ish buffer search becoming O(n²)-ish, which would quickly peg one CPU at 100% usage. As far as I can tell, this bug was present in the optimization ever since it was implemented three years ago. (It’s possible it wasn’t noticed because its effects increase as more threads are profiled, but the profiler defaults to only profiling the main thread and the compositor thread.) The fix was straightforward once the diagnosis was made, and Julian Seward did a follow-up that made the optimization even more effective.

Bug 1342306. In this bug I put almost all of the profiler’s global state into a single class and protected accesses to it with a mutex. Unlike the old code, the new code is simple and obviously thread-safe. The final patch in this bug was much bigger than I would have liked, at 142 KiB, even after I carved off as many precursor patches as I could. Unsurprisingly, there were some follow-up fixes required: bug 1346356 (a leak and a deadlock), bug 1347044 (another deadlock), bug 1348374 (yet another deadlock), and bug 1350967 (surprise! another deadlock).

Bug 1345262. I fixed an assertion failure caused by the profiler and the JS engine having differing views about what functions should be called on what threads.

Bug 1347348. Five more assorted clean-ups.

Bug 1349856. I fixed a minor error involving a call to the profiler from Gecko.

Bug 1346132. I removed the profiler’s bespoke logging system, replacing it with the standard Mozilla one. I also made the logging output more concise and informative.

Bug 1350212. I cleaned up a class and its uses a bit.

Bug 1351523. I reordered one function’s arguments to match the order used in two related functions.

Bug 1351528. I removed some unused values from an enum.

Bug 1348024. I simplified some environment variables used by the profiler.

Bug 1351946. I removed some gnarly code for starting the profiler on B2G.

Bug 1351136. The profiler’s testing coverage is not very good, as can be seen from the numerous regressions I introduced and fixed. So I added a gtest that improves coverage. There’s still room for more test coverage improvement.

Bug 1351963. I further clarified the handling of synchronous vs. periodic samples, and simplified the ownership of some per-thread data structures.

Discussion

I learned some interesting things while doing this work.

Learning a component

Three months ago I knew almost nothing about the profiler’s code. Today I’m a module peer.

At the start of January I had been told that the profiler needed work, and I had assigned myself a Q1 deliverable to “land three improvements to the Gecko Profiler”. I started just by looking for easy, obvious code clean-ups, such as dead code removal, fixing inconsistent naming of things, and removing unnecessary indirections. (These are the kinds of clean-ups you can make with only shallow understanding.) The profiler proved to be a target-rich environment for such changes!

After I made a few dozen such changes I started understanding more deeply how the pieces fit together. (Partly because I’d been staring at the code a lot, and partly because my changes were making the code easier to understand. Refactorings add up.) I started interleaving my easy clean-up patches with ones that required more insight into how the profiler worked. I made numerous mistakes along the way, as the various regression fixes above show. But that’s ok.

I also kept a text file in which I had a list of ideas for things to fix. Every time I saw something that looked like it could be improved, I added it to the file, and I repeatedly checked the file when deciding what to work on next. As my understanding of the code improved, multiple times I realized that items I had written down were wrong, or incomplete, or that seemingly separate things were related. (In fact, I’m still using the file, because I still have numerous things I want to improve.)

Multi-threaded programming basics

Although I first learned C and C++ about 20 years ago, and I have worked at Mozilla for more than 8 years, this was the first time I’ve ever done serious multi-threaded programming, i.e. at a level where I needed a reasonably deep understanding of how various threads can interact. I got the following two great tips from Julian Seward, which helped a lot.

  • Write down pseudocode for each thread.
  • Write down potential worst-case thread operation interleavings.

I also found it helpful to add comments (or assertions, where appropriate) to the top of functions that indicate which thread or threads they run on. For example:

void profiler_gathered_OOP_profile()
{
  MOZ_RELEASE_ASSERT(NS_IsMainThread());
  ...
}

and:

void profiler_thread_sleep()
{
  // This function runs both on and off the main thread.
  ...
}

A useful idiom: proof-of-lock tokens

I also employed a programming idiom that turned out to be extremely helpful. Most of the global profiler state is in a single class called ProfilerState. There is a single instance of this class, gPS, and a single mutex that protects it, gPSMutex. To guarantee that no code is able to access gPS’s contents without first locking the mutex, for every field in ProfilerState there is a getter and a setter, both of which require a “proof-of-lock” token, which takes the form of a const PS::AutoLock&, where PS::AutoLock is an RAII type that locks and unlocks a mutex.

For example, consider this function, which checks if the profiler is paused.

bool profiler_is_paused()
{
  PS::AutoLock lock(gPSMutex);

  if (!gPS->IsActive(lock)) {
    return false;
  }
  return gPS->IsPaused(lock);
}

The PS::AutoLock locks the mutex. IsActive() and IsPaused() both access fields within gPS, and so they are passed lock, which serves as the proof-of-lock value. IsPaused() and SetIsPaused() are implemented as follows.

bool IsPaused(const PS::AutoLock&) const { return mIsPaused; }

void SetIsPaused(const PS::AutoLock&, bool aIsPaused) { mIsPaused = aIsPaused; }

Neither function actually uses the proof-of-lock token. Nonetheless, any function that calls a ProfilerState getter or setter must either lock gPSMutex, or have an ancestor that does. This idiom has two very nice benefits.

  • You can’t access gPS’s contents without having first locked gPSMutex. (Well, it is possible to subvert the protection, but not by accident.)
  • It’s obvious that all functions that have a proof-of-lock argument are called only while gPSMutex is locked.

Functions that are called from multiple places sometimes must be split in two: an outer function in which the mutex is initially unlocked, and an inner function that takes a proof-of-lock token. This isn’t hard, though.

Deadlocks vs. data races

After my big change to the profiler’s global state, I had to fix numerous deadlocks. This wasn’t too hard. Deadlocks (caused by too much thread synchronization) are obvious, easy to diagnose, and these ones weren’t hard to fix. It’s useful to contrast them with data races (caused by too little thread synchronization) which typically have subtle effects and are difficult to diagnose.

Patch discipline

For this work I wrote a lot of small patches. This is my preferred way to work, for two reasons. First, small patches make life easier for reviewers, which in turn results in faster reviews. Second, small patches make regression hunting easy. It’s always nice when you bisect a regression to a small patch.

Future work and Thanks

The profiler still has plenty of room for improvement, and I’m planning to do more work on it in Q2. In the meantime, if you’ve tried the profiler in the past and had problems it might be worth trying again. It’s in much better shape now.

Finally, many thanks to Markus Stange for reviewing the majority of the patches and answering lots of questions, and Julian Seward for reviewing most of the remainder and for numerous helpful discussions about threaded programming.


Mozilla Marketing Engineering & Ops Blog: Kuma Report, March 2017

Mozilla planet - Tue, 04/04/2017 - 07:32

Here’s what happened in March in Kuma, the engine of MDN:

  • Shipped content experiments framework
  • Merged read-only maintenance mode
  • Shipped tweaks and fixes

Here’s the plan for April:

  • Clean up KumaScript macro development
  • Improve and maintain CSS quality
  • Ship the sample database

Done in March

Content Experiments Framework

We’re planning to experiment with small, interactive examples at the top of high-traffic reference pages. We want to see the effects of this change, by showing the new content to some of the users, and tracking their behavior. We shipped a new A/B testing framework, using the Traffic Cop library in the browser. We’ll use the framework for the examples experiment, starting in April.

Read-Only Maintenance Mode

We’ve merged a new maintenance mode configuration, which keeps Kuma running when the database connection is read-only. Eventually, this will allow MDN content to remain available when the database is being updated, and lead to new distributed architectures. In the near term, we’ll use it to test our new AWS infrastructure running production backups, and eventually against off-peak MDN traffic.

Shipped Tweaks and Fixes

Here are some other highlights from the 15 merged Kuma PRs in March:

KumaScript continues to be busy, with 19 merged PRs. There were some PRs from new contributors:

Planned for April

We had a productive work week in Toronto. We decided that we need to make sure we’re paying down our technical debt regularly, while we continue supporting improved features for MDN visitors. Here’s what we’re planning to ship in April:

Clean Up KumaScript Macro Development

KumaScript macros have moved to GitHub, but ghosts of the old way of doing things remain in Kuma, and the development process is still tricky. This month, we’ll tackle some of the known issues:

  • Remove the legacy macros from MDN (stuck in time at November 2016)
  • Remove macro editing from MDN
  • Update macro searching
  • Start on an automated testing framework for KumaScript macros

Improve and Maintain CSS Quality

We’re preparing for some future changes by getting our CSS in order. One of the strategies will be to define style rules for our CSS, and check that existing code is compliant with stylelint. We can then enforce the style rules by detecting violations in pull requests.

Ship the Sample Database

The Sample Database has been promised every month since October 2016, and has slipped every month. We don’t want to break the tradition: the sample database will ship in April. See PR 4076 for the remaining tasks.


This Week In Rust: This Week in Rust 176

Mozilla planet - Tue, 04/04/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's Crate of the Week is fst, which contains Finite State Transducers and assorted algorithms that use them (e.g. fuzzy text search). Thanks to Jules Kerssemakers for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

114 pull requests were merged in the last week.

New Contributors
  • Alan Stoate
  • aStoate
  • Donnie Bishop
  • GAJaloyan
  • Jörg Thalheim
  • Malo Jaffré
  • Micah Tigley
  • Nick Sweeting
  • Phil Ellison
  • raph
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I gave my company's Embedded C training course this morning. It's amazing how much more sense C makes when you explain it in Rust terms.

theJPster in #rust-embedded.

Thanks to Oliver Schneider for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Air Mozilla: Mozilla Weekly Project Meeting, 03 Apr 2017

Mozilla planet - Mon, 03/04/2017 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting


Air Mozilla: Gecko Profiler Introduction

Mozilla planet - Mon, 03/04/2017 - 18:34

Ehsan Akhgari: Gecko Profiler Introduction


Carsten Book: Sheriffing@Mozilla – Sheriffing and Backouts

Mozilla planet - Mon, 03/04/2017 - 16:09

Hi,

Keeping the code trees [1] green (meaning free of build or test failures, regressions, and minimizing intermittent test failures) is the daily goal of sheriffing.

In order to reach this goal, we sometimes have to back out (revert) changes made by developers. While this is a part of our job, we don’t do it easily or without reason.

Backouts happen mostly for:

-> Bustage (i.e. Firefox no longer successfully builds)
-> Test failures caused by a specific change
-> Issues reported by the community, like startup crashes or severe regressions (these backouts often lead to new nightly builds being created as well)
-> Performance regressions or memory leaks
-> Issues that block merges, like merge conflicts (e.g. for a mozilla-inbound to mozilla-central merge)

For our primary integration repositories (where our developers land most of their changes), our workflow depends on which repository the problem is on.

Mozilla-Inbound

-> Close Mozilla-Inbound if needed (preventing developers from landing any further changes until the problem is resolved)

-> Try to notify the responsible developer so that they are aware of the problem caused by their patch

-> If possible, we accept follow-up patches to fix the problem. This allows us to fail forward and avoid running extra jobs that require more CPU time and therefore increase costs.

-> If we don’t get a response from the developer within a short timeframe (around 5 minutes), we back out the change and comment in the bug with a reason for the backout (for example, including a link to the failure log) and a needinfo to the assignee, to make sure the bug doesn’t get lost.

Autoland

-> Changesets that cause problems are backed out immediately – no follow-ups as described above are possible (only the sheriffs can push manually to autoland)

In any case, backouts are never meant to be personal, and it’s part of our job to try our best to keep our trees open for developers. We also try to provide as much information as possible in the bug about why we backed out a change.

Of course, we also make mistakes, and it could be that we backed out changesets that were innocent (like in a case where it’s not 100% clear what caused the problem), but we try our best.

If you have feedback or ideas about how we can make things better, let me know.

Cheers,
– Tomcat


[1] Trees: The tree contains the source code as well as the code required to build each project on supported platforms (Linux, Windows, macOS, etc) and tests for various areas. Sheriffs take care of Firefox Code Trees like mozilla-central, mozilla-inbound, autoland, mozilla-aurora, mozilla-beta and mozilla-esr45/52 – our primary tool is treeherder and can be found here


Mozilla Addons Blog: Migrating ColorZilla to WebExtensions

Mozilla planet - Mon, 03/04/2017 - 13:24

ColorZilla lets you get a color reading from any point in your browser, quickly make adjustments to it, and paste it into another program. It also generates gradients and more, making it an indispensable add-on for designers and artists.

For more resources on updating your extension, please check out MDN. You can also contact us via these methods.

Can you provide a short background on your add-on? What does it do, when was it created, and why was it created?

ColorZilla is one of the earliest Firefox add-ons—in fact, it’s the 271st Firefox add-on ever created (currently there are over 18,000 add-ons available on AMO). The first version was released almost 13 years ago in September 2004. ColorZilla was created to help designers and web developers with color-related tasks—it had the first-ever browser-based eyedropper, which allowed picking colors from any location in the browser and included a sophisticated Photoshop-like color-picker that could perform various color manipulations. Over the years the add-on gained recognition with millions of users, won awards and was updated with many advanced features, such as DOM color analyzers, gradient editors etc.

What add-on technologies or APIs were used to build your add-on?

Because the core of the ColorZilla codebase was written in the very early days, it used fairly low-level APIs and services.

Initially, ColorZilla relied on native XPCOM components for color sampling from the browser window. The first release included a Windows XPCOM module with a following release adding native XPCOM modules for MacOSX and Linux. After a few years, when new APIs became available, the native XPCOM part was eliminated and replaced with a Canvas JavaScript-based solution that didn’t require any platform-specific modules.

Beyond color sampling, ColorZilla used low-level Firefox XPCOM services for file system access (to save color palettes etc), preferences, extension management etc. It also accessed the browser content DOM directly in order to analyze DOM colors etc.

Why did you decide to transition your add-on to WebExtensions APIs?

There were two major reasons. The first reason was Firefox moving from a single process to multi-process Electrolysis (e10s). With add-ons no longer able to directly access web content, it would have required refactoring large portions of the ColorZilla code base. In addition, as ColorZilla for Chrome was released in 2012, it meant that there was a need to maintain two completely separate code bases, and to implement new features and capabilities for both. Using WebExtensions allowed seamlessly supporting e10s and code-sharing with ColorZilla for Chrome, minimizing the amount of overhead and maintenance and maximizing the efforts that could be invested in innovation and new capabilities.

Walk us through the process of how you made the transition. How was the experience of finding WebExtensions APIs to replace legacy APIs? What are some advantages and limitations?

Because ColorZilla for Chrome was already available on the market for about 5 years and because WebExtensions are largely based on Chrome extension APIs, the most natural path was to back-port the Chrome version to Firefox instead of porting the legacy Firefox extension code base to WebExtensions.

The first step of that process was to bring all the WebExtensions APIs used in the code to their latest versions, as ColorZilla for Chrome was using some older or deprecated Chrome APIs and Firefox’s implementation of WebExtensions is based on the latest APIs and doesn’t include the older versions. One such example is updating the older chrome.extension.onRequest API to browser.runtime.onMessage.

The next step was to make all the places that hard-coded Chrome—in UI, URLs, etc—to be flexible and detect the current browser. The final step was to bridge various gaps in implementation or semantics between Chrome and Firefox—for example, it’s not possible to programmatically copy to clipboard from background scripts in Firefox. Another example is the browser.extension.isAllowedFileSchemeAccess API that has a slightly different semantic—meaning in Chrome, the script cannot access local files, and in Firefox, it cannot open them, but can still access them.

WebExtensions, as both a high-level and multi-browser set of APIs, has some limitations. One example that affected ColorZilla is that the main add-on button allows only one action. So the “browser action” cannot have a main button action and a drop-down containing a menu with more options (also known as a “menu-button” in the pre-WebExtensions world). With only one action available when users click on the main button, there was a need to come up with creative UI solutions to combine showing a menu of available options with auto-starting the color sampling. This allowed users to click on the web content and get a color reading immediately. This and other limitations require add-on developers to often not just port their add-ons to new APIs, but re-think the UI and functionality of their add-ons.

The huge advantages of the final WebExtensions-based ColorZilla is that it’s both future-proof, supporting new and future versions of Firefox, and multi-browser, supporting Chrome, Edge and other browsers with a single code base.

Note: This bug is meant to expand the capability of menu-buttons in the browserAction API.

What, if anything, is different about your add-on now that it is a WebExtension? Were you able to transition with all the features intact?

The majority of the functionality was successfully transitioned. The UI/UX of the add-on is somewhat different and some users did need to adjust to that, but all the top features (and more!) are there in the new WebExtensions version.

What advice would you give other legacy add-on developers?

First, I suggest going over the WebExtensions API and capabilities and doing a feasibility analysis of whether the legacy add-on functionality can be supported with WebExtensions. Some legacy add-ons leverage low-level APIs and access or modify Firefox in a very deep or unique way, which wouldn’t be possible with WebExtensions. Then, if the functionality can be supported, I suggest mapping the UI/UX of the legacy add-on to the new sets of WebExtensions requirements and paradigms—browser actions, popup windows etc. Following implementation, I suggest extensive testing across different platforms and configurations—depending on the complexity of the add-on, the porting process can introduce a range of issues and quirks. Finally, once the new WebExtensions-based version is released, my advice is to be ready to listen to user feedback and bug reports and quickly release new versions and address issues, to minimize the window of instability for users.

Anything else you’d like to add?

One piece of advice for Mozilla is to better support developers’ and users’ transition to WebExtensions—the process is quite effort-intensive for developers, and the user-facing issues, quirks and instabilities that might be introduced by these changes can be frustrating for both add-on authors and their users. One thing Mozilla could improve, beyond supporting the developer community, is to really shorten the add-on review times and work with developers to shorten the cycle between user bug reports, developer fixes and the release of these fixes to users. This will really minimize the window of instability for users and make the entire process of moving the Firefox add-on ecosystem to WebExtensions much smoother. My advice for add-on authors on this front is to engage with the AMO editors, understand the review process and work together to make it as fast and smooth as possible.

The post Migrating ColorZilla to WebExtensions appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Eric Shepherd: Happy eleventh Mozillaversary to me!

Mozilla planet - ma, 03/04/2017 - 13:12

As of today—April 3, 2017—I’ve been working as a Mozilla staffer for 11 years. Eleven years of documenting the open Web, as well as, at times, certain aspects of the guts of Firefox itself. Eleven years. Wow. I wrote in some detail last year about my history at Mozilla, so I won’t repeat the story here.

I think 2017 is going to be a phenomenal year for the MDN team. We continue to drive forward on making open web documentation that can reach every web developer regardless of skill level. I’m still so excited to be a part of it all!

A little fox that Sophie got me

Last night, my eleven-year-old daughter (born about 10 months before I joined Mozilla) brought home this fox beanie plush for me. I don’t know what prompted her to get it—I don’t think she’s aware of the timing—but I love it! It may or may not actually be a red panda, but it has a very Firefox look to it, and that’s good enough for me.

Categorieën: Mozilla-nl planet

Hannes Verschore: Spidermonkey JIT improvements in FF53

Mozilla planet - ma, 03/04/2017 - 12:58

On the 23rd of January, the code of Firefox 53 was merged into the stabilization tree. While we work on the next releases, the code of FF53 has time to stabilize before its release on April 18th.

In FF53 a lot has happened. Narrowing down on the JITs, the following was committed:

CacheIR

CacheIR improved drastically in this release. The goal of this project is twofold. One part is to unify the inline cache (IC) stubs in Baseline and IonMonkey. As a result, we only have to implement a new stub once, leading to less code duplication. Secondly, it uses an intermediate representation, allowing us to reuse parts between stubs.

Starting in this release, IonMonkey uses this infrastructure for generating ICs. New ICs were also ported, and we now have complete coverage of JSOP_GETPROP (e.g. reading out obj.prop where obj is an object) and JSOP_GETELEM (e.g. reading out array[42]) in CacheIR. Besides this milestone, inline caches were added for getting DOM expando properties (e.g. properties added on DOM objects), for getting own properties of expandos on DOM proxies, and for lookups of plain data properties on WindowProxies.

Our regular contributor evilpie helped a lot with this effort and implemented a logger that shows when we are missing specific stubs. This allowed us to find missing edge cases on popular websites, enabling optimizations notably on Google Docs and Twitter. This work will continue in FF54.

WebAssembly

Since we implemented the draft specification of WebAssembly, we haven’t stopped improving it, be it for throughput or for compilation time, and we’ve been polishing our implementation to fix bugs and incorporate last-minute spec changes.

In order to improve the experience, we have moved validation to a helper thread and we’re doing more of the compilation in parallel. Lastly, we added some optimizations to achieve better parallelism while compiling. As a result, the compilation of WebAssembly code should be smoother.

IonMonkey

IonMonkey also got its fair share of improvements in this release.

On Google Docs we noticed a lot of compilation time was spent in a particular function, “FlagAllOperandsAsHavingRemovedUses”. We were able to decrease the time spent in that function by removing some extra checks. As a result, it is now a very tight loop and no longer visible in profiles.

We adjusted a part of our engine, IonBuilder, whose job is to create an SSA graph from a JS script, to return a “Result” type. This annotation makes it easier to differentiate between different kinds of failures and act on them correctly. It pointed out places where we didn’t handle out-of-memory failures correctly. In the future it will also allow us to backtrack after an inlining failure and continue compiling without having to start over.

Another improvement to IonBuilder is that we now split the creation of the Control Flow Graph (CFG) from the rest of what IonBuilder does. IonBuilder has a lot of different roles and, as a result, its code could be cleaner. This code is also one of the few parts that cannot run on a background thread. The split simplifies the IonBuilder code a bit and allows us to cache the CFG, so a recompilation should be a little bit faster now.

Taahir Ahmed added extra code to allow us to constant-fold powers. With this code, IonMonkey can now use the precomputed result of a power with constant operands, instead of executing it every time at runtime.
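To illustrate the kind of code this helps (my own example, not one from the patch): when both operands of a power are constants, the engine can compute the result once at compile time instead of calling the generic pow routine on every execution.

```javascript
// Both operands of Math.pow are constants here, so a JIT that
// constant-folds powers can replace the call with the constant 1024
// when compiling this function.
function kibibytes(n) {
  return n * Math.pow(2, 10);
}

console.log(kibibytes(3)); // → 3072
```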

Addition to the team

I’m also happy to announce that Ted Campbell has joined the JIT team. He started January 9th and is located in the Toronto office. He is helping the CacheIR project and will also look into making new ECMAScript 2016 features faster in IonMonkey.

Closing notes

This is not a full list of the changes that happened, but should cover the big ones. If you want the full list I would recommend you read the bug list. I want to thank everybody for their hard work. If you are interested in helping out, we have a list of mentored bugs at bugsahoy or you can contact me (h4writer) online at irc.mozilla.org #jsapi.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: April’s Featured Add-ons

Mozilla planet - ma, 03/04/2017 - 08:26


Pick of the Month: Bulk Media Downloader

by InBasic
Manage large media downloads—audio, images, and video—with this lightweight tool.

“Very useful.”

Featured: Desktop Messenger for Telegram™

by Elen Norphen
Put Telegram right in your toolbar.

“Easy to locate groups, delete messages, and know everything stays secures. Keep up the good work!!!.”

Featured: Google™ Keep

by Philip Tholus, Morni Colhker
Have a notepad with you at all times.

“We have this proxy security stuff at work, and I can’t connect to Google Keep at work. No extensions worked in Chrome, and only one extension worked with FireFox. This way firefox became my default browser. Thank you.”

Featured: Font Finder (revived)

by Andy Portmen
Instantly analyze any font you find on the internet. This is a great tool for designers and developers.

“With one click, an entire paragraph’s font family, color (both hex and RGB), spacing, transformation, and element details are shown.”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post April’s Featured Add-ons appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

The Servo Blog: These Weeks In Servo 96

Mozilla planet - ma, 03/04/2017 - 02:30

In the last two weeks, we landed 223 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017. Q2 plans will appear shortly; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions
  • SimonSapin improved the test harness for the CSS parser.
  • emilio fixed the source of frequent intermittent failures in our automated tests.
  • SimonSapin allowed the CSS parser to accept less strict @font-face rules.
  • nox removed all interior mutability from the implementation of the Fetch algorithm.
  • canaltinova made font-family CSS properties respect their original form when serializing.
  • nical fixed an assert failure stemming from external images that exceeded the max texture size.
  • nox implemented the websocket HTTP handshake.
  • vmx made the SpiderMonkey crate compile on Android x86.
  • streichgeorg implemented CSS parsing and serialization for the initial-letter property.
  • glennw added support for box shadows with border radii in WebRender.
  • jdm made the Rust SpiderMonkey bindings automatically invoke JS_ShutDown.
  • mrobinson corrected the scroll roots used for absolutely positioned elements.
  • emilio fixed the serialization of calc() expressions that were simplified during parsing.
  • froydnj reduced the amount of memory used by IDNA data tables in rust-url.
  • kvark split the drawing of rounded rectangles into opaque and transparent operations.
  • stshine replaced explicit style fixups during layout with more internal pseudo elements.
  • bholley made URLs more efficient for Stylo.
  • MortimerGoro fixed the crash occurring when moving Servo to the background on Android.
  • ferjm improved the performance of the image cache by making it per-document instead of global.
  • bd339 made the writing-mode CSS property affect the computed display of affected elements.
  • mephisto41 implemented gradient border support.
  • emilio reduced the impact of the bloom filter on complex CSS selectors.
  • avadacatavra and nox upgraded hyper and OpenSSL past the old, deprecated versions previously in use.
  • gterzian implemented support for structured clones of Blobs.
  • TheKK corrected the test harness for <a> elements with referrer policies.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Categorieën: Mozilla-nl planet

Mike Hommey: git-cinnabar experimental features

Mozilla planet - zo, 02/04/2017 - 00:54

Since version 0.4.0, git-cinnabar has a few hidden experimental features. Two of them are available in 0.4.0, and a third was recently added on the master branch.

The basic mechanism to enable experimental features is to set a preference in the git configuration with a comma-separated list of features to enable, or all, for all of them. That preference is cinnabar.experiments.

Any means to set a git configuration can be used. You can:

  • Add the following to .git/config: [cinnabar] experiments=feature
  • Or run the following command: $ git config cinnabar.experiments feature
  • Or only enable the feature temporarily for a given command: $ git -c cinnabar.experiments=feature command arguments

But what features are there?

wire

In order to talk to Mercurial repositories, git-cinnabar normally uses the mercurial python modules. This experimental feature makes it possible to access Mercurial repositories without using the mercurial python modules, relying instead on git-cinnabar-helper to connect to the repository through the mercurial wire protocol.

As of version 0.4.0, the feature is automatically enabled when Mercurial is not installed.

merge

Git-cinnabar currently doesn’t allow pushing merge commits. The main reason for this is that generating the correct mercurial data for those merges is tricky, and needs to be gotten right.

In version 0.4.0, enabling this feature allows pushing merge commits as long as the parent commits are available on the mercurial repository. If they aren’t, you need to push them independently first, and then push the merge.

On current master, that limitation doesn’t exist anymore; you can just push everything in one go.

The main caveat with this experimental support for pushing merges is that it currently doesn’t handle the case where a file was moved on one of the branches the same way mercurial would (i.e. the information would be lost to mercurial users).

clonebundles

As of mercurial 3.6, Mercurial servers can opt-in to providing pre-generated bundles, which, when clients support it, takes CPU load off the server when a clone is performed. Good for servers, and usually good for clients too when they have a fast network connection, because downloading a pre-generated bundle is usually faster than waiting for the server to generate one.

As of a few days ago, the master branch of git-cinnabar supports cloning using those pre-generated bundles, provided the server advertises them (mozilla-central does).

Categorieën: Mozilla-nl planet

Mike Hommey: Progress on git-cinnabar memory usage

Mozilla planet - za, 01/04/2017 - 11:45

This all started when I figured out that git-cinnabar was using crazy amounts of memory when cloning mozilla-central. That pointed to memory allocation patterns that triggered a suboptimal behavior in the glibc memory allocator, and, while overall, git-cinnabar wasn’t really abusing memory all things considered, it happened to be realloc()ating way too much.

It also turned out that recent changes on the master branch had made most uses of fast-import synchronous, making the whole process significantly slower.

This is where we started from on 0.4.0:

And on the master branch as of be75326:

An interesting thing to note here is that the glibc allocator runaway memory use was, this time, more pronounced on 0.4.0 than on master. It was the opposite originally, but as I mentioned in the past, ASLR makes it not happen exactly the same way each time.

While I’m here, one thing I failed to mention in the previous posts is that all these measurements were done by cloning a local mercurial clone of mozilla-central, served from localhost via HTTP to eliminate the download time from hg.mozilla.org. And while mozilla-central itself has received new changesets since the first post, the local clone has not been updated, such that all subsequent clone tests I did were cloning the exact same repository under the exact same circumstances.

After last blog post, I focused on the low hanging fruits identified so far:

  • Moving the mercurial to git SHA1 mapping to the helper process (Finding a git bug in the process).
  • Tracking mercurial manifest heads in the helper process.
  • Removing most of the synchronous calls to the helper happening during a clone.

And this is how things now look on the master branch as of 35c18e7:

So where does that put us?

  • The overall clone is now about 11 minutes faster than 0.4.0 (and about 50 minutes faster than master as of be75326!)
  • Non-shared memory use of the git-remote-hg process stays well under 2GB during the whole clone, with no spike at the end.
  • git-cinnabar-helper now uses more memory, but the sum of both processes is less than what it used to be, even when compensating for the glibc memory allocator issue. One thing to note is that while the git-cinnabar-helper memory use goes above 2GB at the end of the clone, a very large part is due to the pack window size being 1GB on 64-bits (vs. 32MB on 32-bits). Memory usage should stay well under the 2GB address space limit on a 32-bits system.
  • CPU usage is well above 100% for most of the clone.

On a more granular level:

  • The “Import manifests” phase is now 13 minutes faster than it was in 0.4.0.
  • The “Read and import files” phase is still almost 4 minutes slower than in 0.4.0.
  • The “Import changesets” phase is still almost 2 minutes slower than in 0.4.0.
  • But the “Finalization” phase is now 3 minutes faster than in 0.4.0.

What this means is that there’s still room for improvement. But at this point, I’d rather focus on other things.

Logging all the memory allocations with the python allocator disabled still resulted in a 6.5GB compressed log file, containing 2.6 billion calls to malloc, calloc, free and realloc (down from 2.7 billion in be75326). The number of allocator calls made by the git-remote-hg process is down to 2.25 billion (from 2.34 billion in be75326).

Surprisingly, while more things were moved to the helper, it still made fewer allocations than in be75326: 345 million, down from 363 million. Presumably, this is because the number of commands processed by the fast-import code was reduced.

Let’s now take a look at the various metrics we analyzed previously (the horizontal axis represents the number of allocator calls that happened before the measurement):

A few observations to make here:

  • The allocated memory (requested bytes) is well below what it was, and the spike at the end is entirely gone. It also more closely follows the amount of raw data we’re holding on to (which makes sense since most of the bookkeeping was moved to the helper)
  • The number of live allocations (allocated memory pointers that haven’t been free()d yet) has gone significantly down as well.
  • The cumulated[*] bytes are now in a much more reasonable range, with the lower bound close to the total amount of data we’re dealing with during the clone, and the upper bound slightly over twice that amount (the upper bound for the be75326 is not shown here, but it was around 45TB; less than 7TB is a big improvement).
  • There are fewer allocator calls during the first phases and the “Importing changesets” phase, but more during the “Reading and importing files” and “Importing manifests” phases.

[*] The upper bound is the sum of all sizes ever given to malloc, calloc, realloc etc. and the lower bound is the same, but removing the size of allocations passed as input to realloc (in practical words, this pretends reallocs never happened and that the final size for a given reallocated pointer is the one that counts)

So presumably, some of the changes led to more short-lived allocations. Considering python uses its own allocator for sizes smaller than 512 bytes, it’s probably not so much of a problem. But let’s look at the distribution of buffer sizes (including all sizes given to realloc).

(Bucket size is 16 bytes)

What is not completely obvious from the logarithmic scale is that, in fact, 98.4% of the allocations are less than 512 bytes with the current master (35c18e7), and they were 95.5% with be75326. Interestingly, though, in absolute numbers, there are fewer allocations smaller than 512 bytes in current master than in be75326 (1,194,268,071 vs 1,214,784,494). This suggests the extra allocations that happen during some phases are larger than that.

There are clearly fewer allocations across the board (apart from very few exceptions), and close to an order of magnitude fewer allocations larger than 1MiB. In fact, widening the bucket size to 32KiB shows an order of magnitude difference (or close) for most buckets:

An interesting thing to note is how some sizes are largely overrepresented in the data with buckets of 16 bytes, like 768, 1104, 2048, 4128, with other smaller bumps for e.g. 2144, 2464, 2832, 3232, 3696, 4208, 4786, 5424, 6144, 6992, 7920… While some of those are powers of 2, most aren’t, and some of them may actually represent objects sized with a power of 2, but that have an extra PyObject overhead.

While looking at allocation stats, I got to wonder what the lifetimes of those allocations looked like. So I scanned the allocator logs and measured the distance between when an allocation is made and when it is freed, ignoring reallocs.

To give a few examples of what I mean, the following allocation for p gets a lifetime of 0:

void *p = malloc(42); free(p);

The following a lifetime of 1:

void *p = malloc(42); void *other = malloc(42); free(p);

And the following a lifetime of 1 as well:

void *p = malloc(42); p = realloc(p, 84); free(p);

(that is, it is not counted as two malloc/free pairs)

The further away the free is from the corresponding malloc, the larger the lifetime. And the largest the lifetime can ever be is the total number of allocator function calls minus two, in the hypothetical case the very first allocation is freed as the very last (minus two because we defined the lifetime as the distance).
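The measurement described above can be sketched as a small scan over the allocator event log (an illustrative reimplementation, not the actual analysis script):

```javascript
// Compute allocation lifetimes as defined above: the number of other
// allocator calls between a pointer's creation and its free. A realloc
// carries the original birth index forward, so a malloc/realloc/free
// chain counts as a single allocation, not two pairs.
function allocationLifetimes(events) {
  const born = new Map(); // live pointer -> index of the call that created it
  const lifetimes = [];
  events.forEach((ev, i) => {
    if (ev.op === 'malloc' || ev.op === 'calloc') {
      born.set(ev.ptr, i);
    } else if (ev.op === 'realloc') {
      const birth = born.get(ev.old); // realloc may move the block
      born.delete(ev.old);
      born.set(ev.ptr, birth);
    } else if (ev.op === 'free') {
      lifetimes.push(i - born.get(ev.ptr) - 1);
      born.delete(ev.ptr);
    }
  });
  return lifetimes;
}

// First example from the text: malloc immediately followed by free.
console.log(allocationLifetimes([
  { op: 'malloc', ptr: 'p' },
  { op: 'free', ptr: 'p' },
])); // → [0]
```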

What comes out of this data:

  • As expected, there are more short-lived allocations in 35c18e7.
  • Around 90% of allocations have a lifetime spanning 10% of the process life or less. This is a rather surprisingly large amount of allocations with a very large lifetime.
  • Around 80% of allocations have a lifetime spanning 0.01% of the process life or less.
  • The median lifetime is around 0.0000002% (2×10⁻⁷%) of the process life, which, in absolute terms, is around 500 allocator function calls between a malloc and a free.
  • If we consider every imported changeset, manifest and file to require a similar number of allocations, and considering there are about 2.7M of them in total, each spans about 3.7×10⁻⁷% of the process life. About 53% of all allocations on be75326 and 57% on 35c18e7 have a lifetime below that. Whenever I get to look more closely at memory usage again, I’ll probably look at the data separately for each individual phase.
  • One surprising fact, which doesn’t appear on the graph because the logarithmic scale doesn’t show “0” on the horizontal axis, is that 9.3% of all allocations on be75326 and 7.3% on 35c18e7 have a lifetime of 0. That is, whatever the code using them is doing, it’s not allocating or freeing anything else in between, and not reallocating them either.

All in all, what the data shows is that we’re definitely in a better place now than we used to be a few days ago, and that there is still work to do on the memory front, but:

  • As mentioned in a previous post, there are bigger wins to be had from not keeping manifests data around in memory at all, and by importing it directly instead.
  • In time, a lot of the import code is meant to move to the helper, where the constraints are completely different, and it might not be worth spending time now on reducing the memory usage of python code that might go away soon(ish). The situation was bad and necessitated action rather quickly, but we’re now in a place where it’s not as bad anymore.

So at this point, I won’t look any deeper into the memory usage of the git-remote-hg python process, and will instead focus on the planned metadata storage changes. They will make it easier to share the metadata (allowing a faster and more straightforward gecko-dev graft), and will allow importing manifests earlier, which, as mentioned already, will help reduce memory use, but, more importantly, will make it possible to do more actual work while downloading the data. On slow networks, this is crucial to make clones and pulls faster.

Categorieën: Mozilla-nl planet

Gervase Markham: Root Store Policy 2.4.1 Published

Mozilla planet - vr, 31/03/2017 - 22:11

Version 2.4.1 of Mozilla’s CA Policy has now been published. This document incorporates by reference the Common CCADB Policy 1.0 and the Mozilla CCADB Policy 1.0. Neither of these latter two documents has changed in this revision cycle.

This version has no new normative provisions; it is a rearrangement and reordering of the existing policy 2.4. Diffs against 2.4 are not provided because they are not useful; everything appears to have changed textually, even if nothing has changed normatively.

It’s on days like this that one remembers that making the Internet a better, safer and more secure place often involves doing things which are very mundane. :-) The next job will be to work on version 2.5, of which more later.

Categorieën: Mozilla-nl planet

Gervase Markham: Happy Birthday, Mozilla!

Mozilla planet - vr, 31/03/2017 - 22:08

Mozilla is 19 today :-)

Categorieën: Mozilla-nl planet

Air Mozilla: Bedrock: From Code to Production

Mozilla planet - vr, 31/03/2017 - 21:28

A presentation on how changes to our flagship website (www.mozilla.org) are made, and how to request them so that they're as high-quality and quick-to-production as...

Categorieën: Mozilla-nl planet

Pagina's