Planet Mozilla

Updated: 16 hours 10 min ago

Mark Finkle: Random Management: Unblocking Technical Leadership

Sun, 16/08/2015 - 00:09

I’m an Engineering Manager, previously a Senior Developer. I have done a lot of coding. I still try to do a little coding every now and then. Because of my past, I could be oppressive to senior developers on my teams. When making decisions, I found myself providing both the management viewpoint and the technical viewpoint. This usually meant I was keeping a perfectly qualified technical person from participating at a higher level of responsibility. This creates an unhealthy technical organization with limited career growth opportunities.

As a manager with a technical background, I found it difficult to separate the two roles, but admitting there was a problem was a good first step. Over the last few years, I have been trying to get better at creating more room for technical people to grow on my teams. It seems to be more about focusing on outcomes for them to target, finding opportunities for them to tackle, listening to what they are telling me, and generally staying out of the way.

Another thing to keep in mind, it’s not just an issue with management. The technical growth track is a lot like a ladder: Keep developers climbing or everyone can get stalled. We need to make sure Senior Developers are working on suitable challenges or they end up taking work away from Junior Developers.

I mentioned this previously, but it’s important to create a path for technical leadership. With that in mind, I’m really happy about the recently announced Firefox Technical Architects Group: it creates challenges for our technical leadership, and roles with more responsibility and visibility. I’m also interested to see if we get more developers climbing the ladder.

Categories: Mozilla-nl planet

fantasai: Open Letter on the Iran Nuclear Deal

Sat, 15/08/2015 - 19:00
An argument in favor of approving the Iran Nuclear Deal.
Categories: Mozilla-nl planet

Frédéric Wang: MathML Accessibility (part II)

Sat, 15/08/2015 - 11:50

As announced in a previous blog post, I was invited to two Mozilla Work Weeks in Toronto and Whistler during the month of June. Before these work weeks, the only assistive technology able to read MathML in Gecko-based browsers was NVDA, via the help of the third-party MathPlayer plugin developed by Design Science, as shown in the following video:

Thanks to the effort done during these work weeks plus some additional days, we have made good progress on exposing MathML via accessibility APIs on other platforms: Mac OS X, Linux, Android and Firefox OS. Note that Firefox for iOS uses WebKit, so MathML should already be exposed and handled via Apple's WebKit/VoiceOver. If you are not familiar with accessibility APIs (and actually even if you are), I recommend reading Marco Zehe's excellent blog post about why accessibility APIs matter.
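For readers who have not seen MathML before, the markup involved is ordinary XML embedded in the page. A minimal fragment for the fraction a/b (a made-up example, not one taken from the videos) looks like this:

```xml
<!-- MathML for the fraction a/b; assistive technologies read this
     tree structure rather than the rendered pixels. -->
<math xmlns="">
  <mfrac>
    <mi>a</mi>
    <mi>b</mi>
  </mfrac>
</math>
```

It is this element tree that the browser must expose through each platform's accessibility API so that a screen reader can speak "a over b".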

Apple was the first company to rely on accessibility APIs to make MathML accessible: WebKit exposes MathML via its NSAccessibility protocol, and it can then be handled by the VoiceOver assistive technology. One of the obvious consequences of working with open standards and open accessibility APIs is that it was then relatively easy for us to make MathML accessible on Mac OS X: we basically just read the WebKit source code to verify how MathML is exposed and did the same for Gecko. The following video shows VoiceOver reading a Wikipedia page with MathML mode enabled in Gecko 41:

Of course, one of the disadvantages is that VoiceOver is proprietary, so we are dependent on what Apple actually implements for MathML and cannot easily propose patches to fix bugs or add support for new languages. This is however still more convenient for users than the proprietary MathPlayer plugin used by NVDA: at least VoiceOver is installed by default on Apple's products and well integrated into their user & accessibility interfaces. For instance, I was able to use the standard user interface to select the French language in VoiceOver and it worked immediately. For NVDA+MathPlayer, there are several configuration menus (one for the Windows system, one for NVDA and one for MathPlayer) and even after selecting French everywhere and rebooting, the math formulas were still read in English...

The next desktop platform we worked on was Linux. We continued to improve how Gecko exposes MathML via the ATK interface, but the most important work was done by Joanmarie Diggs: making Orca able to handle the exposed MathML accessibility tree. Compared to the previous solutions, this one is 100% open, and I was happy to be able to submit a couple of patches to Orca and to work with the Gnome Translation Team to keep the French translation up-to-date. By the way, if you are willing to contribute to the localization of Orca into your language, feel free to join the Gnome Translation Project; help will be much appreciated! The following video shows how Orca reads the previous Wikipedia page in Nightly builds:

On mobile platforms (Android and Firefox OS) we use a common JavaScript layer called AccessFu to handle Gecko's internal accessibility tree. So all of this is handled by Mozilla and hence is also 100% open. As I said in my previous blog post, I was not really aware of the internal details before the Work Weeks, so it was good to get more explanations and help from Yura Zenevich. Although we were able to do some preliminary work to add MathML support to AccessFu in bug 1163374, this will definitely need further improvements. So I will not provide any demo for now :-)

To conclude this overview, you can check the status of accessibility on the Mozilla MathML Project page. This page also contains a table of MathML tests and how they are handled on the various platforms. At the end of September, I will travel to Toronto to participate in the Mozilla and FOSS Assistive Technology Meetup. In particular, I hope to continue improvements to MathML accessibility in Mozilla products... Stay tuned!

Categories: Mozilla-nl planet

Vladan Djeric: New policy: 24-hour backouts for major Talos regressions

Sat, 15/08/2015 - 03:22

Now that I’ve caught your attention with a sufficiently provocative title, please check out this new Talos regression policy that we* will be trying out starting next week :)

tl;dr: Perf sheriffs will back out any Talos regression of 10% or more if it affects a reliable test on Windows. We’ll give the patch author 24 hours to explain why the regression is acceptable and shouldn’t be backed out. Perf sheriffs will aim to have such regressions backed out within 48 hours of landing.

I promise this policy is much more nuanced and thought-through than the title or summary might suggest :) But I really want to hear developers’ opinions.

* I’m taking point on publicizing this new policy and answering any questions, but Joel Maher, William Lachance and Vaibhav Agarwal of the A-Team did all the heavy lifting. They built the tools for detecting & investigating Talos regressions and they’re the perf sheriffs.

Avi Halachmi from my team is helping to check the tools for correctness. I just participate in Talos policy decisions and occasionally act as an (unintentional) spokesperson :)

Categories: Mozilla-nl planet

David Weir (satdav): Mozilla now supports en-gb on sites

Sat, 15/08/2015 - 02:15

Mozilla is now supporting en-gb (English, Great Britain).


Marketplace, Add-ons, and Mozillians will have this enabled within the next couple of weeks.

Webmaker will be going live on Monday 17th August 2015.

Websites that have this at present:



Categories: Mozilla-nl planet

David Weir (satdav): Windows 10 testday

Sat, 15/08/2015 - 02:05

Hello fellow Mozillians, I am arranging a Windows test day for users of Windows 10.

I understand that this might be an issue for some users, so if you don't have Windows 10, I am happy for you to test on another operating system.

It will be the best build of Firefox that we are going to be testing.

Full details of the event can be seen here

Categories: Mozilla-nl planet

Mozilla Addons Blog: Add-on Compatibility for Firefox 41

Fri, 14/08/2015 - 22:19

Firefox 41 will be released on September 22nd. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 41 for Developers, so you should also give it a look.

Extension signing
  • This is the first version of Firefox that will enforce our new signing requirements. Firefox 40 only warned about unsigned extensions; Firefox 41 will disable them by default. All AMO add-ons have already been signed, and we’re in the process of reviewing non-AMO add-ons.

Please let me know in the comments if there’s anything missing or incorrect on these lists. If your add-on breaks on Firefox 41, I’d like to know.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in the coming weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 40.

Categories: Mozilla-nl planet

QMO: Seeking participants interested in a FX OS testing event

Fri, 14/08/2015 - 19:54

On October 24-25th we are planning a joint l10n/QA hackathon-style meetup in Paris, France. This will be similar in format to the first event we held in July in Lima, Peru.

If you are interested in participating, please visit this site to learn more about the event. We will select 5 contributors who reside in the EU to participate in this exciting event. This is a great opportunity to learn more about Firefox OS QA and directly contribute to work on one of the Mozilla QA functional teams.

The deadline for application submission is Wednesday, August 26, 2015.

Please contact if you have any questions.

Categories: Mozilla-nl planet

Nick Desaulniers: My SIGGRAPH 2015 Experience

Fri, 14/08/2015 - 19:10

I was recently lucky enough to get to attend my first SIGGRAPH conference this year. While I didn’t attend any talks, I did spend some time in the expo. Here is a collection of some of the neat things I saw at SIGGRAPH 2015. Sorry it’s not more collected; I didn’t have the intention of writing a blog post until after folks kept asking me “how was it?”


Most booths had demos on VR headsets. Many were DK2s and GearVRs. AMD and NVIDIA had Crescent Bays (the next-gen VR headset). It was noticeably lighter than the DK2, and I thought it rendered at better quality. It had nicer cable bundling and built-in headphones that could fold up and lock out of the way, which made it easy to put on and take off. I also tried a Sony Morpheus. They had a very engaging demo that was a tie-in to the upcoming movie about tightrope walking, “The Walk”. They had a thin PVC pipe taped to the floor that you had to balance on, and a fan, and you were tightrope walking between the Twin Towers. Looking down and trying to balance was terrifying. There were some demos with a strange mobile VR setup where folks had a backpack on with an open laptop hanging off the back and could walk around. Toyota and Ford had demos where you could inspect their vehicles in virtual space. I did not see a single HTC/Valve Vive at SIGGRAPH.


Epson had some AR glasses. They were very glasses-friendly, unlike most VR headsets. The nose piece was flexible, and if you flattened it out, the headset could rest on top of your glasses and work well. The headset had some very thick compound lenses. There was a front-facing camera, and they had a simple demo using image recognition of simple logos (like QR codes) that helped provide position data. There were other demos with orientation tracking that worked well. They didn't have positional sensor info, but had a hack that tried to estimate positional velocity from the angular momentum (I spoke with the programmer who implemented it).


There was a demo of holograms using tilted pieces of plastic arranged in a box. Also, there was an array of 200+ projectors that projected a scene onto a special screen. When walking around the screen, the viewing angle always seemed correct. It was very convincing, except for the jarring restart of the animated loop, which could be smoothed out (think looping/seamless gifs).

VR/3D video

Google Cardboard had a booth showing off 3D videos from YouTube. I had a hard time telling if the videos were stereoscopic or monoscopic, since the demo videos only had things in the distance, making it hard to tell if parallax was implemented correctly. A bunch of booths were showing off 3D video, but as far as I could tell, all of the correctly rendered stereoscopic shots were computer rendered. I could not find a single instance with footage shot from a stereoscopic rig, though I tried.


NVIDIA and Intel had the largest booths, followed by Pixar's RenderMan. It felt like a GDC event: smaller, but definitely larger than GDC Next. There was more focus on shiny photorealism demos and artistic tools, less on game engines themselves.

Vulkan/OpenGL ES 3.2

Intel had demos of Vulkan and OpenGL ES 3.2. For 3.2 they were showing off tessellation shaders, I think. For Vulkan, they had a cool demo showing that with a particle scene rendered with OpenGL 4, a single CPU was pegged, power use was high, and the framerate was pretty abysmal. When rendering the same scene with Vulkan, they were able to distribute the workload more evenly across CPUs, achieve a higher framerate, and use less power. The Vulkan API is still not published, so no source code is available. It was explained to me that Vulkan is not thread safe; instead, you get the freedom to implement synchronization yourself rather than leaving it to the driver.


There was a neat demo of a planetarium projector being repurposed to display an “on rails” demo of a virtual scene. You didn't get parallax since it was being projected on a hemisphere, but it was neat in that, like IMAX, your entire FOV was encompassed; you could move your head, not see any pixels, and not experience any motion sickness or disorientation.


I spoke with some folks at the X3D booth about X3DOM. To me, it seems like a bunch of previous attempts have added too much complexity in an effort to support every use case under the sun, rather than just accepting limitations; so much so that getting started writing hello world became difficult. Some of the folks I spoke to at the booth echoed this sentiment, but also noted the lack of authoring tools as something that hurt adoption. I have some neat things I'm working on in this space, based on this and other prior works, that I plan on showing off at the upcoming BrazilJS.

Maker Faire

There was a cool maker faire; some things I'll have to order for family members (young hackers in training) were Canny bots, eBee and Piper.

Experimental tech

There were a bunch of neat input devices; one I liked used directional sound as tactile feedback. One demo was rearranging icons on a home screen. Rather than touching the screen, you moved your finger over a field of tiny speakers that would blast it with sound when it entered, to simulate the feeling of vibration. It would vibrate to let you know you had “grabbed” an icon, which you could then drag.

Book Signing

This was the first time I got to see my book printed in physical form! It looked gorgeous, hardcover printed in color. I met about half of the fellow authors who were also at SIGGRAPH, and our editor. I even got to meet Eric Haines, who reviewed my chapter before publication!

Categories: Mozilla-nl planet

Air Mozilla: Webmaker Demos August 14 2015

Fri, 14/08/2015 - 19:00

Webmaker Demos August 14 2015

Categories: Mozilla-nl planet

Laura de Reynal: The remix definition

Fri, 14/08/2015 - 18:18

“What does remixing mean ? To take something that’s pretty good, and add your touch to properly make it better with no disrespect to the creator.”

15-year-old teenager, Chicago

Filed under: Mozilla
Categories: Mozilla-nl planet

The Rust Programming Language Blog: Rust in 2016

Fri, 14/08/2015 - 02:00

This week marks three months since Rust 1.0 was released. As we’re starting to hit our post-1.0 stride, we’d like to talk about what 1.0 meant in hindsight, and where we see Rust going in the next year.

What 1.0 was about

Rust 1.0 focused on stability, community, and clarity.

Altogether, Rust is exciting because it is empowering: you can hack without fear. And you can do so in contexts you might not have before, dropping down from languages like Ruby or Python, making your first foray into systems programming.

That’s Rust 1.0; but what comes next?

Where we go from here

After much discussion within the core team, with early production users, and with the broader community, we’ve identified a number of improvements we’d like to make over the course of the next year or so, falling into three categories:

  • Doubling down on infrastructure;
  • Zeroing in on gaps in key features;
  • Branching out into new places to use Rust.

Let’s look at some of the biggest plans in each of these categories.

Doubling down: infrastructure investments

Crater

Our basic stability promise for Rust is that upgrades between versions are “hassle-free”. To deliver on this promise, we need to detect compiler bugs that cause code to stop working. Naturally, the compiler has its own large test suite, but that is only a small fraction of the code that’s out there “in the wild”. Crater is a tool that aims to close that gap by testing the compiler against all the packages found in the registry, giving us a much better idea whether any code has stopped compiling on the latest nightly.

Crater has quickly become an indispensable tool. We regularly compare the nightly release against the latest stable build, and we use crater to check in-progress branches and estimate the impact of a change.

Interestingly, we have often found that when code stops compiling, it’s not because of a bug in the compiler. Rather, it’s because we fixed a bug, and that code happened to be relying on the older behavior. Even in those cases, using crater helps us improve the experience, by suggesting that we should phase fixes in slowly with warnings.

Over the next year or so, we plan to improve crater in numerous ways:

  • Extend the coverage to other platforms beyond Linux, and run test suites on covered libraries as well.
  • Make it easier to use: leave an @crater: test comment to try out a PR.
  • Produce a version of the tool that library authors can use to see effects of their changes on downstream code.
  • Include code from other sources beyond the registry.
Incremental compilation

Rust has always had a “crate-wide” compilation model. This means that the Rust compiler reads in all of the source files in your crate at once. These are type-checked and then given to LLVM for optimization. This approach is great for doing deep optimization, because it gives LLVM full access to the entire set of code, allowing for better inlining, more precise analysis, and so forth. However, it can mean that turnaround is slow: even if you only edit one function, we will recompile everything. When projects get large, this can be a burden.

The incremental compilation project aims to change this by having the Rust compiler save intermediate by-products and re-use them. This way, when you’re debugging a problem, or tweaking a code path, you only have to recompile those things that you have changed, which should make the “edit-compile-test” cycle much faster.

Part of this project is restructuring the compiler to introduce a new intermediate representation, which we call MIR. MIR is a simpler, lower-level form of Rust code that boils down the more complex features, making the rest of the compiler simpler. This is a crucial enabler for language changes like non-lexical lifetimes (discussed in the next section).

IDE integration

Top-notch IDE support can help to make Rust even more productive. Up until now, pioneering projects like Racer, Visual Rust, and RustDT have been working largely without compiler support. We plan to extend the compiler to permit deeper integration with IDEs and other tools; the plan is to focus initially on two IDEs, and then grow from there.

Zeroing in: closing gaps in our key features

Specialization

The idea of zero-cost abstractions breaks down into two separate goals, as identified by Stroustrup:

  • What you don’t use, you don’t pay for.
  • What you do use, you couldn’t hand code any better.

Rust 1.0 has essentially achieved the first goal, both in terms of language features and the standard library. But it doesn’t quite manage to achieve the second goal. Take the following trait, for example:

pub trait Extend<A> {
    fn extend<T>(&mut self, iterable: T) where T: IntoIterator<Item=A>;
}

The Extend trait provides a nice abstraction for inserting data from any kind of iterator into a collection. But with traits today, that also means that each collection can provide only one implementation that works for all iterator types, which requires actually calling .next() repeatedly. In some cases, you could hand code it better, e.g. by just calling memcpy.
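To make the cost concrete, here is a small sketch (with made-up values) of the single Extend implementation at work: the same code path must accept any iterator, even when the source is another vector that could in principle be copied wholesale:

```rust
// Vec's one blanket Extend impl pulls elements through next() one at a
// time, whether the source is contiguous memory or a lazy iterator.
fn extend_demo() -> Vec<u32> {
    let mut v: Vec<u32> = vec![1, 2];
    v.extend(vec![3, 4]); // contiguous source: could be a memcpy, but isn't
    v.extend(5..7);       // lazy range: element-by-element is unavoidable
    v
}

fn main() {
    assert_eq!(extend_demo(), [1, 2, 3, 4, 5, 6]);
}
```

Both calls produce the same result; the point is that today the collection cannot pick a faster path for the first one.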

To close this gap, we’ve proposed specialization, allowing you to provide multiple, overlapping trait implementations as long as one is clearly more specific than the other. Aside from giving Rust a more complete toolkit for zero-cost abstraction, specialization also improves its story for code reuse. See the RFC for more details.

Borrow checker improvements

The borrow checker is, in a way, the beating heart of Rust; it’s the part of the compiler that lets us achieve memory safety without garbage collection, by catching use-after-free bugs and the like. But occasionally, the borrow checker also “catches” non-bugs, like the following pattern:

match map.find(&key) {
    Some(...) => { ... }
    None => { map.insert(key, new_value); }
}

Code like the above snippet is perfectly fine, but the borrow checker struggles with it today because the map variable is borrowed for the entire body of the match, preventing it from being mutated by insert. We plan to address this shortcoming soon by refactoring the borrow checker to view code in terms of finer-grained (“non-lexical”) regions – a step made possible by the move to the MIR mentioned above.
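In the meantime, a common way to express this find-or-insert pattern on stable Rust is the map's entry API, which does the lookup once and never holds a conflicting borrow (a sketch with hypothetical key and value types):

```rust
use std::collections::HashMap;

// entry() combines the find and the insert into a single borrow of the
// map, sidestepping the borrow-checker limitation described above.
fn find_or_insert(map: &mut HashMap<String, u32>, key: &str, new_value: u32) -> u32 {
    *map.entry(key.to_string()).or_insert(new_value)
}

fn main() {
    let mut map = HashMap::new();
    assert_eq!(find_or_insert(&mut map, "count", 7), 7); // key absent: inserted
    assert_eq!(find_or_insert(&mut map, "count", 9), 7); // key present: kept
}
```

Non-lexical regions would make the original match-based version compile as written, without requiring this restructuring.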


Plugins

There are some really neat things you can do in Rust today – if you’re willing to use the Nightly channel. For example, the regex crate comes with macros that, at compile time, turn regular expressions directly into machine code to match them. Or take the rust-postgres-macros crate, which checks strings for SQL syntax validity at compile time. Crates like these make use of a highly unstable compiler plugin system that currently exposes far too many compiler internals. We plan to propose a new plugin design that is more robust and provides built-in support for hygienic macro expansion as well.

Branching out: taking Rust to new places

Cross-compilation

While cross-compiling with Rust is possible today, it involves a lot of manual configuration. We’re shooting for push-button cross-compiles. The idea is that compiling Rust code for another target should be easy:

  1. Download a precompiled version of libstd for the target in question, if you don’t already have it.
  2. Execute cargo build --target=foo.
  3. There is no step 3.
Cargo install

Cargo and are a really great way to distribute libraries, but they lack any means to install executables. RFC 1200 describes a simple addition to cargo, the cargo install command. Much like the conventional make install, cargo install will place an executable in your path so that you can run it. This can serve as a simple distribution channel, and is particularly useful for people writing tools that target Rust developers (who are likely to be familiar with running cargo).

Tracing hooks

One of the most promising ways of using Rust is by “embedding” Rust code into systems written in higher-level languages like Ruby or Python. This embedding is usually done by giving the Rust code a C API, and works reasonably well when the target sports a “C friendly” memory management scheme like reference counting or conservative GC.

Integrating with an environment that uses a more advanced GC can be quite challenging. Perhaps the most prominent examples are JavaScript engines like V8 (used by node.js) and SpiderMonkey (used by Firefox and Servo). Integrating with those engines requires very careful coding to ensure that all objects are properly rooted; small mistakes can easily lead to crashes. These are precisely the kind of memory management problems that Rust is intended to eliminate.

To bring Rust to environments with advanced GCs, we plan to extend the compiler with the ability to generate “trace hooks”. These hooks can be used by a GC to sweep the stack and identify roots, making it possible to write code that integrates with advanced VMs smoothly and easily. Naturally, the design will respect Rust’s “pay for what you use” policy, so that code which does not integrate with a GC is unaffected.

Epilogue: RustCamp 2015, and Rust’s community in 2016

We recently held the first-ever Rust conference, RustCamp 2015, which sold out with 160 attendees. It was amazing to see so much of the Rust community in person, and to see the vibe of our online spaces translate into a friendly and approachable in-person event. The day opened with a keynote from Nicholas Matsakis and Aaron Turon laying out the core team’s view of where we are and where we’re headed. The slides are available online (along with several other talks), and the above serves as the missing soundtrack.

There was a definite theme of the day: Rust’s greatest potential is to unlock a new generation of systems programmers. And that’s not just because of the language; it’s just as much because of a community culture that says “Don’t know the difference between the stack and the heap? Don’t worry, Rust is a great way to learn about it, and I’d love to show you how.”

The technical work we outlined above is important for our vision in 2016, but so is the work of those on our moderation and community teams, and all of those who tirelessly – enthusiastically – welcome people coming from all kinds of backgrounds into the Rust community. So our greatest wish for the next year of Rust is that, as its community grows, it continues to retain the welcoming spirit that it has today.

Categories: Mozilla-nl planet

Jonathan Griffin: Engineering Productivity Update, August 13, 2015

Fri, 14/08/2015 - 01:17
From Automation and Tools to Engineering Productivity

“Automation and Tools” has been our name for a long time, but it is a catch-all name which can mean anything, everything, or nothing, depending on the context. Furthermore, it’s often unclear to others which “Automation” we should own or help with.

For these reasons, we are adopting the name “Engineering Productivity”. This name embodies the diverse range of work we do, reinforces our mission, promotes immediate recognition of the value we provide to the organization, and encourages a re-commitment to the reason this team was originally created—to help developers move faster and be more effective through automation.

The “A-Team” nickname will very much still live on, even though our official name no longer begins with an “A”; the “get it done” spirit associated with that nickname remains a core part of our identity and culture, so you’ll still find us in #ateam, brainstorming and implementing ways to make the lives of Mozilla’s developers better.


Treeherder: Most of the backend work to support automatic starring of intermittent failures has been done. On the front end, several features were added to make it easier for sheriffs and others to retrigger jobs to assist with bisection: the ability to fill in all missing jobs for a particular push, the ability to trigger Talos jobs N times, the ability to backfill all the coalesced jobs of a specific type, and the ability to retrigger all pinned jobs. These changes should make bug hunting much easier.  Several improvements were made to the Logviewer as well, which should increase its usefulness.

Perfherder and performance testing: Lots of Perfherder improvements have landed in the last couple of weeks. See details at wlach’s blog post.  Meanwhile, lots of Talos cleanup is underway in preparation for moving it into the tree.

MozReview: Some upcoming auth changes are explained in mcote’s blog post.

Mobile automation: gbrown has converted a set of robocop tests to the newly enabled mochitest-chrome on Android. This is a much more efficient harness and converting just 20 tests has resulted in a reduction of 30 minutes of machine time per push.

Developer workflow: chmanchester is working on building annotations into files that will automatically select or prioritize tests based on files changed in a commit. See his blog post for more details. Meanwhile, armenzg and adusca have implemented an initial version of a Try Extender app, which allows people to add more jobs on an existing try push. Additional improvements for this are planned.

Firefox automation: whimboo has written a Q2 Firefox Automation Report detailing recent work on Firefox Update and UI tests. Maja has improved the integration of Firefox media tests with Treeherder so that they now officially support all the Tier 2 job requirements.

WebDriver and Marionette: WebDriver is now officially a living standard. Congratulations to David Burns, Andreas Tolfsen, and James Graham who have contributed to this standard. dburns has created some documentation which describes which WebDriver endpoints are implemented in Marionette.

Version control: The ability to read and extract metadata from files has been added to This opens the door to cool future features, like the ability to auto-file bugs in the proper component and to automatically select appropriate reviewers when pushing to MozReview. gps has also blogged about some operational changes to which enable easier end-to-end testing of new features, among other things.

The Details

Treeherder/Automatic Starring
  • almost finished the required changes to the backend (both db schema and data ingestion)
Treeherder/Front End
  • Several retrigger features were added to Treeherder to make merging and bisections easier: auto fill all missing/coalesced jobs in a push; trigger all Talos jobs N times; backfill a specific job by triggering it on all skipped commits between this commit and the commit that previously ran the job; retrigger all pinned jobs in Treeherder. This should improve bug hunting for sheriffs and developers alike.
  • [jfrench] Logviewer ‘action buttons’ are now centralized in a Treeherder style navbar
  • [jfrench] Logviewer skipped steps are now recognized as non-failures and presented as blue info steps.
  • [jfrench] Middle-mouse-clicking on a job in treeherder now launches the Logviewer
  • [vaibhav] Added the ability to retrigger all pinned jobs (bug
  • Camd’s job chunking management will likely land next week
Perfherder/Performance Testing
  • [wlach] / [jmaher] Lots of perfherder updates, details here: Highlights below
  • [wlach] The compare pushes view in Perfherder has been improved to highlight the most important information.
  • [wlach] If your try push contains Talos jobs, you’ll get a url for the Perfherder comparison view when pushing (
  • [jmaher/wlach] Talos generates suite and test level metrics and perfherder now ingests those data points. This fixes results from internal benchmarks which do their own summarization to report proper numbers.
  • [jmaher/parkouss] Big talos updates (thanks to :parkouss), major refactoring, cleanup, and preparation to move talos in tree.
MozReview/Autoland

Mobile Automation
  •  [gbrown] Demonstrated that some all-JavaScript robocop tests can run more efficiently as mochitest-chrome; about 20 such tests were converted to mochitest-chrome, saving about 30 minutes per push.
  •  [gbrown] Working on “mach emulator” support: wip can download and run 2.3, 4.3, or x86 emulator images. Sorting out cache management and cross-platform issues.
  •  [jmaher/bc] landed code for tp4m/tsvgx on autophone; getting closer to running on autophone soon.
Dev Workflow
  • [ahal] Created patch to clobber compiled python files in srcdir
  • [ahal] More progress on mach/mozlog patch
  • [chmanchester] Fix to allow ‘mach try’ to work without test arguments (bug 1192484)
Media Automation
  • [maja_zf] firefox-media-tests ‘log steps’ and ‘failure summaries’ are now compatible with Treeherder’s log viewer, making them much easier to browse. This means the jobs now satisfy all Tier-2 Treeherder requirements.
  • [sydpolk] Refactoring of tests after fixing stall detection is complete. I can now take my network bandwidth prototype and merge it in.
Firefox Automation
General Automation
  • Finished adapting mozregression ( and mozdownload ( to S3.
  • (Henrik) Isn’t this only a temporary solution, before we move to TC?
  • (armenzg) I believe so, but for the time being we’re out of the woods
  • The manifestparser dependency was removed from mozprofile (bug 1189858)
  • [ahal] Fix for
  • [sydpolk] Platform Jenkins migration to the SCL data center has not yet begun in earnest due to PTO. Hope to start making that transition this week.
  • [chmanchester] work in progress to build annotations into files to automatically select or prioritize tests based on what changed in a commit. Strawman implementation posted in , blog post about this work at
  • [adusca/armenzg] Try Extender ( ) is open for business; however, a new plan will soon be released to make a better version that integrates well with Treeherder and solves some technical difficulties we’re facing
  • [armenzg] Code has landed in mozci to allow re-triggering tasks on TaskCluster, so failed TaskCluster tasks on try can now be re-triggered.
  • [armenzg] Work to move Firefox UI tests to the test machines instead of build machines is solving some of the crash issues we were facing
  • [ahal] re-implemented test-informant to use ActiveData:
  • [ekyle] Work on stability: Monitoring added to the rest of the ActiveData machines.  
  • [ekyle] Problem: ES was not balancing the import workload on the cluster, probably because ES assumes symmetric nodes, which we do not have. The architecture was changed to prefer a better distribution of work (and query load); there now appear to be fewer OutOfMemoryExceptions, despite test-informant’s queries.
  • [ekyle] More problems: Two servers in the ActiveData complex failed. The first was the ActiveData web server, which became unresponsive, even to SSH; the machine was terminated. The second was the ‘master’ node of the ES cluster: this resulted in total data loss, but it was expected to happen eventually given the cheap configuration we have. Contingency was in place: the master was rebooted, the configuration was verified, and data was re-indexed from S3. More nodes would help with this, but given the rarity of the event, the contingency plan in place, and the low number of users, it is not yet worth paying for.
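The commit-based test selection mentioned above (annotations mapping source paths to tests) can be sketched roughly as follows; the patterns and suite names are invented for illustration and are not the strawman implementation itself:

```python
# Hypothetical sketch of commit-based test selection: in-tree annotations
# map source path patterns to test suites, and a tool picks the suites
# whose patterns match the files touched by a commit.
import fnmatch

# Illustrative annotations only; real mappings would live in-tree.
ANNOTATIONS = {
    "dom/media/*": ["mochitest-media"],
    "testing/talos/*": ["talos"],
    "layout/*": ["reftest"],
}

def select_suites(changed_files):
    """Return the sorted set of suites relevant to a commit's files."""
    suites = set()
    for path in changed_files:
        for pattern, names in ANNOTATIONS.items():
            if fnmatch.fnmatch(path, pattern):
                suites.update(names)
    return sorted(suites)
```

A commit touching only documentation would select nothing, which is exactly the prioritization win being pursued.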
WebDriver (highlights)
  • [ato] WebDriver is now officially a living standard (
  • [ato] Rewrote chapter on extending the WebDriver protocol with vendor-specific commands
  • [ato] Defined the Get Element CSS Value command in specification
  • [ato] Get Element Attribute no longer conflates DOM attributes and properties; introduces new command Get Element Property
  • [ato] Several significant infrastructural issues with the specification were fixed
  • Project managers for FxOS have a renewed interest in project tracking and overall status dashboards. Talk only, no coding yet.
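The attribute/property distinction behind the new Get Element Property command can be shown with a toy model (a teaching sketch, not WebDriver code): an attribute reflects the markup as authored, while a property reflects live DOM state.

```python
# Toy model (not WebDriver's implementation) of why conflating DOM
# attributes and properties is lossy: clicking a checkbox flips its live
# 'checked' property but leaves the 'checked' attribute in the markup alone.
class Checkbox:
    def __init__(self):
        self.attributes = {"checked": ""}    # as authored in the HTML
        self.properties = {"checked": True}  # live state

    def click(self):
        self.properties["checked"] = not self.properties["checked"]

box = Checkbox()
box.click()
print("checked" in box.attributes)   # True: markup unchanged
print(box.properties["checked"])     # False: live state flipped
```

A single conflated command could not report both answers, which is why the specification now separates them.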

Categorieën: Mozilla-nl planet

Jonas Finnemann Jensen: Getting Started with TaskCluster APIs (Interactive Tutorials)

vr, 14/08/2015 - 00:25

When we started building TaskCluster about a year and a half ago one of the primary goals was to provide a self-serve experience, so people could experiment and automate things without waiting for someone else to deploy new configuration. Greg Arndt (:garndt) recently wrote a blog post demystifying in-tree TaskCluster scheduling. The in-tree configuration allows developers to write new CI tasks to run on TaskCluster, and test these new tasks on try before landing them like any other patch.

This way of developing test and build tasks by adding in-tree configuration in a patch is very powerful, and it allows anyone with try access to experiment with configuration for much of our CI pipeline in a self-serve manner. However, not all tools are best triggered from a post-commit hook; instead, it might be preferable to have direct API access when:

  • Locating existing builds in our task index,
  • Debugging for intermittent issues by running a specific task repeatedly, and
  • Running tools for bisecting commits.

To facilitate tools like this, TaskCluster offers a series of well-documented REST APIs that can be accessed with either permanent or temporary TaskCluster credentials. We also provide client libraries for JavaScript (node/browser), Python, Go, and Java. However, because TaskCluster is a loosely coupled set of distributed components, it is not always trivial to figure out how to piece the different APIs and features together. To make these things more approachable I’ve started a series of interactive tutorials:

All these tutorials are interactive, featuring a runtime that will transpile your code with babel.js before running it in the browser. The runtime environment also exposes the require function from a browserify bundle containing some of my favorite npm modules, making the example editors a great place to test code snippets using taskcluster or related services.
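As a taste of what the APIs look like outside the tutorials, locating an existing build boils down to a single GET against the index. A minimal sketch, assuming the 2015-era index endpoint layout and an illustrative namespace (both may have changed since):

```python
# Minimal sketch of addressing the TaskCluster index REST API directly:
# findTask is a plain GET on /task/<namespace>. The base URL and the
# namespace below are assumptions from the 2015-era deployment.
BASE = "https://index.taskcluster.net/v1"

def find_task_url(namespace):
    """Return the REST URL that resolves an index namespace to its task."""
    return "%s/task/%s" % (BASE, namespace)

print(find_task_url("gecko.v2.mozilla-central.latest.firefox.linux64-opt"))
```

The client libraries wrap routes like this one, so a bisection or lookup tool can use either the raw URL or, for example, the Python client's equivalent index call.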

Happy hacking, and feel free to submit PRs for all my spelling errors at

Categorieën: Mozilla-nl planet

Air Mozilla: Intern Presentations

do, 13/08/2015 - 23:00

Intern Presentations 7 interns will be presenting what they worked on over the summer. 1. Nate Hughes - HTTP/2 on the Wire Adaptations of the Mozilla platform...

Categorieën: Mozilla-nl planet