Planet Mozilla
Updated: 23 hours 44 min ago

About:Community: Participation Lab Notes: Volunteer vs. Contributor

Thu, 01/10/2015 - 00:40


As part of the Participation Lab’s efforts we recently began conducting experiments on the Get Involved page, seeking to better understand how people navigate and connect (or fail to connect) to contribution opportunities. In preparation for this experiment we looked at some of the research that the team had conducted in recent years, and a number of their key learnings led us to a deeper conversation about the language and labels we use to invite contribution to Mozilla. Some of those learnings were:

  • People need to understand the what and the why before they’ll be interested in understanding how they can contribute to Mozilla.
  • We must make it immediately apparent that the Get Involved page is seeking volunteers and not employees.
  • We need to set clear expectations of the investment/journey needed to get involved or become a Mozillian.

This matched some other feedback, and as a result we decided to conduct a series of interviews to discover more of the ideas and prejudices that exist around the terms volunteer and contributor. Eighteen interviews covering diverse perspectives were conducted; these included core contributors, project leaders, alumni, community project leads, people working in open science and advocacy, and randomly selected people in my life who had never contributed to Mozilla. We discovered four interesting insights, shared below.

Project preference is ‘Contributor’


Overall, people working in, or already volunteering with, Mozilla were more comfortable with ‘contributor’, but agreed that unless your background was in a field like software engineering or science, where the term is already part of the language ecosystem, it might be challenging to grasp. I also noticed a trend in feedback acknowledging that once you’re regularly involved in a project you might no longer be objective, and that we may, in fact, be skewing even the most common understanding of these terms. One example given was the use of ‘paid contributor’ and ‘volunteer contributor’, which made no sense to most people outside of Mozilla.

The term ‘Volunteering’ is more universally associated with lending time and skills but…


While people seemed to generally understand that volunteering was about lending time and skills, I encountered sensitivities around the word ‘volunteer’, which invoked feelings of being ‘charitable’ vs. the more empowered feeling of being a ‘contributor’. I heard that ‘contribution’ lends itself to a feeling that we’re ‘part of something’, while ‘volunteering’ felt more detached. One core contributor felt very, very strongly that volunteering was not the term for what they do at Mozilla.

‘Contribution’ feels more like giving a gift or donation


Feedback from non-technical contributors, and those I spoke with outside the Mozilla community, indicated that the term “contribution” was easy to misinterpret as being about donating funds, or about something of greater significance than some people felt they could offer. When asked, a couple of people cited political campaigns and fundraisers as the most common association they had with the word ‘contribution’.

What’s in a Name?


It was also suggested that at Mozilla we should stop labouring over generalized terms like volunteer and contributor, and instead focus energies on clarifying the ways people can help. One person felt that such an opportunity exists in more explicit ‘role titles’, e.g. ‘Android Community Ambassador’. The hypothesis is that by providing role titles we can help people connect to opportunities that are resume-worthy, with recognition that contribution is an opportunity. Of course, there are already examples of success with role names, demonstrated by the Mozilla Reps program and most recently by Club Captains and Regional Leads in Webmaker Clubs.


We had an interesting suggestion that we make up our own name! Create a Mozilla-fied name for volunteers that makes volunteering at Mozilla a unique version of both terms. An inspiring example was the London Olympics, which called volunteers ‘Games Makers’. What a Mozilla-fied version would be remained unclear :) but I’m sure we could come up with something. What do you think?

An additional lure of a Mozilla-fied name is the chance to help people recognize the amazingness of the community they would be joining, which MDN reported to be a factor in repeat contribution in their area – similar to how Olympic volunteerism resonated with a name describing volunteers’ impact.

So where from here?


There is the opportunity for continued experimentation and testing using the Get Involved page, and we would love to hear from you – contributor, volunteer, Mozilla-fied name?

What experiment do you think the Participation Lab should design next with these new insights?



Image: “Mozilla Summit Day 2” by Roland Tanglao is licensed under CC BY 2.0

Categories: Mozilla-nl planet

About:Community: Meet an MDN contributor: klez

Wed, 30/09/2015 - 23:57

Photo of Frederico klez Culloca

Frederico klez Culloca made his first few edits to MDN in 2013, but started contributing in earnest in 2014, as a localizer for Mozilla Italia. After a couple of months, he started attending the bi-weekly MDN Community meeting, and later the Learning Area meetings. After that, he concentrated his efforts on the Learning Area of MDN, especially the Glossary.

He says that working on MDN gives him a good idea of how an organization as big as Mozilla actually works, bringing together paid staff and volunteers.

His advice for MDN newcomers?

Don’t be afraid to make mistakes in good faith. Someone is always able and willing to correct them and help you learn from your errors. It’s a wiki, after all :-)

Thanks, klez, for all your contributions to MDN!

Categories: Mozilla-nl planet

About:Community: Participation Lab Notes: The Power of Swag

Wed, 30/09/2015 - 22:51

It doesn’t take long after you’ve entered the Mozilla community to notice that swag is a big part of Mozilla. Stickers, t-shirts and lanyards are everywhere, and for many Mozillians these things have become a kind of currency with emotional and physical value.

Photo by Doug Belshaw on Flickr / CC BY 2.0

In 2014, Mozilla spent over $150,000 on swag to engage contributors across four major initiatives: Maker Party, MozFest, Mozilla Reps, and Firefox Student Ambassadors (FSAs).

However, we rarely stop to examine what we are learning about the results, benefits and challenges of this investment.

In order to surface and capture these insights, the Participation Team interviewed four groups at Mozilla for whom swag is a core part of their activities, and identified the most interesting insights and challenges faced by each group.

As a result, we discovered that many of the groups face similar challenges but have found distinct solutions and strategies for managing them. The two major insights were that by encouraging local production of swag, and creating swag that is tailored specifically to the needs of the community, costs can be minimized and value to the community increased.

Maker Party

In the past year, swag has become a much smaller part of Maker Party as the campaign has become shorter (17 days vs. the previous 2 months) and more contained. But in 2014, thousands of people spent the summer throwing events, and swag was an integral part of growing and motivating the Maker Party community.

Insight: Swag Legitimizes Hosts & Events

Much more than a form of recognition for event hosts, in many communities swag is perceived as a vote of confidence from Mozilla that legitimizes both the host and the event. Many communities feel that if we are willing to support an event host and their event with physical things, it marks them as “officially sanctioned” by Mozilla, and this alignment with the brand dramatically increases the influence and reputation of the contributor and the event.

For example, in South America, a Maker Party host created Mozilla-branded mouse pads to legitimize their events in the eyes of local internet cafe owners, who let them use their space for free in exchange for the mouse pads.

Challenge: Cost of Shipping

For Maker Party, shipping swag across the world, often to extremely remote areas, was very expensive and problematic. Certain countries charge enormous taxes on clothing and have been known to detain parcels with t-shirts – to the detriment of volunteers who often cannot pay the high customs fees.


MozFest

While MozFest, as a short-term festival, is a bit different from the other examples, it identifies another way in which we use swag to build and support community.

Insight: Swag is Key to Partnerships

Every year MozFest partners with other like-minded organizations to put on the event. As part of this relationship, partners are offered the opportunity to distribute swag and some promotional material to attendees. As a result Mozilla can produce a small amount of swag like a tote bag and water bottle, and have partners add their swag to create fun gifts for participants that also act as promotional pieces for partners and Mozilla.

Challenges: The Right Swag

Finding swag that is re-usable and has value outside of the event is challenging, but water bottles and tote bags have proven popular and effective, and have the added benefit of reducing the event’s environmental footprint.

Mozilla Reps

Unlike Maker Party or MozFest, Mozilla Reps is a community where individuals participate in multiple ways over a sustained period of time. For this group, it is often the variety rather than quantity of swag that drives excitement.

Insight: Creating a Collectors’ Culture

Within Reps, swag is a great way to acknowledge contributors and support events. However, in some circumstances, swag can come to be seen as a symbol that represents value and status in the community. Therefore, as more of a swag item is produced, the value of each item diminishes, and a collectors’ culture has developed. While rare swag is a very powerful tool for driving engagement and recognizing achievement in the Reps community, it is important to be aware of the number and variety of an item that is produced, and to carefully manage expectations to prevent swag becoming an end in and of itself.

Challenge: Mitigating Expectations

As swag is a large part of the Reps culture, it is important to be careful about the expectations that are set around the value of swag. Limiting the kinds of official swag that are produced to t-shirts, stickers, and posters, and having a clear value attached to each, may be one way to keep expectations low and guard against their inflation.


Firefox Student Ambassadors

The Firefox Student Ambassadors (FSA) program has many parallels to the Mozilla Reps program in its structure and its relationship to swag. However, by carefully controlling the value of swag and encouraging local production, it has avoided many of the challenges faced by other groups.

Insight: Careful Curation & Local Production

The FSAs have solved problems related to shipping swag, and reduced the “freebie” quality, by having FSAs create their own swag locally and then be reimbursed for the cost. Like the Reps program they also have a collectors’ culture, but set formal expectations on the “value” of different kinds of swag, e.g. t-shirts are something you have to earn, stickers are freebies you can give away at your event, and posters are something you have to produce yourself.

Challenge: Tracking Designs

Because unique designs are a large part of what gives t-shirt swag its value, there is a struggle to find and keep track of the many ways t-shirts and designs are being used across Mozilla. To improve coordination, the FSA program suggested creating a central repository of t-shirt designs, and of what/when they should be distributed, so that the use of swag can be better aligned across all of Mozilla.

Overall, across Mozilla there is a great deal being learned and experimented with around swag, as well as many areas for growth and improvement. Our hope is that by surfacing these lessons and insights, we’ll spark new conversations and gain more insight into swag processes and how they can be improved. If you have experience or thoughts around swag at Mozilla, please share them in the comments here, or on the Participation Team Discourse page.

Categories: Mozilla-nl planet

Air Mozilla: Quality Team (QA) Public Meeting

Wed, 30/09/2015 - 22:30

Quality Team (QA) Public Meeting This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...

Categories: Mozilla-nl planet

Mark Côté: Fixing MozReview's sore spots

Wed, 30/09/2015 - 20:15

MozReview was intentionally released early, with a fairly minimal feature set, and some ugly things bolted onto a packaged code-review tool. The code-review process at Mozilla hasn’t changed much since the project began—Splinter, a slightly fancier UI than dealing with raw diffs, notwithstanding. We knew this would be a controversial subject, with a variety of (invariably strong) opinions. But we also knew that we couldn’t make meaningful progress on a number of long-desired features, like autolanding commits and automatic code analysis, without moving to a modern repository-based review system. We also knew that, with the incredible popularity of GitHub, many developers expect a workflow that involves them pushing up commits for review in a rich web UI, not exporting discrete patches and looking at almost raw diffs.

Rather than spending a couple years off in isolation developing a polished system that might not hit our goals of both appealing to future Mozillians and increasing productivity overall, we released a very early product—the basic set of features required by a push-based, repository-centric code-review system. Ironically, perhaps, this has decreased the productivity of some people, since the new system is somewhat raw and most definitely a big change from the old. It’s our sincere belief that the pain currently experienced by some people, while definitely regrettable and in some cases unexpected, will be balanced, in the long run, by the opportunities to regularly correct our course and reach the goal of a world-class code review-and-submission system that much faster.

And so, as expected, we’ve received quite a bit of feedback. I’ve been noticing a pattern, which is great, because it gives us insight into classes of problems and needs. I’ve identified four categories, which interestingly correspond to levels of usage, from basic to advanced.

Understanding MozReview’s (and Review Board’s) models

Some users find MozReview very opaque. They aren’t sure what many of the buttons and widgets do, and, in general, are confused by the interface. This caught us a little off-guard but, in retrospect, is understandable. Review Board is a big change from Splinter and much more complex. I believe one of the sources of most confusion is the overall review model, with its various states, views, entry points, and exit points. Splinter has the concept of a review in progress, but it is a lot simpler.

We also had to add the concept of a series of related commits to Review Board, which on its own has essentially a patch-based model, similar to Splinter’s, that’s too limited to build on. The relationship between a parent review request and the individual “child” commits is the source of a lot of bewilderment.

Improving the overall user experience of performing a review is a top priority for the next quarter. I’ll explore the combination of the complexity of Review Board and the commit-series model we added in a follow-up post.

Inconveniences and lack of clarity around some features

For users who are generally satisfied by MozReview, at least enough to use it without getting too frustrated, there are a number of paper cuts and limitations that can be worked around but generate some annoyance. This is an area we knew we were going to have to improve. We don’t yet have parity with Splinter/Bugzilla attachments; e.g. reviewers can’t delegate review requests, nor can they mark specific files as reviewed. There are other areas where we can go beyond Bugzilla, such as being able to land parts of a commit series (this is technically possible in Bugzilla by having separate patches, but it’s difficult to track). And there are specific things that Review Board has that aren’t as useful for us as they could be, like the dashboard.

This will also be a big part of the work in the next quarter (at least).

Inability to use MozReview at all due to technological limitations

The single biggest item here is lack of support for git, particularly a git interface for hg repos like mozilla-central. There are many people interested in using MozReview, but their workflows are based around git using git-cinnabar. gps and kanru did some initial work around this in bug 1153053; fleshing this out into a proper solution isn’t a small task, but it seems clear that we’ll have to finish it regardless before too long, if we want MozReview to be the central code-review tool at Mozilla. We’re still trying to decide how this fits into the above priorities; more users is good, but making existing users happier is as well.
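
For context, git-cinnabar provides a git remote helper that speaks Mercurial, so a git-based workflow against an hg repository looks roughly like this (a sketch, assuming git-cinnabar is installed and on your PATH):

    # Clone an hg repository through git-cinnabar's hg:: remote helper.
    git clone hg::https://hg.mozilla.org/mozilla-central gecko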

Big-ticket items

As mentioned at the beginning of this post, the main reason we’re building a new review tool is to make it repository-centric, that is, based around commits, not isolated patches. This makes a lot of long-desired tools and features much more feasible, including autoland, automatic static analysis, commit rewriting to automatically include metadata like reviewers, and a bunch of other things.

This has been a big focus for the last few months. We’ve had autoland-to-try for a little while now, and autoland-to-inbound is nearly complete. We have a generic library for static analysis with which we’ll be able to build various review bots. And, of course, there’s the one big feature we started with: the ability to push commits to MozReview instead of exporting standalone patches, which by itself is both more convenient and preserves more information.

After autoland-to-inbound we’ll be putting aside other big features for a little while to concentrate on general user experience so that people enjoy using MozReview, but rest assured we’ll be back here to build more powerful workflows for everyone.

Categories: Mozilla-nl planet

Air Mozilla: Product Coordination Meeting

Wed, 30/09/2015 - 20:00

Product Coordination Meeting Duration: 10 minutes This is a weekly status meeting, every Wednesday, that helps coordinate the shipping of our products (across 4 release channels) in order...

Categories: Mozilla-nl planet

Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 29

Wed, 30/09/2015 - 19:00

The Joy of Coding (mconley livehacks on Firefox) - Episode 29 Watch mconley livehack on Firefox Desktop bugs!

Categories: Mozilla-nl planet

Christian Heilmann: Of impostor syndrome and running in circles (part 3)

Wed, 30/09/2015 - 16:15

These are the notes of my talk at SmartWebConf in Romania. Part 1 covered how Impostor Syndrome cripples us in using what we hear about at conferences. It covered how our training and onboarding focus on coding instead of human traits. In Part 2 I showed how many great things browsers do for us that we don’t seem to appreciate. In this final part I’ll explain why this is and why it is a bad idea. This is a call to action to make yourself feel better, and to get more involved without feeling inferior to others.

This is part 3 of 3.

Part 2 of this series ended with the explanation that JavaScript is not fault tolerant, and yet we rely on it for almost everything we do. The reason is that we want to control the outcome of our work. It feels dangerous to rely on a browser to do our work for us. It feels great to be in full control. We feel powerful being able to tweak things to the tiniest detail.

JavaScript is the duct-tape of the web

There is no doubt that JavaScript is the duct-tape of the web: you can fix everything with it. It is a programming language, not a descriptive language or markup. We have all kinds of logical constructs to write our solutions in. This is important. We seem to crave programmatic access to the things we work with. That explains the rise of CSS preprocessors like Sass. These turn CSS into JavaScript. Lately, PostCSS goes even further in merging these languages and ideas. We like detailed access. At the same time we complain about complexity.

No matter what we do, the problem remains that on the client side JavaScript is unreliable, because it is fault intolerant. Any single error – even those not caused by you – results in our end users getting an empty screen instead of the solution they came for. There are many ways JavaScript can fail. Stuart Langridge maintains a great flow chart on that, called “Everyone has JavaScript, right?“.

There is a bigger issue with fixing browser issues with JavaScript. It makes you responsible and accountable for things browsers do. You took it upon yourself to fix the web; now it is your job to keep doing that – for ever, and ever and ever…

Taking over things like page rendering, CSS support and page loading with JavaScript feels good as it fixes issues. Instead of a slow page load we can show a long spinner. This makes us feel good, but doesn’t help our end users much. Especially when the spinner has no timeout error case – like browser loading has.

Fixing a problem with JavaScript is fun. It looks simple enough and it removes an unknown browser support issue. It allows us to concentrate on building bigger and better things. And we don’t have to worry about browser issues.

It is an inspiring feeling to be the person who solved a web-wide issue. It boosts our ego to see that people rely on our brains to solve issues for them. It is great to see them become more effective and faster and free to build the next Facebook.

It gets less amazing when you want to move on and do something else, and when people have outrageous demands or abuse your system. Remy Sharp recently released a series of honest and important blog posts on that matter. “The toxic side of free” is a great and terrifying read.

Publishing something new in JavaScript as “open source” is easy these days. GitHub made it more or less a one-step process. And we get a free wiki, issue tracker and contribution process with it to boot. That, of course, doesn’t mean we can’t make this much more complex if we want to. And we do, as Eric Douglas explains.

open source is free as in puppy

Releasing software or solutions as open source is not the same as making it available for free. It is the start of a long conversation with users and contributors. And that comes with all the drama and confusion that is human interaction. Open Source is free as in puppy. It comes with responsibilities. Doing it wrong results in a badly behaving product and community around it.

Help stop people falling off the bleeding edge


If you embrace the idea that open source and publishing on the web is a team effort, you realise that there is no need to be on the bleeding edge. On the contrary – any “out there” idea needs a group of peers to review and use it to get data on how sensible the idea really is. We tend to skip that part all too often. Instead of giving feedback or contributing to a solution, we discard it and build our own. This means all we have to do is deal with code and not people. It also means we pile on to the already unloved and unused “amazing solutions” for problems of the past that litter the web.

The average web page is 2MB, with over 100 HTTP requests. The bulk of this is images, but there is also a lot of magical JS and CSS in the mix.

If we consider that the next growth of the internet is not in the countries we are in, but in emerging places with shaky connectivity, our course of action should be clear: clean up the web.

Of course we need to innovate and enhance our web technology stack. At the same time it is important to understand that the web is an unprecedented software environment. It is not only about what we put in, it is also about what we can’t afford to lose. And the biggest part of this is universal access. That also means it needs to remain easy to turn from consumer into creator on the web.

If you watch talks about internet usage in emerging countries, you’ll learn about amazing numbers and growth predictions.

You also learn about us not being able to control what end users see. Many of our JS solutions will get stripped out. Many of our beautiful, crafted pictures optimised into a blurry mess. And that’s great. It means the users of the web of tomorrow are as empowered as we were when we stood up and fought against browser monopolies.

So there you have it: you don’t have to be the inventor of the next NPM module to solve all our issues. You can be, but it shouldn’t make you feel bad that you’re not quite interested in doing so. As Bill Watterson of Calvin and Hobbes fame put it:

We all have different desires and needs, but if we don’t discover what we want from ourselves and what we stand for, we will live passively and unfulfilled.

So, be active. Don’t feel intimidated by how clever other people appear to be. Don’t be discouraged if you don’t get thousands of followers and GitHub stars. Find what you can do, how you can help the merging of bleeding edge technologies and what goes into our products. Above all – help the web get leaner and simpler again. This used to be a playground for us all – not only for the kids with the fancy toys.

You do that by talking to the messy kids. Those who build too complex and big solutions for simple problems. Those doing that because clever people told them they have to use all these tools to build them. The people on the bleeding edge are too busy to do that. You can. And I promise, by taking up teaching you end up learning.

Categories: Mozilla-nl planet

John Ford: Taskcluster Component Loader

Wed, 30/09/2015 - 14:56
Taskcluster is the new platform for building automation at Mozilla.  One of the coolest design decisions is that it's composed of a bunch of limited-scope, interchangeable services that have well-defined and enforced APIs.  Examples of services are the Queue, Scheduler, Provisioner and Index.  In practice, the server-side components roughly map to a Heroku app.  Each app can have one or more web worker processes and zero or more background workers.

Since we're building our services with the same base libraries we end up having a lot of duplicated glue code.  During a set of meetings in Berlin, Jonas and I were lamenting about how much copied, pasted and modified boilerplate was in our projects.

Between the API definition file and the command line to launch a program invariably sits a bin/server.js file for each service.  This script basically loads up our config system, loads our Azure Entity library, loads a Pulse publisher, a JSON Schema validator and a Taskcluster-base App.  Each background worker has its own bin/something.js which basically has a very similar loop.  Services with unit tests have a test/helper.js file which initializes the various components for testing.  Furthermore, we might have things initialize inside of a given before() or beforeEach().

The problem with having so much boilerplate is twofold.  First, each time we modify one service's boilerplate, we add maintenance complexity and risk because of the subtle differences from the other services.  We'd eventually end up with hundreds of glue files which do roughly the same thing but accomplish it completely differently depending on which service they're in.  The second problem is that within a single project, we might load the same component ten ways in ten places, including in tests.  Having a single codepath that we can test ensures that we're always initializing the components properly.

During a little downtime between sessions, Jonas and I came up with the idea to have a standard component loading system for taskcluster services.  Being able to rapidly iterate and discuss in person made the design go very smoothly and in the end, we were able to design something we were both happy with in about an hour or so.

The design we took is to have two 'directories' of components.  One is the project wide set of components which has all the logic about how to build the complex things like validators and entities.  These components can optionally have dependencies.  In order to support different values for different environments, we force the main directory to declare which 'virtual dependencies' it requires.  They are declared as a list of strings.  The second level of component directory is where these 'virtual dependencies' have their value.

Both virtual and concrete dependencies can either be 'flat' values or objects.  If a dependency is a string, number, function, Promise or an object without a setup property, we just give that exact value back as a resolved Promise.  If the component is an object with a setup property, we initialize the dependencies specified by its 'requires' list property and pass those values as properties on an object to the function at the 'setup' property.  That function's return value is stored as a resolved Promise.  Components can only depend on other components' non-flat dependencies.

Using code is a good way to show how this loader works:

// lib/components.js

let loader = require('taskcluster-base').loader;
let fakeEntityLibrary = require('fake');

module.exports = loader({
  fakeEntity: {
    requires: ['connectionString'],
    setup: async deps => {
      let conStr = await deps.connectionString;
      return fakeEntityLibrary.create(conStr);
    },
  },
});

In this file, we're building a really simple component directory which only contains a contrived 'fakeEntity'.  This component depends on having a connection string to be fully configured.  Since we want to use this code in production, development and testing, we don't want to bake configuration into this file; instead, we force whatever uses this file to give us a way to configure the connection string itself.

// bin/server.js

let config = require('taskcluster-base').config('development');
let loader = require('../lib/components.js');

let load = loader({
  connectionString: config.entity.connectionString,
});

let configuredFakeEntity = await load('fakeEntity'); // (inside an async function)

In this file, we're providing a simple directory that satisfies the 'virtual' dependencies we know need to be fulfilled before initialization can happen.

Since we're creating a dependency tree, we want to avoid having cyclic dependencies.  I've implemented a cycle checker which ensures that you cannot configure a cyclical dependency.  It doesn't rely on the call stack being exceeded from infinite recursion either!
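
A checker like that can be sketched as a depth-first walk over the requires graph; this is just an illustration of the idea, not the actual implementation:

    function assertNoCycles(directory) {
      let visiting = new Set(); // components on the current walk path
      let done = new Set();     // components already proven acyclic

      let visit = (name, path) => {
        if (done.has(name)) {
          return;
        }
        if (visiting.has(name)) {
          throw new Error('cyclic dependency: ' + path.concat(name).join(' -> '));
        }
        visiting.add(name);
        let component = directory[name];
        let requires = (component && component.requires) || [];
        requires.forEach(dep => visit(dep, path.concat(name)));
        visiting.delete(name);
        done.add(name);
      };

      Object.keys(directory).forEach(name => visit(name, []));
    }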

This is far from being the only thing that we figured out improvements for during this chat.  Two other problems that we were able to talk through were splitting out taskcluster-base and having a background worker framework.

Currently, taskcluster-base is a monolithic library.  If you want our Entities at version 0.8.4, you must take our config at 0.8.4 and our rest system at 0.8.4.  This is great because it forces services to move all together.  This is also awful because sometimes we might need a new stats library but can't afford the time to upgrade a bunch of Entities.  It also means that if someone wants to hack on our stats module, they'll need to learn how to get our Entities unit tests to work to get a passing test run on their stats change.

Our plan here is to make taskcluster-base a 'meta-package' which depends on a set of taskcluster components that we support working together.  Each of the libraries (entities, stats, config, api) will be split out into its own package using git filter-branch to maintain history.  This is just a bit of simple leg work to ensure that the splitting out goes smoothly.
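
As a sketch of that leg work, extracting one library with its history could look something like this (the clone paths and the lib/entities subdirectory are made-up examples, not the real layout):

    # Clone the monolithic repo, then keep only the history touching one library.
    git clone taskcluster-base taskcluster-lib-entities
    cd taskcluster-lib-entities
    git filter-branch --prune-empty --subdirectory-filter lib/entities master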

The other thing we decided on was a standardized background looping framework.  A lot of background workers follow the pattern "do this thing, wait one minute, do this thing again".  Instead of each service implementing this in its own special way for each background worker, what we'd really like is a library which does all the looping magic itself.  We can even have nice things like a watchdog timer to ensure that the loop doesn't stick.
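
A minimal sketch of what such a looping helper could look like (hypothetical, not an existing library), with a watchdog that fails an iteration if the task hangs:

    async function loopForever(task, {intervalMs, watchdogMs}) {
      while (true) {
        let timer;
        let watchdog = new Promise((resolve, reject) => {
          timer = setTimeout(() => reject(new Error('watchdog expired')), watchdogMs);
        });
        try {
          // Fail this iteration loudly if task() takes longer than the watchdog allows.
          await Promise.race([task(), watchdog]);
        } finally {
          clearTimeout(timer); // disarm the watchdog once the task settles
        }
        // "do this thing, wait a while, do this thing again"
        await new Promise(resolve => setTimeout(resolve, intervalMs));
      }
    }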

Once the PR for the loader has landed, I'm going to convert the provisioner to use this new loader.  This is part of a new effort to make Taskcluster components easy to implement.  Once a bunch of these improvements have landed, I intend to write up a couple blog posts on how you can write your own Taskcluster service.
Categories: Mozilla-nl planet

Joel Maher: Say hi to Gabriel Machado – a newer contributor on the Perfherder project

Wed, 30/09/2015 - 11:52

Earlier this summer, I got an email from Gabriel asking how he could get involved in Automation and Tools projects at Mozilla.  This was really cool and I was excited to see Gabriel join the Mozilla community.  Gabriel is known as :goma on IRC, and based on his interests and the projects with mentors available, hacking on TreeHerder was a good fit.  Gabriel also worked on some test cases for the Firefox-UI-Tests.  You can see the list of bugs he has been involved with and check him out on Github.

While it is great to see a contributor become more comfortable in their programming skills, it is even better to get to know the people you are working with.  As I have done before, I would like to take a moment to introduce Gabriel and let him do the talking:

Tell us about where you live –

I lived in Durham-UK since last year, where I was doing an exchange program. About Durham I can say that is a lovely place, it has a nice weather, kind people, beautiful castles and a stunning “medieval ambient”.  Besides it is a student city, with several parties and cultural activities.

I moved back to Brazil 3 days ago, and next week I’ll move to the city of Ouro Preto to finish my undergrad course. Ouro Preto is another beautiful historical city, very similar to Durham in some sense. It is a small town with a good university and stunning landmarks. It’s a really great place, designated a world heritage site by UNESCO.

Tell us about your school –

In 2 weeks I’ll begin my third year in Computer Science at UFOP(Federal University of Ouro Preto). It is a really good place to study computer science, with several different research groups. In my second year I earned a scholarship from the Brazilian Government to study in the UK. So, I studied my second year at Durham University. Durham is a really great university, very traditional and it has a great infra-structure. Besides, they filmed several Harry Potter scenes there :P

Tell us about getting involved with Mozilla –

In 2014 I was looking for some open source project to contribute with when I found the Mozilla Contributing Guide. It is a really nice guide and helped me a lot. I worked on some minors bugs during the year. In July of 2015, as part of my scholarship to study in the UK, I was supposed to do a small final project and I decided to work with some open source project, instead of an academic research. I contacted jmaher by email and asked him about it. He answered me really kindly and guided me to contribute with the Treeherder. Since then, I’ve been working with the A-Team folks, working with Treeherder and Firefox-Ui-Tests.

I think Mozilla does a really nice job helping new contributors, even the new ones without experience like me. I used to think that I should be a great hacker, with tons of programming knowledge to contribute with an open source project. Now, I think that contributing with an open source project is a nice way to become a great hacker with tons of programming knowledge

Tell us what you enjoy doing –

I really enjoy computers. Usually I spent my spare time testing new operating systems, window managers or improving my Vim. Apart from that, I love music. Specially instrumental.  I play guitar, bass, harmonica and drums and I really love composing songs. You can listen some of my instrumental musics here:

Besides, I love travelling and meeting people from different cultures. I really like talking about small particularities of different languages.

Where do you see yourself in 5 years?

I hope be a software engineer, working with great and interesting problems and contributing for a better (and free) Internet.

If somebody asked you for advice about life, what would you say?

Peace and happiness comes from within, do not seek it without.

Please say hi to :goma on irc in #treeherder or #ateam.

Categories: Mozilla-nl planet

Byron Jones: happy bmo push day!

Wed, 30/09/2015 - 09:42

the following changes have been pushed to

  • [1207926] change treeherder@bots.tld to orangefactor@bots.tld
  • [1204683] Add whoami endpoint
  • [1208135] security not being mailed when bugs change core-security-release state
  • [1199090] add printable recovery 2fa codes
  • [1204623] timestamp on flags should reference the latest updated activity, not the first
  • [1209745] Update get_permissions.html.tmpl to reflect new self-canconfirm process

discuss these changes on

Filed under: bmo, mozilla
Categories: Mozilla-nl planet

Daniel Stenberg: libbrotli is brotli in lib form

Wed, 30/09/2015 - 08:20

Brotli is this new cool compression algorithm that Firefox now has support for in Content-Encoding, that Chrome will support soon too, and that Eric Lawrence wrote up this nice summary about.

So I’d love to see brotli supported as a Content-Encoding in curl too, and then we just basically have to write some conditional code to detect the brotli library, add the adaption code for it and we should be in a good position. But…

There is (was) no brotli library!

It turns out the brotli team just writes their code to be linked with their tools, without making any library or making it easy to install and use for third-party applications.

We can’t have it like that! I rolled up my imaginary sleeves (imaginary since my swag tshirt doesn’t really have sleeves) and I now offer libbrotli to the world. It is just a bunch of files and a build system that sucks in the brotli upstream repo as a submodule and then builds a decoder library (brotlidec) and an encoder library (brotlienc) out of them. So there’s no code of our own here. Just building on top of the great stuff done by others.

It’s not complicated. It’s nothing fancy. But you can configure, make and make install two libraries and I can now go on and write a curl adaption for this library so that we can get brotli support for it done. Ideally, this (making a library) is something the brotli project will do on their own at some point, but until they do I don’t mind handling this.
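
Concretely, trying it out should look roughly like this (a sketch; the repository URL and the autogen step are assumptions on my part, not documented instructions):

    # Fetch libbrotli and the brotli upstream code it wraps as a submodule.
    git clone --recursive https://github.com/bagder/libbrotli
    cd libbrotli
    ./autogen.sh        # assumed autotools bootstrap step
    ./configure
    make
    sudo make install   # installs the brotlidec and brotlienc libraries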

As always, dive in and try it out, file any issues you find and send us your pull-requests for everything you can help us out with!

Categories: Mozilla-nl planet

Sara Haghdoosti: A bridge between learning and advocacy

Tue, 29/09/2015 - 19:52

How do we bring together learning and advocacy? I wanted to start this conversation by giving examples of campaigns I’ve worked on in the past.

Case Study 1: How much do you know about Iran quiz


This campaign happened at an organization whose name means ‘let’s go’ in Farsi and whose mission was to support Iranian social innovators. We were running a campaign to avoid war with Iran. What we found was that we were getting a lot of emails from our members who were confusing Iran with other groups or countries around the world. For example, our members would write to us and say they wouldn’t take action anymore until women in Iran were allowed to drive. It was a clear case of misinformation, as women in Saudi Arabia are restricted from driving, not women in Iran.


Our goal was to be able to shatter misinformation about Iran in a way that was fun and that didn’t make our members – 90% of whom were not from an Iranian background – defensive.

The result:

As a result we launched a 10-question buzzfeed-style quiz titled ‘how much do you know about Iran?’ We sent the quiz to our email list of 70,000+ people. The quiz was taken about 20,000 times and had a completion rate close to 90%. After the success of the quiz in our network, it came to the notice of Upworthy, who shared the quiz, and it then reached over 100,000 people.

The majority of people got about 50% of the questions in the quiz right. This was intentional, as we wanted our members to be challenged and realize that some of their assumptions were wrong. Our assumption was that doing so in the format of a fun quiz would be less confronting, and more likely to sink in than doing it through a ‘mythbusting checklist.’ The feedback we received from members tended to indicate we were right:

Great!  Offerings like this which let us see the people and culture and humanity of Iran are the way.  Thanks! – Pat

Sara, very nice introduction to Iran. It’s a good way to begin the process to break down barriers.  -Dan

Great quiz…I missed two.  Sent it on to about 30 other folks. – Bob

The quiz was a great way to stimulate my thinking about Iran and correcting my misconceptions and adding to my knowledge about its products, way of life, historical events, etc.  Would love to see more.  – Sondra

How does this relate to our work at Mozilla?

I wanted to use this case study to illustrate one tactic for using online organizing to educate people at scale. Right now we’re only conceptualizing the advocacy list as a way to mobilize people around legislative action – but in order to really build relationships and deep connections we can’t just ask people to take action; we need to think about how we can serve our community.

That’s why I think it would be great for us to come together and create some learning goals for our list. For example – after a year of being on the list, do we want people to be able to articulate what net neutrality is? Do we want them to know 5 things they can do to secure their privacy?

While the strategy and benchmarks are something we need to develop together, here are some tactical ideas to illustrate the potential of this collaboration:

- Sending people a list of fun facts/anecdotes that relate to the open web that they can talk about with their families during Thanksgiving

- Creating a list of gift ideas that will help people learn more about the open web during the holiday season.

- Running a campaign to ask people to make privacy a new year’s resolution and creating small things they can do each week to realize that resolution.

Categories: Mozilla-nl planet

William Lachance: Perfherder summer of contribution thoughts

Tue, 29/09/2015 - 17:37

A few months ago, Joel Maher announced the Perfherder summer of contribution. We wrapped things up there a few weeks ago, so I guess it’s about time I wrote up a bit about how things went.

As a reminder, the idea of summer of contribution was to give a set of contributors the opportunity to make a substantial contribution to a project we were working on (in this case, the Perfherder performance sheriffing system). We would ask that they sign up to do 5-10 hours of work a week for at least 8 weeks. In return, Joel and myself would make ourselves available as mentors to answer questions about the project whenever they ran into trouble.

To get things rolling, I split off a bunch of work that we felt would be reasonable to do by a contributor into bugs of varying difficulty levels (assigning them a bugzilla whiteboard tag ateam-summer-of-contribution). When someone first expressed interest in working on the project, I’d assign them a relatively easy front end one, just to cover the basics of working with the project (checking out code, making a change, submitting a PR to github). Then I’d assign them slightly harder or more complex tasks which dealt with other parts of the codebase, the nature of which depended on what they wanted to learn more about. Perfherder essentially has two components: a data storage and analysis backend written in Python and Django, and a web-based frontend written in JS and Angular. There was (still is) lots to do on both, which gave contributors lots of choice.

This system worked pretty well for attracting people. I think we got at least 5 people interested and contributing useful patches within the first couple of weeks. In general I think onboarding went well. Having good documentation for Perfherder / Treeherder on the wiki certainly helped. We had lots of the usual problems getting people familiar with git and submitting proper pull requests: we use a somewhat clumsy combination of bugzilla and github to manage treeherder issues (we “attach” PRs to bugs as plaintext), which can be a bit offputting to newcomers. But once they got past these issues, things went relatively smoothly.

A few weeks in, I set up a fortnightly skype call for people to join and update status and ask questions. This proved to be quite useful: it let me and Joel articulate the higher-level vision for the project to people (which can be difficult to summarize in text) but more importantly it was also a great opportunity for people to ask questions and raise concerns about the project in a free-form, high-bandwidth environment. In general I’m not a big fan of meetings (especially status report meetings) but I think these were pretty useful. Being able to hear someone else’s voice definitely goes a long way to establishing trust that you just can’t get in the same way over email and irc.

I think our biggest challenge was retention. Due to (understandable) time commitments and constraints only one person (Mike Ling) was really able to stick with it until the end. Still, I’m pretty happy with that success rate: if you stop and think about it, even a 10-hour a week time investment is a fair bit to ask. Some of the people who didn’t quite make it were quite awesome, I hope they come back some day. :)

On that note, a special thanks to Mike Ling for sticking with us this long (he’s still around and doing useful things long after the program ended). He’s done some really fantastic work inside Perfherder and the project is much better for it. I think my two favorite features that he wrote up are the improved test chooser which I talked about a few months ago and a get related platform / branch feature which is a big time saver when trying to determine when a performance regression was first introduced.

I took the time to do a short email interview with him last week. Here’s what he had to say:

1. Tell us a little bit about yourself. Where do you live? What is it you do when not contributing to Perfherder?

I’m a postgraduate student of NanChang HangKong university in China whose major is Internet of things. Actually,there are a lot of things I would like to do when I am AFK, play basketball, video game, reading books and listening music, just name it ; )

2. How did you find out about the ateam summer of contribution program?

well, I remember when I still a new comer of treeherder, I totally don’t know how to start my contribution. So, I just go to treeherder irc and ask for advice. As I recall, emorley and jfrench talk with me and give me a lot of hits. Then Will (wlach) send me an Email about ateam summer of contribution and perfherder. He told me it’s a good opportunity to learn more about treeherder and how to work like a team! I almost jump out of bed (I receive that email just before get asleep) and reply with YES. Thank you Will!

3. What did you find most challenging in the summer of contribution?

I think the most challenging thing is I not only need to know how to code but also need to know how treeherder actually work. It’s a awesome project and there are a ton of things I haven’t heard before (i.e T-test, regression). So I still have a long way to go before I familiar with it.

4. What advice would give you to future ateam contributors?

The only thing you need to do is bring your question to irc and ask. Do not hesitate to ask for help if you need it! All the people in here are nice and willing to help. Enjoy it!

Categories: Mozilla-nl planet

Ehsan Akhgari: My experience adding a new build type using TaskCluster

Tue, 29/09/2015 - 17:05

TaskCluster is Mozilla’s task queuing, scheduling and execution service.  It allows the user to schedule a DAG representing a task graph that describes some tasks and their dependencies and how to execute them; it then schedules them to run in the needed order on a number of slave machines.

As of a while ago, some of the continuous integration tasks have been running on TaskCluster, and I recently set out to enable static analysis optimized builds on Linux64 on top of TaskCluster.  I had previously added a similar job for debug builds on OS X in buildbot, and I am amazed at how much the experience has improved!  It is truly easy to add a new type of job now as a developer without being familiar with buildbot or anything like that.  I’m writing this post to share my experience of how I did this.

The process of scheduling jobs in TaskCluster starts with a slave downloading a specific revision of a tree and running the ./mach taskcluster-graph command to generate a task graph definition.  This is what happens in the “gecko-decision” jobs that you can see on TreeHerder.  The mentioned task graph is computed using the task definition information in testing/taskcluster.  All of the definitions are in YAML, and I found the naming of variables relatively easy to understand.  The build definitions are located in testing/taskcluster/tasks/builds and after some poking around, I found linux64_clobber.yml.

If you look closely at that file, a lot of things are clear from the names.  Here are important things that this file defines:

  • $inherits: These files have an single inheritance structure that allows you to refactor the common functionality into “base” definitions.
  • A lot of things have “linux64” in their name.  This gave me a good starting point when I was trying to add a “linux64-st-an” (a made-up name) build by copying the existing definition.
  • payload.image contains the name of the docker image that this build runs.  This is handy to know if you want to run the build locally (yes, you can do that!).
  • It points to builds/ which contains the actual build definition.

Looking at the build definition file, you will find the steps run in the build, whether the build should trigger unit tests or Talos jobs, the environment variables used during the build, and most importantly the mozconfig and tooltool manifest paths.  (In case you’re not familiar with Tooltool, it lets you upload your own tools to be used during build time.  These can be new experimental toolchains, custom programs your build needs to run, etc., which is useful for things such as performing actions on the build outputs.)
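
To give a rough idea of the shape of these files, here is a hypothetical, heavily trimmed sketch of a build definition; every file name and value below is made up for illustration, not copied from the real linux64_clobber.yml:

    # Hypothetical sketch of a build task definition; names and values are illustrative.
    $inherits:
      from: 'tasks/builds/base_linux64.yml'    # shared "base" definition (assumed name)
    task:
      payload:
        image: 'quay.io/mozilla/desktop-build:0.1.0'   # docker image the build runs in
        env:
          MOZCONFIG: 'browser/config/mozconfigs/linux64/debug-static-analysis'   # assumed path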

This basically gave me everything I needed to define my new build type, and I did that in bug 1203390, and these builds are now visible on TreeHerder as “[Tier-2](S)” on Linux64.  This is the gist of what I came up with.


I think this is really powerful since it finally allows you to fully control what happens in a job.  For example, you can use this to create new build/test types on TreeHerder, do try pushes that test changes to the environment a job runs in, or do highly custom tasks such as creating code coverage results, which requires a custom build step, custom test steps, and uploading of custom artifacts!  Doing this under the old BuildBot system was unheard of.  Even if you went out of your way to learn how to do that, as I understand it, there was a maximum number of build types that we were getting close to, which would have prevented us from adding new job types as needed!  And it was much, much harder to iterate on (as I did when I was working on this on the try server, bootstrapping a whole new build type!) as your changes to BuildBot configs needed to be manually deployed.

Another thing to note is that I found out all of the above pretty much by myself, and didn’t even have to learn every bit of what I encountered in the files that I copied and repurposed!  This was extremely straightforward.  I’m already on my way to adding another build type (using Ted’s bleeding edge Linux to OS X cross compiling support)!  I did hit hurdles along the way, but almost none of them were related to TaskCluster, and with the few that were, I was shooting myself in the foot and Dustin quickly helped me out.  (Thanks, Dustin!)

Another neat feature of TaskCluster is the inspector tool.  In TreeHerder, you can click on a TaskCluster job, go to Job Details, and click on “Inspect Task”.  You’ll see a page like this.  In that tool you can do a number of neat things.  One is that it shows you a “live.log” file, which is the live log of what the slave is doing.  This means that you can see what’s happening in close to real time, without having to wait for the whole job to finish before you can inspect the log.  Another neat feature is the “Run locally” commands that show you how to run the job in a local docker container.  That will allow you to reproduce the exact same environment as the ones we use on the infrastructure.

I highly encourage people to start thinking about the ways they can harness this power.  I look forward to seeing what we’ll come up with!

Categories: Mozilla-nl planet

Air Mozilla: Martes mozilleros

Tue, 29/09/2015 - 17:00

Martes mozilleros A bi-weekly meeting to talk about the state of Mozilla, the community and its projects.

Categories: Mozilla-nl planet

Daniel Stenberg: daniel weekly 42, switching off Nagle

Tue, 29/09/2015 - 10:45


See you at ApacheCon on Friday!


14% HTTP/2 thanks to nginx?

Brotli everywhere! Firefox, libbrotli

The --libcurl flaw is fixed (and it was GONE from github for a few hours)

http2 explained in Swedish

No, the cheat sheet cannot be in the man page. But…

bug of the week: the http/2 performance fix


option of the week: -k

Talking at the GOTO Conference next week

Categories: Mozilla-nl planet

Emily Dunham: Carrying credentials between environments

Tue, 29/09/2015 - 09:00
Carrying credentials between environments

This scenario is simplified for purposes of demonstration.

I have 3 machines: A, B, and C. A is my laptop, B is a bastion, and C is a server that I only access through the bastion.

I use an SSH keypair helpfully named AB to get from me@A to me@B. On B, I su to user. I then use an SSH keypair named BC to get from user@B to user@C.

I do not wish to store the BC private key on host B.

SSH Agent Forwarding

I have keys AB and BC on host A, where I start. Host A is running ssh-agent, which is installed by default on most Linux distributions.

me@A$ ssh-add ~/.ssh/AB   # Add keypair AB to ssh-agent's keychain
me@A$ ssh-add ~/.ssh/BC   # Add keypair BC to the keychain
me@A$ ssh -A me@B         # Forward my ssh-agent

Now I’m logged into host B and have access to the AB and BC keypairs. An attacker who gains access to B after I log out will have no way to steal the BC keypair, unlike what would happen if that keypair was stored on B.

See here for pretty pictures explaining in more detail how agent forwarding works.

Anyways, I could now ssh me@C with no problem. But if I sudo su user, my agent is no longer forwarded, so I can’t then use the key that I added back on A!

Switch user while preserving environment variables

me@B$ sudo -E su user
user@B$ sudo -E ssh user@C

What?

The -E flag to sudo preserves the environment variables of the user you’re logged in as. ssh-agent uses a socket whose name is of the form /tmp/ssh-AbCdE/agent.12345 to call back to host A when it’s time to do the handshake involving key BC, and the socket’s name is stored in me‘s SSH_AUTH_SOCK environment variable. So by telling sudo to preserve environment variables when switching user, we allow user to pass ssh handshake stuff back to A, where the BC key is available.

Why is sudo -E required to ssh to C? Because the directory /tmp/ssh-AbCdE is owned by me:me, and only its owner may read, write, or execute it. Additionally, the socket itself (agent.12345) is owned by me:me and is not writable by others.

If you must run ssh on B without sudo, use chown -R to give /tmp/ssh-AbCdE to the user who needs to end up using the socket. Making it world read/writable would allow any user on the system to use any key currently added to the ssh-agent on A, which is a terrible idea.

For what it’s worth, the actual value of /tmp/ssh-AbCdE/agent.12345 is available at any time in this workflow as the result of printenv | grep SSH_AUTH_SOCK | cut -f2 -d =.

The Catch

Did you see what just happened there? An arbitrary user with sudo on B just gained access to all the keys added to ssh-agent on A. Simon pointed out that the right way to address this issue is to use ProxyCommand instead of agent forwarding.

No, I really don’t want my keys accessible on B

See man ssh_config for more of the details on ProxyCommand. In ~/.ssh/config on A, I can put:

Host B
    User me
    Hostname 111.222.333.444

Host C
    User user
    Hostname 222.333.444.555
    Port 2222
    ProxyCommand ssh -q -W %h:%p B

So then, on A, I can ssh C and be forwarded through B transparently.

Categories: Mozilla-nl planet

George Roter: Participation Team: Getting organized and focused

Tue, 29/09/2015 - 02:26

The Participation Team was created back in January of this year with an ambitious mandate: to simultaneously a) get more impact, for Mozilla’s mission and its volunteers, from the core contributor participation methods we’re using today, and b) find and develop new ways that participation can work at Mozilla.

This mandate stands on the shoulders of the people and teams who led this work around Mozilla in the past, including the Community Building Team. In contrast with these past approaches, our team concentrates staff from around Mozilla, has a dedicated budget, and has the strong support of leadership, reporting to Mitchell Baker (the Executive Chair) and Mark Surman (CEO of the foundation).

For the first half of the year, our approach was to work with and learn from many different teams throughout Mozilla. From Dhaka to Dakar — and everywhere in between — we supported teams and volunteers around the world to increase their effectiveness. From MarketPulse to the Webmaker App launches we worked with different teams within Mozilla to test new approaches to building participation, including testing out what community education could look like. Over this time we talked with/interviewed over 150 staff around Mozilla, generated 40+ tangible participation ideas we’d want to test, and provided “design for participation” consulting sessions with 20+ teams during the Whistler all-hands.

Toward the end of July, we took stock of where we were. We established a set of themes for the rest of 2015 (and maybe beyond), focused especially on enabling Mozilla’s Core Contributors, and I put in place a new team structure.

  • Focus  – We will partner with a small number of functional teams and work disproportionately with a small number of communities. We will commit to these teams and communities for longer and go deeper.
  • Learning – We’re continuing the work of the Participation Lab, having both focused experiments and paying attention to the new approaches to participation being tested by staff and volunteer Mozillians all around the organization. The emphasis will be on synthesizing lessons about high impact participation, and helping those lessons be applied throughout Mozilla.
  • Open and Effective – We’re investing in improving how we work as a team and our individual skills. A big part of this is building on the agile “heartbeat” method innovated by the foundation, powered by GitHub. Another part of this is solidifying our participation technology group and starting to play a role of aligning similar participation technologies around Mozilla.

You can see these themes reflected in our Q3 Objectives and Key Results.

Team structure:

The Participation Team is focused on activating, growing and increasing the effectiveness of our community of core contributors. Our modified team structure has 5 areas/groups, each with a Lead and a bottom-line accountability. You’ll note that all of these team members are staff — our aim in the coming months is to integrate core contributors into this structure, including existing leadership structures like the ReMo Council.

Participation Partners
Leads: William Quiviger, Brian King
Bottom line: Catalyze participation with product and functional teams to deliver and sustain impact

Global-Local Organizing
Leads: Rosana Ardila, Ruben Martin, Guillermo Movia, Konstantina Papadea, Francisco Picolini
Bottom line: Grow the capacity of Mozilla’s communities to engage volunteers and have impact (includes Reps and Regional Communities)

Developing Leaders
Leads: George Roter (acting), Emma Irwin
Bottom line: Grow the capacity of Mozilla’s volunteer leaders and volunteers to have impact

Participation Technology
Leads: Pierros Papadeas, Nemo Giannelos, Tasos Katsoulas, Nikos Roussos
Bottom line: Enable large scale, high impact participation at Mozilla through technology

Performance and Learning
Lead: Lucy Harris
Bottom line: Develop a high performing team, and drive learning and synthesize best practice through the Participation Lab

We have also established a Leadership and Strategy group accountable for:

  • Making decisions on team objectives, priorities and resourcing
  • Nurturing a culture of high performance through standard setting and role modelling

This is made up of Rosana Ardila, Lucy Harris, Brian King, Pierros Papadeas, William Quiviger and myself.


As always, I’m excited to hear your feedback on any of this — it is most certainly a work in progress. We also need your help:

  • If you’re a staff/functional team or volunteer team trying something new with participation, please get in touch!
  • If you’re a core contributor/volunteer, take a look at these volunteer tasks.
  • If you have ideas on what the team’s priorities should be over the coming quarter(s), please send me an email.

As always, feel free to reach out to any member of the team; find us on IRC at #participation; follow along with what we’re doing on the blog and by following @MozParticipate on Twitter; have a conversation on Discourse; or follow/jump into any issues on GitHub.

Categories: Mozilla-nl planet

The Servo Blog: This Week In Servo 35

Mon, 28/09/2015 - 22:30

In the last week, we landed 37 PRs in the Servo repository!

In addition to a rustup by Manish and a lot of great cleanup, we also saw:

New Contributors

Screenshots

Servo on Windows! Courtesy of Vladimir Vukicevic.

Servo on Windows

Text shaping improvements in Servo:

Text shaping improvements


At last week’s meeting, we discussed the outcomes from the Paris layout meetup, how to approach submodule updates, and trying to reduce the horrible enlistment experience with downloading Skia.

Categories: Mozilla-nl planet