Mozilla Nederland - The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Pascal Finette: The Open Source Disruption

Tue, 28/10/2014 - 16:52

Yesterday I gave a talk at Singularity University’s Executive Program on Open Source Disruption - it’s (somewhat) new content I developed; here’s the abstract of my talk:

The Open Source movement has upended the software world: Democratizing access, bringing billion dollar industries to their knees, toppling giants and simultaneously creating vast opportunities for the brave and unconventional. After decades in the making, the Open Source ideology, being kindled by ever cheaper and better technologies, is spreading like wildfire - and has the potential to disrupt many industries.

In his talk, Pascal will take you on a journey from the humble beginnings to the end of software as we knew it. He will make a case for why Open Source is an unstoppable force and present you with strategies and tactics to thrive in this brave new world.

And here’s the deck.


Yunier José Sosa Vázquez: Add-on SDK 1.17 Available

Tue, 28/10/2014 - 13:10

A new version of the Add-on SDK, the tool created by Mozilla for developing add-ons, has been released and is available from our website.

Download Add-on SDK 1.17.

This release brings no big news or new features for the Add-on SDK; its main goals are updating the cfx command and making extensions compatible with the newer versions of Firefox (32+).

The biggest change to the Add-on SDK will arrive in the next version, when cfx is retired in favor of JPM (Jetpack Manager), a Node.js module. According to Mozilla's developers, packaging the dependencies into each add-on was very complex with cfx; JPM is simpler because it drops some of the tasks that cfx used to perform.

JPM will also let add-on developers create and use npm modules as dependencies in their add-ons. In this article published on the Developers site you can learn how to work with JPM and which changes you need to make to your add-on.
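
As a rough sketch of the new workflow: jpm is installed through npm and then run from the add-on's directory. The commands below are from memory and the directory name is made up, so check the article linked above for the authoritative steps.

npm install --global jpm   # install the Jetpack Manager from npm
cd my-addon                # your add-on's source directory (hypothetical)
jpm init                   # scaffold package.json and a skeleton add-on
jpm run                    # launch Firefox with the add-on installed
jpm xpi                    # package the add-on as an .xpi file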

If you are interested in creating add-ons for Firefox, you can visit our Developers site and dig deeper into the subject. There you will find presentations, workshops and articles covering this topic.

Before downloading the Add-on SDK 1.17, remember that you can contribute to improving it by reporting bugs, reviewing the code and contributing your solutions, or simply sharing your impressions of this new version.


Will Kahn-Greene: Input: Removing the frontpage chart

Tue, 28/10/2014 - 11:00

I've been working on Input for a while now. One of the things I've actively disliked was the chart on the front page. This blog post talks about why I loathe it and then what's happening this week.

First, here's the front page dashboard as it is today:

Input front page dashboard (October 2014)

When I started, Input gathered feedback solely on the Firefox desktop web browser. It was a one-product feedback gathering site. Because it was gathering feedback for a single product, the front page dashboard was entirely about that single product. All the feedback talked about that product. The happy/sad chart was about that product. Today, Input gathers feedback for a variety of products.

When I started, it was nice to have a general happy/sad chart on the front page because hardly anyone looked at it, and the people who did understood why the chart slants so negatively; they knew about the heavy negative bias and could read the chart with that in mind. Today, Input is viewed by a variety of people who have no idea how feedback on Input works or why it's so negatively biased.

When I started, Input didn't expose the data in helpful ways allowing people to build their own charts and dashboards to answer their specific questions. Thus there was a need for a dashboard to expose information from the data Input was gathering. I contend that the front page dashboard did this exceedingly poorly--what do the happy/sad lines actually mean? If they dip, what does that mean? If they spike, what does that mean? There's not enough information in the chart to draw any helpful conclusions. Today, Input has an API allowing anyone to fetch data from Input in JSON format and generate their own dashboards, of which there are several out there.
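
As a rough illustration, fetching recent feedback might look like the following; the endpoint path and query parameter here are my guesses from memory, so consult the Input API documentation for the real interface:

curl "https://input.mozilla.org/api/v1/feedback/?product=Firefox" \
     -H "Accept: application/json"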

When I started, Input received some spam/abuse feedback, but the noise was far outweighed by the signal. Today, we get a ton of spam/abuse feedback. We still have no good way of categorizing spam/abuse as such and removing it from the system. That's something I want to work on more, but haven't had time to address. In the meantime, the front page dashboard chart has a lot of spammy noise in it. Thus the happy/sad lines aren't accurate.

Thus I argue we've far outlived the usefulness of the chart on the front page and it's time for it to go away.

So, what happens now? Bug 1080816 covers removing the front page dashboard chart. It covers some other changes to the front page, but I think I'm going to push those off until later since they're all pretty "up in the air".

If you depend on the front page dashboard chart, toss me an email. Depending on how many people depend on the front page chart and what the precise needs are, maybe we'll look into writing a better one.


Byron Jones: happy bmo push day!

Tue, 28/10/2014 - 07:00

the following changes have been pushed to bugzilla.mozilla.org:

  • [1083790] Default version does not take into account is_active
  • [1078314] Missing links and broken unicode characters in some bugmail
  • [1072662] decrease the number of messages a jobqueue worker will process before terminating
  • [1086912] Fix BugUserLastVisit->get
  • [1062940] Please increase bmo’s alias length to match bugzilla 5.0 (40 chars instead of 20)
  • [1082113] The ComponentWatching extension should create a default watch user with a new database installation
  • [1082106] $dbh->bz_add_columns creates a foreign key constraint causing failure in checksetup.pl when it tries to re-add it later
  • [1084052] Only show “Add bounty tracking attachment” links to people who actually might do that (not everyone in core-security)
  • [1075281] bugmail filtering using “field name contains” doesn’t work correctly with flags
  • [1088711] New bugzilla users are unable to use bug templates
  • [1076746] Mentor field is missing in the email when a bug gets created
  • [1087525] fix movecomponents.pl creating duplicate rows in flag*clusions

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Karl Dubost: List Of Google Web Compatibility Bugs On Firefox

Tue, 28/10/2014 - 06:17

As of today, there are 706 Firefox Mobile bugs and 205 Firefox Desktop bugs in the Mozilla Web Compatibility activity. These are OPEN bugs for many different companies (not only Google). To that we could add the 237 any-browser bugs already collected on Webcompat.com. Help is welcome.

Among these, we have a certain number of bugs related to Google Web properties.

Google and Mozilla

Let's make it very clear: we have an ongoing, open discussion channel with Google about these bugs. For their own sake and privacy I will not name the people at Google who help us deal with them, but they do the best they can to get us a resolution. That is not the case with every company. They know who they are, and I want to thank them for it.

That said, there are long-standing bugs where Firefox can't properly access the services developed by Google. The nature of these bugs is diverse. It can be wrong user agent sniffing, -webkit- CSS or JS, or a JS codepath using a very specific feature of Chrome. The most frustrating ones for the Web Compatibility people (and, in turn, for users) are the ones where you can see that there's a bit of CSS here and there breaking the user experience, but where in the end it seems it could work.

The issue is often with that "it seems it could work". At first sight we may have the impression that it will work, and then there is a hidden feature which has not been completely tested.

Firefox is also not the only browser with issues with Google services. Opera, even after the switch to Blink, still has issues with some Google services.

List Of Google Bugs For Firefox Browsers

Here is a non-exhaustive list of bugs which are still open on Bugzilla. If you find bugs which are no longer valid or already resolved, please let us know. We do our best to find out what is resolved, but we might sometimes miss some. The more you test, the more it helps all users. With detailed bug reports and analysis of the issues we have better and more useful communication with Google engineers for eventually fixing the bugs.

You may find similar bugs on webcompat.com. We haven't yet made the full transition.

You can help us.

Otsukare.


Allison Naaktgeboren: Applying Privacy Series: The 1st meeting

Tue, 28/10/2014 - 01:05

The 1st Meeting

Product Manager: People, this could be a game changer! Think of the paid content we could open up to non-English speakers in those markets. How fast can we get it into trunk?

Engineer: First we have to figure out what *it* is.

Product Manager: I want users to be able to click a button in the main dropdown menu and translate all their text.

Engineering Manager: Shouldn’t we verify with the user which language they want? Many in the EU speak multiple languages. Also do we want translation per page?

Product Manager:  Worry about translation per page later. Yeah, verify with user is fine as long as we only do it once.

Engineering Manager: It doesn’t quite work like that. If you want translation per page later, we’ll need to architect this so it can support that in the future.

Product Manager: …Fine

Engineer: What about pages that fail translation? What would we display in that case?

Product Manager: Throw an error bar at the top and show the original page. That’ll cover languages the service can’t handle too. Use the standard error template from UX.

Engineering Manager: What device actually does the translation? The phone?

Product Manager: No, make the server do it, bandwidth on phones is precious and spotty. When they start up the phone next, it should download the content already translated to our app.

Engineer: Ok, well if there’s a server involved, we need to talk to the Ops folks.

Engineering Manager: and the DBAs. We’ll also need to find who is the expert on user data handling. We could be handling a lot of that before this is out.

Project Manager: Next UI release is in 6 weeks. I’ll see about scheduling some time with Ops and the database team.

Product Manager: Can you guys pull it off?

Engineer: Depends on the server folks’ schedule.

Who brought up user data safety & privacy concerns in this conversation?

The Engineering Manager.


Robert O'Callahan: Are We Fast Yet? Yes We Are!

Mon, 27/10/2014 - 23:58

SpiderMonkey has passed V8 on Octane performance on arewefastyet, and is now leading V8 and JSC on Octane, SunSpider and Kraken.

Does this matter? Yes and no. On one hand, it's just a few JS benchmarks, real-world performance is much more complicated, and it's entirely possible that V8 (or even Chakra) could take the lead again in the future. On the other hand, beating your competitors on their own benchmarks is much more impressive than beating your competitors on benchmarks which you co-designed along with your engine to win on, which is the story behind most JS benchmarking to date.

This puts us in a position of strength, so we can say "these benchmarks are not very interesting; let's talk about other benchmarks (e.g. asm.js-related) and language features" without being accused of being sore losers.

Congratulations to the SpiderMonkey team; great job!


Gregory Szorc: Implications of Using Bugzilla for Firefox Patch Development

Mon, 27/10/2014 - 23:25

Mozilla is very close to rolling out a new code review tool based on Review Board. When I became involved in the project, I viewed it as an opportunity to start from a clean slate and design the ideal code development workflow for the average Firefox developer. When the design of the code review experience was discussed, I would push for decisions that were compatible with my utopian end state.

As part of formulating the ideal workflows and design of the new tool, I needed to investigate why we do things the way we do, whether they are optimal, and whether they are necessary. As part of that, I spent a lot of time thinking about Bugzilla's role in shaping the code that goes into Firefox. This post is a summary of my findings.

The primary goal of this post is to dissect the practices that Bugzilla influences and to prepare the reader for the potential to reassemble the pieces - to change the workflows - in the future, primarily around Mozilla's new code review tool. By showing that Bugzilla has influenced the popularization of what I consider non-optimal practices, it is my hope that readers start to question the existing processes and open up their mind to change.

Since the impetus for this post in the near deployment of Mozilla's new code review tool, many of my points will focus on code review.

Before I go into my findings, I'd like to explicitly state that while many of the things I'm about to say may come across as negativity towards Bugzilla, my intentions are not to put down Bugzilla or the people who maintain it. Yes, there are limitations in Bugzilla. But I don't think it is correct to point fingers and blame Bugzilla or its maintainers for these limitations. I think we got where we are following years of very gradual shifts. I don't think you can blame Bugzilla for the circumstances that led us here. Furthermore, Bugzilla maintainers are quick to admit the faults and limitations of Bugzilla. And, they are adamant about and instrumental in rolling out the new code review tool, which shifts code review out of Bugzilla. Again, my intent is not to put down Bugzilla. So please don't direct ire that way yourself.

So, let's drill down into some of the implications of using Bugzilla.

Difficult to Separate Contexts

The stream of changes on a bug in Bugzilla (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear, singular topic flow. However, real bug activity does not happen this way. All but the most trivial bugs usually involve multiple points of discussion. You typically have discussion about what the bug is. When a patch comes along, reviewer feedback comes in both high-level and low-level forms. Each item in each group is its own logical discussion thread. When patches land, you typically have points of discussion tracking the state of this patch. Has it been tested, does it need uplift, etc.

Bugzilla has things like keywords, flags, comment tags, and the whiteboard to enable some isolation of these various contexts. However, you still have a flat, linear list of plain text comments that contain the meat of the activity. It can be extremely difficult to follow these many interleaved logical threads.

In the context of code review, lumping all review comments into the same linear list adds overhead and undermines the process of landing the highest-quality patch possible.

Review feedback consists of both high-level and low-level comments. High-level would be things like architecture discussions. Low-level would be comments on the code itself. When these two classes of comments are lumped together in the same text field, I believe it is easy to lose track of the high-level comments and focus on the low-level. After all, you may have a short paragraph of high-level feedback right next to a mountain of low-level comments. Your eyes and brain tend to gravitate towards the larger set of more concrete low-level comments because you sub-consciously want to fix your problems, and that large mass of text represents more problems, easier problems to solve, than the shorter and often more abstract high-level summary. You want instant gratification and the pile of low-level comments is just too tempting to pass up. We have to train ourselves to temporarily ignore the low-level comments and focus on the high-level feedback. This is very difficult for some people. It is not an ideal disposition. Benjamin Smedberg's recent post on code review indirectly talks about some of this by describing his rational approach of tackling the high-level first.

As review iterations occur, the bug devolves into a mix of high-level and low-level comments. It thus becomes harder and harder to track the current high-level state of the feedback, as it must be picked out from the mountain of low-level comments. If you've ever inherited someone else's half-finished bug, you know what I'm talking about.

I believe that Bugzilla's threadless and contextless comment flow disposes us towards focusing on low-level details instead of the high-level. I believe that important high-level discussions aren't occurring at the rate they need and that technical debt increases as a result.

Difficulty Tracking Individual Items of Feedback

Code review feedback consists of multiple items of feedback. Each one is related to the review at hand. But oftentimes each item can be considered independent from others, relevant only to a single line or section of code. Style feedback is one such example.

I find it helps to model code review as a tree. You start with one thing you want to do. That's the root node. You split that thing into multiple commits. That's a new layer on your tree. Finally, each comment on those commits and the comments on those comments represent new layers to the tree. Code review thus consists of many related, but independent branches, all flowing back to the same central concept or goal. There is a one to many relationship at nearly every level of the tree.

Again, Bugzilla lumps all these individual items of feedback into a linear series of flat text blobs. When you are commenting on code, you do get some code context printed out. But everything is plain text.

The result of this is that tracking the progress on individual items of feedback - individual branches in our conceptual tree - is difficult. Code authors must pore through text comments and manually keep an inventory of their progress towards addressing the comments. Some people copy the review comment into another text box or text editor and delete items once they've fixed them locally! And, when it comes time to review the new patch version, reviewers must go through the same exercise in order to verify that all their original points of feedback have been adequately addressed! You've now redundantly duplicated the feedback tracking mechanism among at least two people. That's wasteful in and of itself.

Another consequence of this unstructured feedback tracking mechanism is that points of feedback tend to get lost. On complex reviews, you may be sorting through dozens of individual points of feedback. It is extremely easy to lose track of something. This could have disastrous consequences, such as the accidental creation of a 0day bug in Firefox. OK, that's a worst case scenario. But I know from experience that review comments can and do get lost. This results in new bugs being filed, author and reviewer double checking to see if other comments were not acted upon, and possibly severe bugs with user impacting behavior. In other words, this unstructured tracking of review feedback tends to lessen code quality and is thus a contributor to technical debt.

Fewer, Larger Patches

Bugzilla's user interface encourages the writing of fewer, larger patches. (The opposite would be many, smaller patches - sometimes referred to as micro commits.)

This result is achieved by a user interface that handles multiple patches so poorly that it effectively discourages that approach, driving people to create larger patches.

The stream of changes on a bug (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear flow. However, reviewing multiple patches doesn't work in a linear model. If you attach multiple patches to a bug, the review comments and their replies for all the patches will be interleaved in the same linear comment list. This flies in the face of the reality that each patch/review is logically its own thread that deserves to be followed on its own. The end result is that it is extremely difficult to track what's going on in each patch's review. Again, we have different contexts - different branches of a tree - all living in the same flat list.

Because conducting review on separate patches is so painful, people are effectively left with two choices: 1) write a single, monolithic patch 2) create a new bug. Both options suck.

Larger, monolithic patches are harder and slower to review. Larger patches require much more cognitive load to review, as the reviewer needs to capture the entire context in order to make a review determination. This takes more time. The increased surface area of the patch also increases the likelihood that the reviewer will find something wrong and will require a re-review. The added complexity of a larger patch also means the chances of a bug creeping in are higher, leading to more bugs being filed and more reviews later. The more review cycles the patch goes through, the greater the chances it will suffer from bit rot and will need updating before it lands, possibly incurring yet more rounds of review. And, since we measure progress in terms of code landing, the delay to get a large patch through many rounds of review makes us feel lethargic and demotivates us. Large patches have intrinsic properties that lead to compounding problems and increased development cost.

As bad as large patches are, they are roughly in the same badness range as the alternative: creating more bugs.

When you create a new bug to hold the context for the review of an individual commit, you are doing a lot of things, very few of them helpful. First, you must create a new bug. There's overhead to do that. You need to type in a summary, set up the bug dependencies, CC the proper people, update the commit message in your patch, upload your patch/attachment to the new bug, mark the attachment on the old bug obsolete, etc. This is arguably tolerable, especially with tools that can automate the steps (although I don't believe there is a single tool that does all of what I mentioned automatically). But the badness of multiple bugs doesn't stop there.

Creating multiple bugs fragments the knowledge and history of your change and diminishes the purpose of a bug. You got in the situation of creating multiple bugs because you were working on a single logical change. It just so happened that you needed/wanted multiple commits/patches/reviews to represent that singular change. That initial change was likely tracked by a single bug. And now, because of Bugzilla's poor user interface around multiple patch reviews, you find yourself creating yet another bug. Now you have two bug numbers - two identifiers that look identical, only varying by their numeric value - referring to the same logical thing. We started with a single bug number referring to your logical change and created what are effectively sub-issues, but allocated them in the same namespace as normal bugs. We've diminished the importance of the average bug. We've introduced confusion as to where one should go to learn about this single, logical change. Should I go to bug X or bug Y? Sure, you can likely go to one and ultimately find what you were looking for. But that takes more effort.

Creating separate bugs for separate reviews also makes refactoring harder. If you are going the micro commit route, chances are you do a lot of history rewriting. Commits are combined. Commits are split. Commits are reordered. And if those commits all map to individual bugs, you potentially find yourself in a huge mess. Combining commits might mean resolving bugs as duplicates of each other. Splitting commits means creating yet another bug. And let's not forget about managing bug dependencies. Do you set up your dependencies so you have a linear, waterfall dependency corresponding to commit order? That logically makes sense, but it is hard to keep in sync. Or do you just make all the review bugs depend on a single parent bug? If you do that, how do you communicate the order of the patches to the reviewer? Manually? That's yet more overhead. History rewriting - an operation that modern version control tools like Git and Mercurial have made lightweight, and that users love because it doesn't constrain them to pre-defined workflows - thus becomes much more costly. The cost may even be so high that some people forego rewriting completely, trading their effort for that of some poor reviewer who has to inherit a series of patches that isn't organized as logically as it could be. Like larger patches, this increases the cognitive load required to perform reviews and increases development costs.

As you can see, reviewing multiple, smaller patches with Bugzilla often leads to a horrible user experience. So, we find ourselves writing larger, monolithic patches and living with their numerous deficiencies. At least with monolithic patches we have a predictable outcome for how interaction with Bugzilla will play out!

I have little doubt that large patches (whose existence is influenced by the UI of Bugzilla) slow down the development velocity of Firefox.

Commit Message Formatting

The heavy involvement of Bugzilla in our code development lifecycle has influenced how we write commit messages. Let's start with the obvious example. Here is our standard commit message format for Firefox:

Bug 1234 - Fix some feature foo; r=gps

The bug is right there at the front of the commit message. That prominent placement is effectively saying the bug number is the most important detail about this commit - everything else is ancillary.

Now, I'm sure some of you are saying, but Greg, the short description of the change is obviously more important than the bug number. You are right. But we've allowed ourselves to make the bug and the content therein more important than the commit.

Supporting my theory is the commit message content following the first/summary line. That data is almost always - wait for it - nothing: we generally don't write commit messages that contain more than a single summary line. My repository forensics show that less than 20% of commit messages to Firefox in 2014 contain multiple lines (this excludes merge and backout commits). (We are doing better than in 2013 - the rate was less than 15% then.)

Our commit messages are basically saying, here's a highly-abbreviated summary of the change and a pointer (a bug number) to where you can find out more. And of course loading the bug typically reveals a mass of interleaved comments on various topics, hardly the high-level summary you were hoping was captured in the commit message.
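
For contrast, here is a sketch of the kind of multi-line commit message that captures its own context; the bug number, reviewer, and details are invented for illustration:

Bug 1234 - Cache feature foo's state to avoid jank; r=gps

Previously, feature foo recomputed its state on every call, which
caused noticeable pauses on large pages. We now cache the state and
invalidate it only when the underlying preference changes.

A global cache was considered but rejected because it would leak
state between tabs; the cache is therefore per-tab.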

Before I go on, in case you are on the fence as to the benefit of detailed commit messages, please read Phabricator's recommendations on revision control and writing reviewable code. I think both write-ups are terrific and are excellent templates that apply to nearly everyone, especially a project as large and complex as Firefox.

Anyway, there are many reasons why we don't capture a detailed, multi-line commit message. For starters, you aren't immediately rewarded for doing it: writing a good commit message doesn't really improve much in the short term (unless someone yells at you for not doing it). This is a generic problem applicable to all organizations and tools. This is a problem that culture must ultimately rectify. But our tools shouldn't reinforce the disposition towards laziness: they should reward best practices.

I don't think Bugzilla and our interactions with it do an adequate job of rewarding good commit message writing. Chances are your mechanism for posting reviews to Bugzilla or announcing the landing of a commit on Bugzilla (pasting the URL in the simple case) brings up a text box for you to type review notes, a patch description, or extra context for the landing. These should be going in the commit message, as they are the type of high-level context and summarizations of choices or actions that people crave when discerning the history of a repository. But because that text box is there, taunting you with its presence, we write content there instead of in the commit message. Even where tools like bzexport exist to upload patches to Bugzilla, potentially nipping this practice in the bud, they still engage in frustrating behavior like reposting the same long commit message on every patch upload, producing unwanted bug spam. Even a tool that is pretty sensibly designed has an implementation detail that undermines a good practice.

Machine Processing of Patches is Difficult

I have a challenge for you: identify all patches currently under consideration for incorporation in the Firefox source tree, run static analysis on them, and tell me if they meet our code style policies.

This should be a solved problem and a deployed system at Mozilla. It isn't. Part of the problem is that we're using Bugzilla for conducting review and doing patch management. That may sound counter-intuitive at first: Bugzilla is a centralized service - surely we can poll it to discover patches and then do stuff with those patches. We can. In theory. Things break down very quickly if you try this.

We are uploading patch files to Bugzilla. Patch files are representations of commits that live outside a repository. In order to get the full context - the result of the patch file - you need all the content leading up to that patch file - the repository data. When a naked patch file is uploaded to Bugzilla, you don't always have this context.

For starters, you don't know with certainty which repository the patch belongs to, because that isn't part of the standard patch format produced by Mercurial or Git. There are patches for various repositories floating around in Bugzilla. So now you need a way to identify which repository a patch belongs to. It is a solvable problem (aggregate data for all repositories and match patches based on file paths, referenced commits, etc.), albeit one Mozilla has not yet solved (but should).

Assuming you can identify the repository a patch belongs to, you need to know the parent commit so you can apply this patch. Some patches list their parent commits. Others do not. Even those that do may lie about it. Patches in MQ don't update their parent field when they are pushed, only after they are refreshed. You could be testing and uploading a patch with a different parent commit than what's listed in the patch file! Even if you do identify the parent commit, this commit could belong to another patch under consideration that's also on Bugzilla! So now you need to assemble a directed graph with all the patches known from Bugzilla applied. Hopefully they all fit in nicely.

Of course, some patches don't have any metadata at all: they are just naked diffs or are malformed commits produced by tools that e.g. attempt to convert Git commits to Mercurial commits (Git users: you should be using hg-git to produce proper Mercurial commits for Firefox patches).

Because Bugzilla is talking in terms of patch files, we often lose much of the context needed to build nice tools, preventing numerous potential workflow optimizations through automation. There are many things machines could be doing for us (such as looking for coding style violations). Instead, humans are doing this work and costing Mozilla a lot of time and lost developer productivity in the process. (A human costs ~$100/hr. A machine on EC2 is pennies per hour and should do the job with lower latency. In other words, you can operate over 300 machines 24 hours a day for what you pay an engineer to work an 8 hour shift.)

Conclusion

I have outlined a few of the side-effects of using Bugzilla as part of our day-to-day development, review, and landing of changes to Firefox.

There are several takeaways.

First, one cannot argue with the fact that Firefox development is bug(zilla) centric. Nearly every important milestone in the lifecycle of a patch involves Bugzilla in some way. This has its benefits and drawbacks. This article has identified many of the drawbacks. But before you start crying to expunge Bugzilla from the loop completely, consider the benefits, such as a place anyone can go to add metadata or comments on something. That's huge. There is a larger discussion to be had here. But I don't want to invite it quite yet.

A common thread between many of the points above is Bugzilla's unstructured and generic handling of code and metadata attached to it (patches, review comments, and landing information). Patches are attachments, which can be anything under the sun. Review comments are plain text comments with simple author, date, and tag metadata. Landings are also communicated by plain text review comments (at least initially - keywords and flags are used in some scenarios).

By being a generic tool, Bugzilla throws away a lot of the rich metadata that we produce. That data is still technically there in many scenarios. But it becomes extremely difficult if not practically impossible for both humans and machines to access efficiently. We lose important context and feedback by normalizing all this data to Bugzilla. This data loss creates overhead and technical debt. It slows Mozilla down.

Fortunately, the solutions to these problems and shortcomings are conceptually simple (and generally applicable): preserve rich context. In the context of patch distribution, push commits to a repository and tell someone to pull those commits. In the context of code review, create sub-reviews for different commits and allow tracking and easy-to-follow (likely threaded) discussions on found issues. Design workflow to be code first, not tool or bug first. Optimize workflows to minimize people time. Lean heavily on machines to do grunt work. Integrate issue tracking and code review, but not too tightly (loosely coupled, highly cohesive). Let different tools specialize in the handling of different forms of data: let code review handle code review. Let Bugzilla handle issue tracking. Let a landing tool handle tracking the state of landings. Use middleware to make them appear as one logical service if they aren't designed to be one from the start (such as is Mozilla's case with Bugzilla).

Another solution that's generally applicable is to refine and optimize the whole process to land a finished commit. Your product is based on software. So anything that adds overhead or loss of quality in the process of developing that software is fundamentally a product problem and should be treated as such. Any time and brain cycles lost to development friction or bugs that arise from things like inadequate code reviews tools degrade the quality of your product and take away from the user experience. This should be plain to see. Attaching a cost to this to convince the business-minded folks that it is worth addressing is a harder matter. I find management with empathy and shared understanding of what amazing tools can do helps a lot.

If I had to sum up the solution in one sentence, it would be: invest in tools and developer happiness.

I hope to soon publish a post on how Mozilla's new code review tool addresses many of the workflow deficiencies present today. Stay tuned.


Soledad Penades: MozFest 2014, day 2

Mon, 27/10/2014 - 23:05

As I was typing the final paragraphs of my previous post, hundreds of Flame devices were being handed to MozFest attendees that had got involved on sessions the day before.

When I arrived (late, because I felt like a lazy slug), there was a queue in the flashing station, which was, essentially, a table with a bunch of awesome Mozilla employees and volunteers from all over the world, working in shifts to make sure all those people with phones using Firefox OS 1.3 were upgraded to 2.1. I don’t have the exact numbers, but I believe the amount was close to 1000 phones. One thousand phones. BAM. Super amazing work, friends. **HATS OFF**

Flamemania at the MEGABOOTH #mozfest pic.twitter.com/Q6RRDXBIss

— solendid (@supersole) October 26, 2014

Potch was also improving the Flame starter guide. It had been renamed to Flame On, so go grab that one if you got a Flame and want to know what you can do now. If you want to contribute to the guide, here’s the code.

New Flame? Start here http://t.co/i5uA0lU0Er #mozfest pic.twitter.com/NJbyWeHqXb

— solendid (@supersole) October 26, 2014

I (figuratively) rolled up my sleeves (I was wearing a t-shirt), and joined Potch’s table in their effort to enable ADB+DevTools in the newly unboxed phones, so that then the flashing table could jump straight to that part of the process. Not everybody knew about that and they went directly to the other queue, so at some point Marcia went person by person and enabled ADB+DevTools in those phones. Which I found when I tried to help by making sure everybody had that done… and turns out that had already happened. Too late, Sole!

"FLASHING MEANS UPGRADING" #mozfest #flame pic.twitter.com/7PBQmWPhoV

— solendid (@supersole) October 26, 2014

They called us for "the most iconic clipart in Mozilla", i.e. the group photo. After we posed seriously ("interview picture"), smiling, and scaring, we went upstairs again to deal with the flow of new Flame owners.

I helped a bunch of people set up WebIDE and explained to them how it could be used to quickly get started developing apps, install the simulators, try their app in them and on the phones, etc. But (cue dramatic voice) I saw versions of Firefox I hadn't seen for years, had to reminisce about things I hadn't done in even longer (installing udev rules) and also did things that looked straight out of a nightmare (like installing an unsigned driver in Windows 8). Basically, getting this up and running is super easy on Mac, less so on Linux, and quite tedious on Windows. The good news is: we're working on making the connection part easier!

My favourite part of helping someone set this environment up was then asking, and learning, about how they planned to use it, and what tech is like in their countries. As I said, MozFest has people from all the places, and that gave me the chance to understand how they use our tools. For example, they might only have intermittent internet access which is also metered, BUT they have pretty decent local networks in schools or unis, so it's feasible to have just one person fetch the data (e.g. an updated Firefox) and then everyone else can go to the uni with their laptop to copy all that data. We also had a chance to discuss what sort of apps they are looking to build, and hopefully we will keep in touch so that I can help empower and teach them and then they can spread that knowledge to more people locally! Yay collaboration!

At some point I went out to get some food and get some quiet time. I was drinking water constantly so that was good for my throat but I was feeling little stings of pain occasionally.

On the way back, I grabbed some coffee nearby, and when I entered the college I stumbled upon Rosana, Krupa and Amy who were having some interesting discussions on the lobby. We left with a great life lesson from Amy: if someone is acting like a jerk, perhaps they have a terrible shitty job.

Upstairs to the 6th floor again, I stumble upon Bobby this time and we run a quick postmortem-so-far: mostly good experience, but I feel there’s too much noise for bigger groups, I get distracted and it’s terrible. I also should not let supertechnical people hijack conversations that scare less tech-savvy people away, and I should also explicitly ask each person for questions, not leave it up to them to ask (because they might be afraid of taking the initiative). I should know better, but I don’t facilitate sessions every day so I’m a bit out of my element. I’ll get better! I let Bobby eat his lunch (while standing), and go back to the MEGABOOTH. It’s still a hive of activity. I help where I can. WebIDE questions, Firefox OS questions, you name it.

I also had a chance to chat with Ioana, Flaki, Bebe and other members of the Romania community. We had interacted via Twitter before but never met! They’re supercool and I’m going to be visiting their country next month so we’re all super excited! Yay!

As the evening approaches the area starts to calm down. At some point we can see the flashing station volunteers again, once the queue is gone. They are still in one piece! I start to think they’re superhuman.

BIG SHOUT OUT to the flashing champions who've basically updated 1000 phones today – THANKS #mozfest pic.twitter.com/C7B45gyKZp

— solendid (@supersole) October 26, 2014

It’s starting to be demo-time again! I move downstairs to the 4th floor where people are installing screens and laptops for their demos, but before I know it someone comes nearby and entices me to go back to the Art Room where they are starting the party already. How can I say no?

I go there, they’ve turned off the lights for better atmosphere and so we can see the projected works in all their glory. It feels like being in the Tate Tanks-i.e. great and definitely atmospheric!

Ended up in the art room again… #mozfest pic.twitter.com/G12wuf2Hyd

— solendid (@supersole) October 26, 2014

Forrest from NoFlo is there, he’s used Mirobot, a Logo-Like robot kit, connected to NoFlo to program it (and I think it used the webcam as input too):

Mirobot + Noflo #mozfest pic.twitter.com/J9M7UbGCfM

— solendid (@supersole) October 26, 2014

When I come back to the 4th they’re having some announcements and wrap-up speeches, thanking everyone who’s put their efforts into making the festival a success. There’s a mention for pretty much everyone, so our hands hurt! Also, Dees revealed his true inner self:

.@cyberdees revealing his true inner self #mozfest pic.twitter.com/OJIyoNfqUX

— solendid (@supersole) October 26, 2014

I realised Bobby had changed to wear even more GOLD. Space wranglers could be identified because they were wearing a sort of golden pashmina, but Bobby took it further. Be afraid, mr. Tom “Gold pants” Dale!

MEGABOOTH space wrangler @secretrobotron with the best outfit ever #mozfest pic.twitter.com/O9dlXVYKdF

— solendid (@supersole) October 26, 2014

And then on to the demos, there was a table full of locks and I didn’t know what it was about:

No idea what this is but retro paper is always a #win #mozfest pic.twitter.com/RJGN458yn0

— solendid (@supersole) October 26, 2014

until someone explained to me that those locks came in various levels of difficulty and were there to learn how to pick locks! Which I started doing:

So I'm learning to pick locks o_O #mozfest pic.twitter.com/wC9CK2zDss

— solendid (@supersole) October 26, 2014

Now I cannot stop thinking about cylinders and feeling the mechanism each time I open doors… the harm has been done!

Chris Lord had told me they would be playing at MozFest but I thought I had missed them the night before. No! they played on Sunday! Everyone, please welcome The Vanguards:

Did I mention the grassroots band? /cc @cwiiis #mozfest pic.twitter.com/Db5lfyb8xM

— solendid (@supersole) October 26, 2014

And it wasn’t too long until the party would be over and was time to go home! Exhausted, but exhilarated!

I said goodbye to the Mozilla Reps gathering in front of the college, and thanks, and wished them a happy safe journey back home. Not sure why, since they were just going to have dinner, but I was loaded with good vibes and that felt like the right thing to do.

And so that was MozFest 2014 for me. A chaos like usual and I hardly had time to see anything outside from the MEGABOOTH. I’m so sorry I missed so many interesting sessions, but I’m glad I helped so many people too, so there’s that!



Kim Moir: Mozilla pushes - September 2014

Mon, 27/10/2014 - 22:11
Here's September 2014's monthly analysis of the pushes to our Mozilla development trees.
You can load the data as an HTML page or as a json file.


Trends
Surprise! No records were broken this month.

Highlights
12267 pushes
409 pushes/day (average)
Highest number of pushes/day: 646 pushes on September 10, 2014
22.6 pushes/hour (average)

General Remarks
Try accounts for around 36% of pushes and Gaia-Try for about 32%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all pushes.

Records
August 2014 was the month with the most pushes (13,090 pushes)
August 2014 has the highest pushes/day average with 620 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes

Yunier José Sosa Vázquez: Firefox and Thunderbird channels updated

Mon, 27/10/2014 - 21:17

Updates for Firefox and Thunderbird are available. This includes version 15 of the Adobe Flash Player plugin and the Android versions of Firefox.

Release: Firefox 33.0.1, Thunderbird 31.2.0, Firefox Mobile 33.0

Beta: Firefox 34

Aurora: Firefox 35

Nightly: Firefox 36 (with separate processes thanks to Electrolysis) and Thunderbird 36

Go to Downloads


Tim Taubert: Deploying TLS the hard way

Mon, 27/10/2014 - 19:00
  1. How does TLS work?
  2. The certificate
  3. (Perfect) Forward Secrecy
  4. Choosing the right cipher suites
  5. HTTP Strict Transport Security
  6. HSTS Preload List
  7. OCSP Stapling
  8. HTTP Public Key Pinning
  9. Known attacks

Last weekend I finally deployed TLS for timtaubert.de and decided to write up what I learned on the way hoping that it would be useful for anyone doing the same. Instead of only giving you a few buzz words I want to provide background information on how TLS and certain HTTP extensions work and why you should use them or configure TLS in a certain way.

One thing that bugged me was that most posts only describe what to do but not necessarily why to do it. I hope you appreciate me going into a little more detail to end up with the bigger picture of what TLS currently is, so that you will be able to make informed decisions when deploying yourselves.

To follow this post you will need some basic cryptography knowledge. Whenever you do not know or understand a concept you should probably just head over to Wikipedia and take a few minutes or just do it later and maybe re-read the whole thing.

Disclaimer: I am not a security expert or cryptographer but did my best to research this post thoroughly. Please let me know of any mistakes I might have made and I will correct them as soon as possible.

But didn’t Andy say this is all shit?

I read Andy Wingo’s blog post too and I really liked it. Everything he says in there is true. But what is also true is that TLS with the few add-ons is all we have nowadays and we better make the folks working for the NSA earn their money instead of not trying to encrypt traffic at all.

After you finished reading this page, maybe go back to Andy’s post and read it again. You might have a better understanding of what he is ranting about than you had before if the details of TLS are still dark matter to you.

So how does TLS work?

Every TLS connection starts with both parties sharing their supported TLS versions and cipher suites. As the next step the server sends its X.509 certificate to the browser.

Checking the server’s certificate

The following certificate checks need to be performed:

  • Does the certificate contain the server’s hostname?
  • Was the certificate issued by a CA that is in my list of trusted CAs?
  • Does the certificate’s signature verify using the CA’s public key?
  • Has the certificate expired already?
  • Was the certificate revoked?

All of these are obvious, crucial checks. To query a certificate's revocation status the browser will use the Online Certificate Status Protocol (OCSP), which I will describe in more detail in a later section.
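
If you want to poke at a server's certificate yourself, OpenSSL's built-in client will print it along with the verification result; this uses only standard openssl tooling, with example.com as a placeholder:

openssl s_client -connect example.com:443 -servername example.com < /dev/null \
  | openssl x509 -noout -subject -issuer -dates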

After the certificate checks are done and the browser ensured it is talking to the right host both sides need to agree on secret keys they will use to communicate with each other.

Key Exchange using RSA

A simple key exchange would be to let the client generate a master secret and encrypt that with the server’s public RSA key given by the certificate. Both client and server would then use that master secret to derive symmetric encryption keys that will be used throughout this TLS session. An attacker could however simply record the handshake and session for later, when breaking the key has become feasible or the machine is susceptible to a vulnerability. They may then use the server’s private key to recover the whole conversation.

Key Exchange using (EC)DHE

When using (Elliptic Curve) Diffie-Hellman as the key exchange mechanism both sides have to collaborate to generate a master secret. They generate DH key pairs (which is a lot cheaper than generating RSA keys) and send their public key to the other party. With the private key and the other party’s public key the shared master secret can be calculated and then again be used to derive session keys. We can provide Forward Secrecy when using ephemeral DH key pairs. See the section below on how to enable it.

We could in theory also provide forward secrecy with an RSA key exchange if the server would generate an ephemeral RSA key pair, share its public key and would then wait for the master secret to be sent by the client. As hinted above RSA key generation is very expensive and does not scale in practice. That is why RSA key exchanges are not a practical option for providing forward secrecy.

After both sides have agreed on session keys the TLS handshake is done and they can finally start to communicate using symmetric encryption algorithms like AES that are much faster than asymmetric algorithms.

The certificate

Now that we understand authenticity is an integral part of TLS we know that in order to serve a site via TLS we first need a certificate. The TLS protocol can encrypt traffic between two parties just fine but the certificate provides the necessary authentication towards visitors.

Without a certificate a visitor could securely talk to either us, the NSA, or a different attacker but they probably want to talk to us. The certificate ensures by cryptographic means that they established a connection to our server.

Selecting a Certificate Authority (CA)

If you want a cheap certificate, have no specific needs, and only a single subdomain (e.g. www) then StartSSL is an easy option. Do of course feel free to take a look at different authorities - their services and prices will vary heavily.

In the chain of trust the CA plays an important role: by verifying that you are the rightful owner of your domain and signing your certificate it will let browsers trust your certificate. The browsers do not want to do all this verification themselves so they defer it to the CAs.

For your certificate you will need an RSA key pair, a public and private key. The public key will be included in your certificate and thus also signed by the CA.

Generating an RSA key and a certificate signing request

The example below shows how you can use OpenSSL on the command line to generate a key for your domain. Simply replace example.com with the domain of your website. example.com.key will be your new RSA key and example.com.csr will be the Certificate Signing Request that your CA needs to generate your certificate.

openssl req -new -newkey rsa:4096 -nodes -sha256 \
    -keyout example.com.key -out example.com.csr

We will use a SHA-256 based signature for integrity as Firefox and Chrome will phase out support for SHA-1 based certificates soon. The RSA keys used to authenticate your website will use a 4096 bit modulus. If you need to handle a lot of traffic or your server has a weak CPU you might want to use 2048 bit. Never go below that as keys smaller than 2048 bit are considered insecure.
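
To double-check that the request really carries a 4096 bit modulus and a SHA-256 signature before you submit it, you can inspect the CSR generated above (the grep pattern is just a convenience and may need adjusting to your OpenSSL version's output):

openssl req -noout -text -in example.com.csr | grep -E "Public-Key|Signature Algorithm"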

Get a signed certificate

Sign up with the CA you chose and depending on how they handle this process you probably will have to first verify that you are the rightful owner of the domain that you claim to possess. StartSSL will do that by sending a token to postmaster@example.com (or similar) and then ask you to confirm the receipt of that token.

Now that you signed up and are the verified owner of example.com you simply submit the example.com.csr file to request the generation of a certificate for your domain. The CA will sign your public key and the other information contained in the CSR with their private key and you can finally download the certificate to example.com.crt.

Upload the .crt and .key files to your web server. Be aware that any intermediate certificate in the CA’s chain must be included in the .crt file as well - you can just cat them together. StartSSL’s free tier has an intermediate Class 1 certificate - make sure to use the SHA-256 version of it. All files should be owned by root and must not be readable by anyone else. Configure your web server to use those and you should have TLS running out-of-the-box.
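
A minimal sketch of that chaining step, assuming the intermediate was saved as intermediate.pem (the file name is an assumption; StartSSL distributes its Class 1 intermediate under a different name, so use whatever you downloaded):

cat example.com.crt intermediate.pem > example.com.chained.crt
chown root:root example.com.chained.crt example.com.key
chmod 600 example.com.chained.crt example.com.key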

(Perfect) Forward Secrecy

To properly deploy TLS you will want to provide (Perfect) Forward Secrecy. Without forward secrecy, TLS seems to secure your communication today; it might however not do so if your private key is compromised in the future.

If a powerful adversary (think NSA) records all communication between a visitor and your server, they can decrypt all this traffic years later by stealing your private key or going the “legal” way to obtain it. This can be prevented by using short-lived (ephemeral) keys for key exchanges that the server will throw away after a short period.

Diffie-Hellman key exchanges

Using RSA with your certificate’s private and public keys for key exchanges is off the table as generating a 2048+ bit prime is very expensive. We thus need to switch to ephemeral (Elliptic Curve) Diffie-Hellman cipher suites. For DH you can generate a 2048 bit parameter once, choosing a private key afterwards is cheap.

openssl dhparam -out dhparam.pem 2048

Simply upload dhparam.pem to your server and instruct the web server to use it for Diffie-Hellman key exchanges. When using ECDH the predefined elliptic curve represents this parameter and no further action is needed.

(Nginx) ssl_dhparam /path/to/ssl/dhparam.pem;

Apache does unfortunately not support custom DH parameters, it is always set to 1024 bit and is not user configurable. This might hopefully be fixed in future versions.

Session IDs

One of the most important mechanisms to improve TLS performance is Session Resumption. In a full handshake the server sends a Session ID as part of the “hello” message. On a subsequent connection the client can use this session ID and pass it to the server when connecting. Because both the server and the client have saved the last session’s “secret state” under the session ID they can simply resume the TLS session where they left off.

Now you might notice that this could violate forward secrecy as a compromised server might reveal the secret state for all session IDs if the cache is just large enough. The forward secrecy of a connection is thus bounded by how long the session information is retained on the server. Ideally, your server would use a medium-sized in-memory cache that is purged daily.

Apache lets you configure that using the SSLSessionCache directive, and you should use the high-performance cyclic buffer shmcb. Nginx has the ssl_session_cache directive, and you should use a cache that is shared between workers. The right size of those caches depends on the amount of traffic your server handles. You want browsers to resume TLS sessions, but also to get rid of old ones about daily.
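
As a rough example, a shared 10 MB in-memory cache whose entries expire after a day could look like this in Nginx; the size and timeout are assumptions you should tune to your traffic:

ssl_session_cache   shared:SSL:10m;
ssl_session_timeout 1d;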

Session Tickets

The second mechanism to resume a TLS session are Session Tickets. This extension transmits the server’s secret state to the client, encrypted with a key only known to the server. That ticket key is protecting the TLS connection now and in the future.

This can likewise violate forward secrecy if the key used to encrypt session tickets is compromised. The ticket (just like the session cache) contains all of the server’s secret state and would allow an attacker to reveal the whole conversation.

Nginx and Apache by default generate a session ticket key at startup and unfortunately provide no way to rotate it. If your server runs for months without a restart, you will use the same session ticket key for months, and breaking into your server could reveal every recorded TLS conversation since the web server was started.

Neither Nginx nor Apache has a sane way to work around this. Nginx might be able to rotate the key by reloading the server config, which is rather easy to implement with a cron job. Make sure to test that this actually works before relying on it, though.

Thus, if you really want to provide forward secrecy, you should disable session tickets using ssl_session_tickets off for Nginx and SSLOpenSSLConfCmd Options -SessionTicket for Apache.

Choosing the right cipher suites

Mozilla’s guide on server side TLS provides a great list of modern cipher suites that needs to go into your web server’s configuration. The combinations below are unfortunately supported only by modern browsers; for broader client support you might want to consider using the “intermediate” list.

ECDHE-RSA-AES128-GCM-SHA256: \
ECDHE-ECDSA-AES128-GCM-SHA256: \
ECDHE-RSA-AES256-GCM-SHA384: \
ECDHE-ECDSA-AES256-GCM-SHA384: \
DHE-RSA-AES128-GCM-SHA256: \
DHE-DSS-AES128-GCM-SHA256: \
[...]
!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK

All these cipher suites start with (EC)DHE which means they only support ephemeral Diffie-Hellman key exchanges for forward secrecy. The last line discards non-authenticated key exchanges, null-encryption (cleartext), legacy weak ciphers marked exportable by US law, weak ciphers (3)DES and RC4, weak MD5 signatures, and pre-shared keys.

Note: To ensure that the order of cipher suites is respected you need to set ssl_prefer_server_ciphers on for Nginx or SSLHonorCipherOrder on for Apache.
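Putting both together, the configuration might look like this sketch (the cipher string is abbreviated here, just like in the list above):

(Nginx) ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:[...]:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
(Nginx) ssl_prefer_server_ciphers on;
(Apache) SSLCipherSuite ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:[...]:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK
(Apache) SSLHonorCipherOrder on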

HTTP Strict Transport Security (HSTS)

Now that your server is configured to accept TLS connections you still want to support HTTP connections on port 80 to redirect old links and folks typing example.com in the URL bar to your shiny new HTTPS site.
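For Nginx, a simple catch-all redirect like the following sketch does the job (Apache users would reach for the Redirect directive or mod_rewrite):

server {
    listen 80;
    server_name example.com;
    return 301 https://example.com$request_uri;
}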

At this point however a Man-In-The-Middle (or Woman-In-The-Middle) attack can easily intercept and modify traffic to deliver a forged HTTP version of your site to a visitor. The poor visitor might never know because they did not realize you offer TLS connections now.

To ensure your users are secure the next time they visit your site, you want to send an HSTS header to enforce strict transport security. Once a browser has received this header, it will not try to establish an HTTP connection next time but will connect directly to your website via TLS.

Strict-Transport-Security: max-age=15768000; includeSubDomains; preload

Sending this header over an HTTPS connection (it will be ignored via HTTP) lets the browser remember that this domain wants strict transport security for the next six months (~15768000 seconds). The includeSubDomains token enforces TLS connections for every subdomain of your domain, and the non-standard preload token will be required for the next section.
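Setting the header is a one-liner on both servers; the Apache variant assumes mod_headers is enabled:

(Nginx) add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload";
(Apache) Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"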

HSTS Preload List

If, after deploying TLS, the very first connection of a visitor is genuine, we are fine. Your server will send the HSTS header over TLS and the visitor’s browser remembers to use TLS in the future. The very first connection, and every connection after the HSTS header expires, is however still vulnerable to a {M,W}ITM attack.

To prevent this, Firefox and Chrome share an HSTS Preload List that basically includes HSTS headers for all sites that would send that header when visited anyway. So before connecting to a host, Firefox and Chrome check whether that domain is in the list and, if so, will not even try using an insecure HTTP connection.

Including your page in that list is easy: just submit your domain using the HSTS Preload List submission form. Your HSTS header must be set up correctly and contain the includeSubDomains and preload tokens to be accepted.

OCSP Stapling

OCSP - using an external server provided by the CA to check whether the certificate given by the server was revoked - might sound like a great idea at first. On second thought, it actually sounds rather terrible. First, the CA providing the OCSP server suddenly has to be able to handle a lot of requests: every client opening a connection to your server will want to know whether your certificate was revoked before talking to you.

Second, the browser contacting a CA and passing the certificate is an easy way to monitor a user’s browsing behavior. If all CAs worked together they probably could come up with a nice data set of TLS sites that people visit, when and in what order (not that I know of any plans they actually wanted to do that).

Let the server do the work for your visitors

OCSP Stapling is a TLS extension that enables the server to query its certificate’s revocation status at regular intervals in the background and send an OCSP response with the TLS handshake. The stapled response itself cannot be faked as it needs to be signed with the CA’s private key. Enabling OCSP stapling thus improves performance and privacy for your visitors immediately.

You need to create a certificate file that contains your CA’s root certificate, preceded by any intermediate certificates that might be in your CA’s chain. StartSSL has an intermediate certificate for Class 1 (the free tier) - make sure to use the one with the SHA-256 signature. Pass the file to Nginx using the ssl_trusted_certificate directive and to Apache using the SSLCACertificateFile directive.
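A sketch of what enabling stapling could look like - the resolver line is needed so Nginx can reach the OCSP server, and the file paths and cache size are assumptions:

(Nginx) ssl_stapling on;
(Nginx) ssl_stapling_verify on;
(Nginx) ssl_trusted_certificate /path/to/ssl/ca-chain.crt;
(Nginx) resolver 8.8.8.8;
(Apache) SSLUseStapling on
(Apache) SSLStaplingCache shmcb:/var/run/ocsp(128000)
(Apache) SSLCACertificateFile /path/to/ssl/ca-chain.crt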

OCSP Must Staple

OCSP stapling, however, is unfortunately not a silver bullet. If a browser does not know in advance that it will receive a stapled response, an attacker might simply redirect HTTPS traffic to their own server and block any traffic to the OCSP server (in which case browsers soft-fail). Adam Langley explains all possible attack vectors in great detail.

One solution might be the proposed OCSP Must Staple extension. This would add another field to the certificate issued by the CA saying that a server must provide a stapled OCSP response. The problem here is that the proposal has expired, and in practice it would take years for CAs to support it.

Another solution would be to implement a header similar to HSTS that lets the browser remember to require a stapled OCSP response when connecting next time. This however has the same first-connection problem as HSTS, and we might have to maintain an “OCSP-Must-Staple Preload List”. As of today there is unfortunately no immediate solution in sight.

HTTP Public Key Pinning (HPKP)

Even with all those security checks when receiving the server’s certificate we would still be completely out of luck in case your CA’s private key is compromised or your CA simply fucks up. We can prevent these kinds of attacks with an HTTP extension called Public Key Pinning.

Key pinning is a trust-on-first-use (TOFU) mechanism. The first time a browser connects to a host it lacks the information necessary to perform “pin validation”, so it will not be able to detect and thwart a {M,W}ITM attack. This feature only allows detection of these kinds of attacks after the first connection.

Generating a HPKP header

Creating an HPKP header is easy; all you need to do is compute the base64-encoded “SPKI fingerprint” of your server’s certificate. An SPKI fingerprint is the output of applying SHA-256 to the public key information contained in your certificate.

openssl req -inform pem -pubkey -noout < example.com.csr \
  | openssl pkey -pubin -outform der \
  | openssl dgst -sha256 -binary \
  | base64

The result of running the above command can be directly used as the pin-sha256 values for the Public-Key-Pins header as shown below:

Public-Key-Pins: pin-sha256="GRAH5Ex+kB4cCQi5gMU82urf+6kEgbVtzfCSkw55AGk="; pin-sha256="lERGk61FITjzyKHcJ89xpc6aDwtRkOPAU0jdnUqzW2s="; max-age=15768000; includeSubDomains

Upon receiving this header the browser knows that it has to store the pins given by the header and discard any certificates whose SPKI fingerprints do not match for the next six months (max-age=15768000). We specified the includeSubDomains token so the browser will verify pins when connecting to any subdomain.

Include the pin of a backup key

It is considered good practice to include at least a second pin: the SPKI fingerprint of a backup RSA key that you can generate exactly like the original one:

openssl req -new -newkey rsa:4096 -nodes -sha256 \
  -keyout example.com.backup.key -out example.com.backup.csr

In case your private key is compromised, you might need to revoke your current certificate and request that the CA issue a new one. The old pin however would still be stored in browsers for six months, which means they would not be able to connect to your site. If you send two pin-sha256 values, the browser will later accept a TLS connection when any of the stored fingerprints matches the presented certificate.

Known attacks

In the past years (and especially the last year) a few attacks on SSL/TLS were published. Some of those attacks can be worked around at the protocol or crypto library level, so that you basically do not have to worry as long as your web server is up to date and the visitor is using a modern browser. A few attacks however need to be thwarted by configuring your server properly.

BEAST (Browser Exploit Against SSL/TLS)

BEAST is an attack that only affects TLSv1.0. Exploiting this vulnerability is possible but rather difficult. You can either disable TLSv1.0 completely - which is certainly the preferred solution although you might neglect folks with old browsers on old operating systems - or you can just not worry. All major browsers have implemented workarounds so that it should not be an issue anymore in practice.

BREACH (Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext)

BREACH is a security exploit against HTTPS when using HTTP compression. BREACH is based on CRIME, but unlike CRIME - which can be successfully defended against by turning off TLS compression (the default for Nginx and Apache nowadays) - BREACH can only be prevented by turning off HTTP compression. Other mitigations would be to use cross-site request forgery (CSRF) protection or to disable HTTP compression selectively based on headers sent by the application.

POODLE (Padding Oracle On Downgraded Legacy Encryption)

POODLE is yet another padding oracle attack on TLS. Luckily it only affects SSLv3, the predecessor of TLS. The only solution when deploying a new server is to disable SSLv3 completely. Fortunately, we already excluded SSLv3 in our list of preferred ciphers above. Firefox 34 will ship with SSLv3 disabled by default; Chrome and others will hopefully follow soon.
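If you would rather be explicit about it, disabling SSLv3 at the protocol level is a one-liner on both servers:

(Nginx) ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
(Apache) SSLProtocol all -SSLv2 -SSLv3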

Further reading

Thanks for reading, and I am really glad you made it this far! I hope this post did not discourage you from deploying TLS - after all, getting your setup right is the most important thing. And it certainly is better to know what you are getting yourselves into than to leave your visitors unprotected.

If you want to read even more about setting up TLS, the Mozilla Wiki page on Server-Side TLS has more information and proposed web server configurations.

Thanks a lot to Frederik Braun for taking the time to proof-read this post and helping to clarify a few things!

Categorieën: Mozilla-nl planet

Daniel Stenberg: daniel.haxx.se episode 8

ma, 27/10/2014 - 17:17

Today I hesitated to make my new weekly video episode. I looked at the viewer numbers and how they have basically dwindled over the last few weeks. I’m not making this video series interesting enough for a very large crowd of people. I’m re-evaluating whether I should do them at all, or if I can do something to spice them up…

… or perhaps just not look at the viewer numbers at all and just do what I think is fun?

I decided I’ll go with the latter for now. After all, I enjoy making these and they usually give me some interesting feedback and discussions even if the numbers are really low. What good is a number anyway?

This week’s episode:

Personal

Firefox

Fun

HTTP/2

TALKS

  • I’m offering two talks for FOSDEM

curl

  • release next Wednesday
  • bug fixing period
  • security advisory is pending

wget

Categorieën: Mozilla-nl planet

Gregory Szorc: Implications of Using Bugzilla for Firefox Patch Development

ma, 27/10/2014 - 16:27

Mozilla is very close to rolling out a new code review tool based on Review Board. When I became involved in the project, I viewed it as an opportunity to start from a clean slate and design the ideal code development workflow for the average Firefox developer. When the design of the code review experience was discussed, I would push for decisions that were compatible with my utopian end state.

As part of formulating the ideal workflows and design of the new tool, I needed to investigate why we do things the way we do, whether they are optimal, and whether they are necessary. As part of that, I spent a lot of time thinking about Bugzilla's role in shaping the code that goes into Firefox. This post is a summary of my findings.

The primary goal of this post is to dissect the practices that Bugzilla influences and to prepare the reader for the potential to reassemble the pieces - to change the workflows - in the future, primarily around Mozilla's new code review tool. By showing that Bugzilla has influenced the popularization of what I consider non-optimal practices, it is my hope that readers start to question the existing processes and open up their mind to change.

Since the impetus for this post is the imminent deployment of Mozilla's new code review tool, many of my points will focus on code review.

Before I go into my findings, I'd like to explicitly state that while many of the things I'm about to say may come across as negativity towards Bugzilla, my intentions are not to put down Bugzilla or the people who maintain it. Yes, there are limitations in Bugzilla. But I don't think it is correct to point fingers and blame Bugzilla or its maintainers for these limitations. I think we got where we are following years of very gradual shifts. I don't think you can blame Bugzilla for the circumstances that led us here. Furthermore, Bugzilla maintainers are quick to admit the faults and limitations of Bugzilla. And, they are adamant about and instrumental in rolling out the new code review tool, which shifts code review out of Bugzilla. Again, my intent is not to put down Bugzilla. So please don't direct ire that way yourself.

So, let's drill down into some of the implications of using Bugzilla.

Difficult to Separate Contexts

The stream of changes on a bug in Bugzilla (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear, singular topic flow. However, real bug activity does not happen this way. All but the most trivial bugs usually involve multiple points of discussion. You typically have discussion about what the bug is. When a patch comes along, reviewer feedback comes in both high-level and low-level forms. Each item in each group is its own logical discussion thread. When patches land, you typically have points of discussion tracking the state of this patch. Has it been tested, does it need uplift, etc.

Bugzilla has things like keywords, flags, comment tags, and the whiteboard to enable some isolation of these various contexts. However, you still have a flat, linear list of plain text comments that contain the meat of the activity. It can be extremely difficult to follow these many interleaved logical threads.

In the context of code review, lumping all review comments into the same linear list adds overhead and undermines the process of landing the highest-quality patch possible.

Review feedback consists of both high-level and low-level comments. High-level would be things like architecture discussions. Low-level would be comments on the code itself. When these two classes of comments are lumped together in the same text field, I believe it is easy to lose track of the high-level comments and focus on the low-level. After all, you may have a short paragraph of high-level feedback right next to a mountain of low-level comments. Your eyes and brain tend to gravitate towards the larger set of more concrete low-level comments because you sub-consciously want to fix your problems and that large mass of text represents more problems, easier problems to solve than the shorter and often more abstract high-level summary. You want instant gratification and the pile of low-level comments is just too tempting to pass up. We have to train ourselves to temporarily ignore the low-level comments and focus on the high-level feedback. This is very difficult for some people. It is not an ideal disposition. Benjamin Smedberg's recent post on code review indirectly talks about some of this by describing his rational approach of tackling high-level first.

As review iterations occur, the bug devolves into a mix of comments related to high and low-level comments. It thus becomes harder and harder to track the current high-level state of the feedback, as they must be picked out from the mountain of low-level comments. If you've ever inherited someone else's half-finished bug, you know what I'm talking about.

I believe that Bugzilla's threadless and contextless comment flow disposes us towards focusing on low-level details instead of the high-level. I believe that important high-level discussions aren't occurring at the rate they need and that technical debt increases as a result.

Difficulty Tracking Individual Items of Feedback

Code review feedback consists of multiple items of feedback. Each one is related to the review at hand. But oftentimes each item can be considered independent from others, relevant only to a single line or section of code. Style feedback is one such example.

I find it helps to model code review as a tree. You start with one thing you want to do. That's the root node. You split that thing into multiple commits. That's a new layer on your tree. Finally, each comment on those commits and the comments on those comments represent new layers to the tree. Code review thus consists of many related, but independent branches, all flowing back to the same central concept or goal. There is a one to many relationship at nearly every level of the tree.

Again, Bugzilla lumps all these individual items of feedback into a linear series of flat text blobs. When you are commenting on code, you do get some code context printed out. But everything is plain text.

The result of this is that tracking the progress on individual items of feedback - individual branches in our conceptual tree - is difficult. Code authors must pore through text comments and manually keep an inventory of their progress towards addressing the comments. Some people copy the review comment into another text box or text editor and delete items once they've fixed them locally! And, when it comes time to review the new patch version, reviewers must go through the same exercise in order to verify that all their original points of feedback have been adequately addressed! You've now redundantly duplicated the feedback tracking mechanism among at least two people. That's wasteful in and of itself.

Another consequence of this unstructured feedback tracking mechanism is that points of feedback tend to get lost. On complex reviews, you may be sorting through dozens of individual points of feedback. It is extremely easy to lose track of something. This could have disastrous consequences, such as the accidental creation of a 0day bug in Firefox. OK, that's a worst case scenario. But I know from experience that review comments can and do get lost. This results in new bugs being filed, author and reviewer double checking to see if other comments were not acted upon, and possibly severe bugs with user impacting behavior. In other words, this unstructured tracking of review feedback tends to lessen code quality and is thus a contributor to technical debt.

Fewer, Larger Patches

Bugzilla's user interface encourages the writing of fewer, larger patches. (The opposite would be many, smaller patches - sometimes referred to as micro commits.)

This result is achieved by a user interface that handles multiple patches so poorly that it effectively discourages that approach, driving people to create larger patches.

The stream of changes on a bug (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear flow. However, reviewing multiple patches doesn't work in a linear model. If you attach multiple patches to a bug, the review comments and their replies for all the patches will be interleaved in the same linear comment list. This flies in the face of the reality that each patch/review is logically its own thread that deserves to be followed on its own. The end result is that it is extremely difficult to track what's going on in each patch's review. Again, we have different contexts - different branches of a tree - all living in the same flat list.

Because conducting review on separate patches is so painful, people are effectively left with two choices: 1) write a single, monolithic patch 2) create a new bug. Both options suck.

Larger, monolithic patches are harder and slower to review. Larger patches require much more cognitive load to review, as the reviewer needs to capture the entire context in order to make a review determination. This takes more time. The increased surface area of the patch also increases the likelihood that the reviewer will find something wrong and will require a re-review. The added complexity of a larger patch also means the chances of a bug creeping in are higher, leading to more bugs being filed and more reviews later. The more review cycles the patch goes through, the greater the chances it will suffer from bit rot and will need updating before it lands, possibly incurring yet more rounds of review. And, since we measure progress in terms of code landing, the delay to get a large patch through many rounds of review makes us feel lethargic and demotivates us. Large patches have intrinsic properties that lead to compounding problems and increased development cost.

As bad as large patches are, they are roughly in the same badness range as the alternative: creating more bugs.

When you create a new bug to hold the context for the review of an individual commit, you are doing a lot of things, very few of them helpful. First, you must create a new bug. There's overhead to do that. You need to type in a summary, set up the bug dependencies, CC the proper people, update the commit message in your patch, upload your patch/attachment to the new bug, mark the attachment on the old bug obsolete, etc. This is arguably tolerable, especially with tools that can automate the steps (although I don't believe there is a single tool that does all of what I mentioned automatically). But the badness of multiple bugs doesn't stop there.

Creating multiple bugs fragments the knowledge and history of your change and diminishes the purpose of a bug. You got in the situation of creating multiple bugs because you were working on a single logical change. It just so happened that you needed/wanted multiple commits/patches/reviews to represent that singular change. That initial change was likely tracked by a single bug. And now, because of Bugzilla's poor user interface around multiple patch reviews, you find yourself creating yet another bug. Now you have two bug numbers - two identifiers that look identical, only varying by their numeric value - referring to the same logical thing. We've started with a single bug number referring to your logical change and created what are effectively sub-issues, but allocated them in the same namespace as normal bugs. We've diminished the importance of the average bug. We've introduced confusion as to where one should go to learn about this single, logical change. Should I go to bug X or bug Y? Sure, you can likely go to one and ultimately find what you were looking for. But that takes more effort.

Creating separate bugs for separate reviews also makes refactoring harder. If you are going the micro commit route, chances are you do a lot of history rewriting. Commits are combined. Commits are split. Commits are reordered. And if those commits are all mapping to individual bugs, you potentially find yourself in a huge mess. Combining commits might mean resolving bugs as duplicates of each other. Splitting commits means creating yet another bug. And let's not forget about managing bug dependencies. Do you set up your dependencies so you have a linear, waterfall dependency corresponding to commit order? That logically makes sense, but it is hard to keep in sync. Or, do you just make all the review bugs depend on a single parent bug? If you do that, how do you communicate the order of the patches to the reviewer? Manually? That's yet more overhead. History rewriting - an operation that modern version control tools like Git and Mercurial have enabled to be a lightweight operation and users love because it doesn't constrain them to pre-defined workflows - thus becomes much more costly. The cost may even be so high that some people forego rewriting completely, trading their effort for some poor reviewer who has to inherit a series of patches that isn't organized as logically as it could be. Like larger patches, this increases cognitive load required to perform reviews and increases development costs.

As you can see, reviewing multiple, smaller patches with Bugzilla often leads to a horrible user experience. So, we find ourselves writing larger, monolithic patches and living with their numerous deficiencies. At least with monolithic patches we have a predictable outcome for how interaction with Bugzilla will play out!

I have little doubt that large patches (whose existence is influenced by the UI of Bugzilla) slows down the development velocity of Firefox.

Commit Message Formatting

The heavy involvement of Bugzilla in our code development lifecycle has influenced how we write commit messages. Let's start with the obvious example. Here is our standard commit message format for Firefox:

Bug 1234 - Fix some feature foo; r=gps

The bug is right there at the front of the commit message. That prominent placement is effectively saying the bug number is the most important detail about this commit - everything else is ancillary.

Now, I'm sure some of you are saying, but Greg, the short description of the change is obviously more important than the bug number. You are right. But we've allowed ourselves to make the bug and the content therein more important than the commit.

Supporting my theory is the commit message content following the first/summary line. That data is almost always - wait for it - nothing: we generally don't write commit messages that contain more than a single summary line. My repository forensics show that less than 20% of commit messages to Firefox in 2014 contain multiple lines (this excludes merge and backout commits). (We are doing better than 2013 - the rate was less than 15% then).

Our commit messages are basically saying, here's a highly-abbreviated summary of the change and a pointer (a bug number) to where you can find out more. And of course loading the bug typically reveals a mass of interleaved comments on various topics, hardly the high-level summary you were hoping was captured in the commit message.

Before I go on, in case you are on the fence as to the benefit of detailed commit messages, please read Phabricator's recommendations on revision control and writing reviewable code. I think both write-ups are terrific and are excellent templates that apply to nearly everyone, especially a project as large and complex as Firefox.

Anyway, there are many reasons why we don't capture a detailed, multi-line commit message. For starters, you aren't immediately rewarded for doing it: writing a good commit message doesn't really improve much in the short term (unless someone yells at you for not doing it). This is a generic problem applicable to all organizations and tools. This is a problem that culture must ultimately rectify. But our tools shouldn't reinforce the disposition towards laziness: they should reward best practices.

I don't believe Bugzilla and our interactions with it do an adequate job rewarding good commit message writing. Chances are your mechanism for posting reviews to Bugzilla or posting the publishing of a commit to Bugzilla (pasting the URL in the simple case) brings up a text box for you to type review notes, a patch description, or extra context for the landing. These should be going in the commit message, as they are the type of high-level context and summarizations of choices or actions that people crave when discerning the history of a repository. But because that text box is there, taunting you with its presence, we write content there instead of in the commit message. Even where tools like bzexport exist to upload patches to Bugzilla, potentially nipping this practice in the bud, it still engages in frustrating behavior like reposting the same long commit message on every patch upload, producing unwanted bug spam. Even a tool that is pretty sensibly designed has an implementation detail that undermines a good practice.

Machine Processing of Patches is Difficult

I have a challenge for you: identify all patches currently under consideration for incorporation in the Firefox source tree, run static analysis on them, and tell me if they meet our code style policies.

This should be a solved problem and deployed system at Mozilla. It isn't. Part of the problem is because we're using Bugzilla for conducting review and doing patch management. That may sound counter-intuitive at first: Bugzilla is a centralized service - surely we can poll it to discover patches and then do stuff with those patches. We can. In theory. Things break down very quickly if you try this.

We are uploading patch files to Bugzilla. Patch files are representations of commits that live outside a repository. In order to get the full context - the result of the patch file - you need all the content leading up to that patch file - the repository data. When a naked patch file is uploaded to Bugzilla, you don't always have this context.

For starters, you don't know with certainty which repository the patch belongs to because that isn't part of the standard patch format produced by Mercurial or Git. There are patches for various repositories floating around in Bugzilla. So now you need a way to identify which repository a patch belongs to. It is a solvable problem (aggregate data for all repositories and match patches based on file paths, referenced commits, etc), albeit one Mozilla has not yet solved (but should).

Assuming you can identify the repository a patch belongs to, you need to know the parent commit so you can apply this patch. Some patches list their parent commits. Others do not. Even those that do may lie about it. Patches in MQ don't update their parent field when they are pushed, only after they are refreshed. You could be testing and uploading a patch with a different parent commit than what's listed in the patch file! Even if you do identify the parent commit, this commit could belong to another patch under consideration that's also on Bugzilla! So now you need to assemble a directed graph with all the patches known from Bugzilla applied. Hopefully they all fit in nicely.

Of course, some patches don't have any metadata at all: they are just naked diffs or are malformed commits produced by tools that e.g. attempt to convert Git commits to Mercurial commits (Git users: you should be using hg-git to produce proper Mercurial commits for Firefox patches).

Because Bugzilla is talking in terms of patch files, we often lose much of the context needed to build nice tools, preventing numerous potential workflow optimizations through automation. There are many things machines could be doing for us (such as looking for coding style violations). Instead, humans are doing this work and costing Mozilla a lot of time and lost developer productivity in the process. (A human costs ~$100/hr. A machine on EC2 is pennies per hour and should do the job with lower latency. In other words, you can operate over 300 machines 24 hours a day for what you pay an engineer to work an 8 hour shift.)

Conclusion

I have outlined a few of the side-effects of using Bugzilla as part of our day-to-day development, review, and landing of changes to Firefox.

There are several takeaways.

First, one cannot argue with the fact that Firefox development is bug(zilla) centric. Nearly every important milestone in the lifecycle of a patch involves Bugzilla in some way. This has its benefits and drawbacks. This article has identified many of the drawbacks. But before you start crying to expunge Bugzilla from the loop completely, consider the benefits, such as a place anyone can go to add metadata or comments on something. That's huge. There is a larger discussion to be had here. But I don't want to be inviting it quite yet.

A common thread between many of the points above is Bugzilla's unstructured and generic handling of code and metadata attached to it (patches, review comments, and landing information). Patches are attachments, which can be anything under the sun. Review comments are plain text comments with simple author, date, and tag metadata. Landings are also communicated by plain text review comments (at least initially - keywords and flags are used in some scenarios).

By being a generic tool, Bugzilla throws away a lot of the rich metadata that we produce. That data is still technically there in many scenarios. But it becomes extremely difficult if not practically impossible for both humans and machines to access efficiently. We lose important context and feedback by normalizing all this data to Bugzilla. This data loss creates overhead and technical debt. It slows Mozilla down.

Fortunately, the solutions to these problems and shortcomings are conceptually simple (and generally applicable): preserve rich context. In the context of patch distribution, push commits to a repository and tell someone to pull those commits. In the context of code review, create sub-reviews for different commits and allow tracking and easy-to-follow (likely threaded) discussions on found issues. Design workflow to be code first, not tool or bug first. Optimize workflows to minimize people time. Lean heavily on machines to do grunt work. Integrate issue tracking and code review, but not too tightly (loosely coupled, highly cohesive). Let different tools specialize in the handling of different forms of data: let code review handle code review. Let Bugzilla handle issue tracking. Let a landing tool handle tracking the state of landings. Use middleware to make them appear as one logical service if they aren't designed to be one from the start (such as is Mozilla's case with Bugzilla).

Another solution that's generally applicable is to refine and optimize the whole process to land a finished commit. Your product is based on software. So anything that adds overhead or loss of quality in the process of developing that software is fundamentally a product problem and should be treated as such. Any time and brain cycles lost to development friction or bugs that arise from things like inadequate code reviews tools degrade the quality of your product and take away from the user experience. This should be plain to see. Attaching a cost to this to convince the business-minded folks that it is worth addressing is a harder matter. I find management with empathy and shared understanding of what amazing tools can do helps a lot.

If I had to sum up the solution in one sentence, it would be: invest in tools and developer happiness.

I hope to soon publish a post on how Mozilla's new code review tool addresses many of the workflow deficiencies present today. Stay tuned.

Categorieën: Mozilla-nl planet

Jen Fong-Adwent: RVLVVR: The Details

ma, 27/10/2014 - 16:00
RVLVVR has nothing to do with Meatspace mechanics (except for the use of websockets) nor does it have anything to do with a Meatspace 3.0 version.
Categorieën: Mozilla-nl planet

Martijn Wargers: T-Dose event in Eindhoven

ma, 27/10/2014 - 15:25
Last Saturday and Sunday, I went to the T-Dose event, manning a booth with my good friend Tim Maks van den Broek.

I met a lot of smart and interesting people:
  • Someone with an NFC chip in his hand! I was able to import his contact details into my FirefoxOS phone by keeping it close to his hand. Very funny to see, but it also felt weird.
  • Bram Moolenaar (VIM author! Note to self, never complain about software in front of people you don't know)
  • Some crazy enthusiastic people about Perl and more specifically, Perl 6. They even brought a camel! (ok, not a live one)


Often asked questions during the event
Where can I buy a phone with Firefox OS, and at what price?
The only phone that you can buy online in the Netherlands that is getting the latest updates is the Flame device. You can order it on everbuying.com. More information on Mozilla Developer Network. Because the phone will be shipped from outside the EU, import duties will be charged, which are probably somewhere around €40.
Can I put Firefox OS on my Android device?
On MDN, there are a couple of devices mentioned, but I wouldn't know how well that works.
Is there a backup tool to backup on your desktop computer?
As far as I know, there is no backup software for the desktop computer that would be able to make a backup of your Firefox OS phone. I know there is a command line tool from Taiwan QA that allows you to back up and restore your profile.
Does Whatsapp work on it?
There is no official Whatsapp, but there is an app called ConnectA2 that allows you to connect with your Whatsapp contacts. I haven't used it, but Tim Maks told me it works reasonably well.


Categorieën: Mozilla-nl planet

Kim Moir: Release Engineering in the classroom

ma, 27/10/2014 - 14:34
The second week of October, I had the pleasure of presenting lectures on release engineering to university students in Montreal as part of the PLOW lectures at École Polytechnique de Montréal.    Most of the students were MSc or PhD students in computer science, with a handful of postdocs and professors in the class as well. The students came from Montreal area universities and many were international students. The PLOW lectures consisted of several invited speakers from various universities and industry spread over three days.

View looking down from the university
Université de Montréal administration building
École Polytechnique building.  Each floor is painted a different colour to represent a different layer of the earth.  So the ground floor is red, the next orange and finally green.
The first day, Jack Jiang from York University gave a talk about software performance engineering. The second day, I gave a lecture on release engineering in the morning. The rest of the day we did a lot of labs to configure a Jenkins server to build and run tests on an open source project. Earlier that morning, I had set up m3.large instances for the students on Amazon that they could ssh into to conduct their labs. Along the way, I talked about some release engineering concepts. It was really interesting and I learned a lot from their feedback. Many of the students had not been exposed to release engineering concepts, so it was fun to share the information.

Several students came up to me during the breaks and said "So, I'm doing my PhD in release engineering, and I have several questions for you", which was fun. Also, some of the students were making extensive use of code bases from Mozilla or other open source projects, so it was interesting to learn more about that. For instance, one research project was looking at the evolution of multi-threading in Mozilla code bases, and another student was conducting Bugzilla comment sentiment analysis. Are angry bug comments correlated with fewer bug fixes? Looking forward to the results of this research!

I ended the day by providing two challenge exercises to the students that they could submit answers to. One exercise was to set up a build pipeline in Jenkins for another open source project. The other challenge was to use the Jenkins REST API to query the Apache projects' Jenkins server and present some statistics on their build history, as sketched below. The results were pretty impressive!
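For the curious, here is a sketch of the kind of query that challenge starts from, assuming the Apache projects' Jenkins server exposes the standard JSON API:

curl -s 'https://builds.apache.org/api/json?tree=jobs[name,color]' | python -m json.tool

From there, a small script can aggregate the results into statistics on build history.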

My slides are on GitHub and the readme file describes how I setup the Amazon instances so Jenkins and some other required packages were installed before hand.  Please use them and distribute them if you are interested in teaching release engineering in your classroom.

Lessons I learned from this experience:
  • Computer science classes focus on writing software, but not necessarily building it is a team environment. So complex branching strategies are not necessarily a familiar concept to some students.  Of course, this depends on the previous work experience of the students and the curriculum at the school they attend. One of students said to me "This is cool.  We write code, but we don't build software".
  • Concepts such as building a pipeline for compilation, correctness/performance/regression testing, packaging and deployment can also be unfamiliar. As I said in the class, the work of the release engineer starts when the rest of the development team thinks they are done :-)
  • When you're giving a lecture and people would point out typos, or ask for clarification I'd always update the repository and ask the students to pull a new version.  I really liked this because my slides were in reveal.js and I didn't have to export a new PDF and redistribute.  Instant bug fixes!
  • Add bonus labs to the material so students who are quick to complete the exercises have more to do while the other students complete the original material.  Your classroom will have people with wildly different experience levels.
The third day there was a lecture by Michel Dagenais of Polytechnique Montréal on tracing heterogeneous cloud instances using LTTng (a tracing framework for Linux). The Eclipse Trace Compass project also made an appearance in the talk. I always like to see Eclipse projects highlighted. One of his interesting points was that none of the companies collaborating on the project wanted to sign a bunch of IP agreements so they could work together behind closed doors - they all wanted to collaborate via an open source community and source code repository. Another thing he emphasized was that students should make their work available on the web, via GitHub or other repositories, so they have a portfolio of work available. It was fantastic to see him promote the idea of students being involved in open source as a way to help their job prospects when they graduate!

Thank you Foutse and  Bram  for the opportunity to lecture at your university!  It was a great experience!  Also, thanks Mozilla for the opportunity to do this sort of outreach to our larger community on company time!

Also, I have a renewed respect for teachers and professors.  Writing these slides took so much time.  Many long nights for me especially in the days leading up to the class.  Kudos to you all who do teach everyday.

References
The slides are on GitHub and the readme file describes how I setup the Amazon instances for the labs
Categorieën: Mozilla-nl planet

Kim Moir: Beyond the Code 2014: a recap

ma, 27/10/2014 - 14:33
I started this blog post about a month ago and didn't finish it because well, life is busy.  

I attended Beyond the Code last September 19. I heard about it several months ago on Twitter. A one-day conference celebrating women in computing, in my home town, with a fantastic speaker line-up? I signed up immediately. In the opening remarks, we were asked for a show of hands to show how many of us were developers, in design or product management, or students, and there was a good representation from all those categories. I was especially impressed by the number of students in the audience; it was nice to see so many of them taking time out of their busy schedules to attend.

View of the Parliament Buildings and Chateau Laurier from the MacKenzie street bridge over the Rideau CanalOttawa Conference Centre, location of Beyond the Code 
There were seven speakers, three workshop organizers, a lunch time activity, and a panel at the end. The speakers were all women. The speakers were not all white women or all heterosexual women. There were many young women, not all industry veterans :-) like me. To see this level of diversity at a tech conference filled me with joy. Almost every conference I go to is very homogeneous in the make-up of the speakers and the audience. To see ~200 tech women at a conference, and 10% men (thank you for attending :-)), was quite a role reversal.

I was completely impressed by the caliber of the speakers. They were simply exceptional.

The conference started out with Kronda Adair giving a talk on Expanding Your Empathy. One of the things that struck me from this talk was how everyone lives in a bubble and doesn't see the things others deal with, due to privilege. She gave the example of how privilege is like a browser, and colours how we see the world. For a straight white guy, a web page looks great when they're running the latest Chrome on MacOSX. For a middle class black lesbian, the web page doesn't look as great because it's like she's running IE7. There is less inherent privilege. For a "differently abled trans person of color" the world is like running IE6 in quirks mode. This was a great example. She also gave a shout out to the Ascend Project, which she and Lukas Blakk are running in Mozilla's Portland office. Such an amazing initiative.

The next speaker was Bridget Kromhout, who gave a talk about Platform Ops in the Public Cloud.
I was really interested in this talk because we do a lot of scaling of our build infrastructure in AWS and wanted to see if she had faced similar challenges. She works at DramaFever, which she described as Netflix for Asian soap operas. The most interesting thing to me was that they use all AWS regions to host their instances, because they want users to download from a region as geographically close to them as possible. At Mozilla, we use only a couple of AWS regions, but more instances than DramaFever, so this was an interesting contrast in the services used. In addition, the monitoring infrastructure they use is quite complex. Her slides are here.

I was going to summarize the rest of the speakers but Melissa Jean Clark did an exceptional job on her blog.  You should read it!

Thank you Shopify for organizing this conference. It was great to meet so many brilliant women in the tech industry! I hope there is an event next year too!

Categorieën: Mozilla-nl planet

Karl Dubost: How to deactivate UA override on Firefox OS

ma, 27/10/2014 - 11:39

Some Web sites do not send the right version of the content to Firefox OS. Some ill-defined server side and client side scripting does not detect Firefox OS as a mobile device and sends the desktop content instead. To fix that, we sometimes define UA overrides for certain sites.

It may improve the life of users, but it has damaging consequences when it's time for Web developers to test the site they are working on. The device downloads the override list from the server at a regular pace. Luckily enough, you can deactivate it through preferences.

Firefox UA Override Preferences

There are two places in Firefox OS where you may store preferences:

  • /system/b2g/defaults/pref/user.js
  • /data/b2g/mozilla/something.default/prefs.js (where something is a unique id)

To change the UA override behavior, you need to set general.useragent.updates.enabled to true (the default) to enable it and to false to disable it. If you put the preference in /system/, each time you update the system the file and its preferences will be crushed and replaced by the update. On the other hand, if you put it in /data/, it will be kept with all the data of your profile.

On the command line, using adb, you can manipulate the preferences:

# Prepare
set -x
adb shell mount -o rw,remount /system
# Local copy of the preferences
adb pull /system/b2g/defaults/pref/user.js /tmp/user.js
# Keep a local copy of the correct file.
cp /tmp/user.js /tmp/user.js.tmp
# Let's check if the preference is set and with which value
grep useragent.updates.enabled /tmp/user.js
# If not set, let's add it
echo 'pref("general.useragent.updates.enabled", true);' >> /tmp/user.js.tmp
# Push the new preferences to Firefox OS
adb push /tmp/user.js.tmp /system/b2g/defaults/pref/user.js
adb shell mount -o ro,remount /system
# Restart
adb shell stop b2g && adb shell start b2g

We created a script to help with this. If you find bugs, do not hesitate to tell us.

Otsukare.

Categorieën: Mozilla-nl planet

Wil Clouser: Migrating off Wordpress

ma, 27/10/2014 - 08:00

This post is a celebration of finishing a migration off of Wordpress for this site and on to flat files, built by Jekyll from Markdown files. I'm definitely looking forward to writing more Markdown and fewer HTML tags.

90% of the work was done by jekyll-import, which roughly pulled my Wordpress data into Markdown files; I then spent a late night with Vim macros and sed massaging it the way I wanted it.

If all I wanted to do was have my posts up, I'd be done, but having the option to accept comments is important to me and I wasn't comfortable using a platform like Disqus because I didn't want to force people to use a 3rd party.

Since my posts only average one or two comments I ended up using a slightly modified jekyll-static-comments to simply put a form on the page and email me any comments (effectively, a moderation queue). If it's not spam, it's easy to create a .comment file and it will appear on the site.

My original goal was to host this all on Github but they only allow a short list of plugins and the commenting system isn't on there so I'll stick with my own host for now.

Please let me know if you see anything broken.

Categorieën: Mozilla-nl planet
