Mozilla Nederland
The Dutch Mozilla community

Tantek Çelik: Happy Winter Solstice 2014! Ready For More Daylight Hours.

Mozilla planet - ma, 22/12/2014 - 03:56

The sun has set here in the Pacific Time Zone on the shortest northern hemisphere day of 2014.

I spent it at home, starting with an eight mile run at an even pace through Golden Gate Park to Ocean Beach and back with a couple of friends, then cooking and sharing brunch with them and a few more.

It was a good way to spend the minimal daylight hours we had: doing positive things, sharing genuinely with positive people who themselves shared genuinely.

Of all the choices we make day to day, I think these may be the most important:

  • What we choose to do with our time
  • Who we choose to spend our time with

These choices are particularly difficult because:

  • So many possibilities
  • So many people will tell you what you should do, and who you should spend time with; often only what they’re told, or to their advantage, not yours.
  • You have to explicitly choose, or others will choose for you.

When you find those who have explicitly chosen to spend time with you, doing positive things, and who appreciate that you have explicitly chosen (instead of being pressured by obligation, guilt, entitlement etc.) to spend time with them, hug them and tell them you’re glad they are there.

I’m glad you’re here.

Happy Winter Solstice and may you spend more of your hours doing positive things, and genuinely sharing (without pressures of obligation, guilt, or entitlement) with those who similarly genuinely share with you.

Here’s to more daylight hours, both physically and metaphorically.

Categorieën: Mozilla-nl planet

Karl Dubost: UA Detection and code libs legacy

Mozilla planet - zo, 21/12/2014 - 22:57

A Web site is a mix of technologies with different lifetimes. The HTML markup has been rock-solid for years. CSS is not bad either, apart from the vendor prefixes. JavaScript, PHP, name-your-language are also not that bad. And then there's the full social and business infrastructure that holds these pieces together. How do we create Web sites which are resilient and robust over time?

Cyclic Release of Web sites

The business infrastructure of Web agencies is set up for the temporary. They receive a request from a client. They create the Web site with the mood of the moment. After 2 or 3 years, the client decides the Web site is not up to the new fashions and trends. The new Web agency (because it is often a new one) claims that it will do a better job. They throw away the old Web site, breaking the old URIs at the same time. They create a new Web site with the technologies of the day. In the best case scenario, they understand that keeping URIs is good for branding and karma. They release the site for the next 2-3 years.

Basically the full Web site has changed in terms of content and technologies, and it works fine in current browsers. It's a 2-3 year release cycle of maintenance.

Understanding Legacy and Maintenance

Web browsers are updated every 6 weeks or so. This is a very fast cycle. They are released with a lot of unstable technologies. Sometimes, entirely new browsers are released with new version numbers and new technologies. Device makers also release new devices very often. This triggers both a consumerist habit and makes it difficult for these devices to last.

Web developers design and focus their code on what is most popular at the moment. Most of the time it's not their fault. These are the requirements from the business team in the Web agency or from the clients. A lot of small Web agencies do not have the resources to invest in automated testing for the Web site. So they focus on two browsers: the one they develop with (the most popular of the moment) and the one the client said was the absolute minimum bar (the most popular of the past).

Libraries of code rely on User Agent detection to cope with the bugs or unstable features of each browser. These libraries know only the past, never the future, not even the now. Libraries of code are often piles of legacy by design. Some are open source, some have license fees attached to them. In both cases, they require a lot of maintenance and testing which is not planned into the budget of a Web site (a budget already blown by the development of the new shiny Web site).

UA detection and Legacy

The Web site will break for some users at some point in time. They chose to use a Web browser which didn't fit in the box of the current Web site. Last week, I went through WPTouch lib Web Compatibility bugs. Basically, Firefox OS was not recognized by the User Agent detection code, and as a result WordPress Web sites didn't send the mobile version to mobile devices. We opened that bug in August 2013. We contacted the BraveNewCode company, which fixed the bug in March 2014. As of today, December 2014, there are still 7 sites in our list of 12 sites which have not switched to the new version of the library.
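
To make the failure mode concrete, here is a minimal sketch of keyword-based UA sniffing (in Python, with an invented token list; WPTouch itself is PHP and its actual pattern differed). A token list frozen at the time a library ships silently misclassifies any browser it has never heard of:

import re

# Hypothetical mobile-detection pattern in the style of older libraries:
# a fixed list of known tokens, frozen when the library shipped.
MOBILE_UA = re.compile(r"iPhone|Android|BlackBerry|IEMobile", re.I)

def is_mobile(user_agent: str) -> bool:
    return bool(MOBILE_UA.search(user_agent))

# Firefox OS advertised "Mobile" but none of the tokens above,
# so it was served the desktop site.
print(is_mobile("Mozilla/5.0 (Mobile; rv:18.0) Gecko/18.0 Firefox/18.0"))  # False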

These were bugs reported by users of these sites. It means people who can't use their favorite browsers for accessing a particular Web site. I'm pretty sure that there are more sites with the old version of WPTouch. Users either think their browser is broken or just don't understand what is happening.

Eventually these bugs will go away. It's one of my axioms in Web Compatibility: wait long enough and the bug goes away. Usually the Web site doesn't exist anymore, or has been redesigned from the ground up. In the meantime some users had a very bad experience.

We need a better story for code legacy, one with fallback, one which doesn't rely only on the past for making it work.


Categorieën: Mozilla-nl planet

Obama condemns murder of 2 police officers - PowNed

Nieuws verzameld via Google - zo, 21/12/2014 - 18:03


Obama condemns murder of 2 police officers

Categorieën: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2014-12-21 Calendar builds

Thunderbird - zo, 21/12/2014 - 15:10

Common (excluding Website bugs)-specific: (37)

  • Fixed: 392028 – Get/Set Calendar prefs from Google (color, title, etc)
  • Fixed: 396496 – install gdata-provider into dist/bin/extensions/ when built with Thunderbird
  • Fixed: 407961 – Google sends Email reminders to all non-google attendees, 24 hours before the event
  • Fixed: 411280 – Make sure EXDATEs are specified in UTC
  • Fixed: 442373 – Implement/Investigate OAuth support
  • Fixed: 461300 – switched off gdata calendar -> invite attendees dialog starts gdata login dialog again
  • Fixed: 467153 – Allow fast, easy configuration of Google Calendar (integrated into lightning)
  • Fixed: 491425 – Snoozed alarm instantly re-fires
  • Fixed: 493389 – Provider for Google Calendar cannot sync tasks.
  • Fixed: 565955 – Login seems to work, but no events are displayed and password isn’t stored in pw manager
  • Fixed: 600065 – Disallow setting invalid reminder values (i.e more than 28 days) for Google Calendar
  • Fixed: 604227 – GData Provider can’t sync shared calendar with access set to “See only free/busy”
  • Fixed: 668321 – Support Google two step verification
  • Fixed: 684482 – Investigate use of gCal:syncEvent to fix invitations
  • Fixed: 775516 – Update to Lightning 1.6: Google calendar entries disappearing from view after some time
  • Fixed: 795851 – Redirects cause dialog asking user if he wants to post his data again
  • Fixed: 867067 – MODIFICATION FAILED when custom reminder is > or = 29 days
  • Fixed: 1043171 – Editing events last selectable category in listbox is shown twice
  • Fixed: 1060363 – Provider for Google Calendar 0.32 – MODIFICATION_FAILED in notify for remind greater then 3 week or 22 day (32768 minutes)
  • Fixed: 1061363 – Syncing Lightning w/GDATA Provider blocks/hangs/freezes UI thread
  • Fixed: 1065612 – Frequently recurring dialog provides almost no useful diagnostic information
  • Fixed: 1073922 – Hard-coded date format in the calendar views
  • Fixed: 1073982 – Add context menu options to show a single calendar and show all calendars
  • Fixed: 1079189 – Code Review for bug 493389 – Provider for Google Calendar cannot sync tasks
  • Fixed: 1080208 – Google Calendar events are showing up twice.
  • Fixed: 1081534 – icaljs is broken in Lightning nightly
  • Fixed: 1082478 – Use application locale when showing OAuth window
  • Fixed: 1083084 – Provider for Google Calendar 1.0.1 fails to download, hash did not match
  • Fixed: 1083934 – Google calendar provider ignores default new event/task reminder setting
  • Fixed: 1085287 – Invalid time zone definition for start time
  • Fixed: 1099869 – Default task start and due dates use the wrong second
  • Fixed: 1100799 – Edit recurrence dialog: recurrence rule preview doesn’t work
  • Fixed: 1103637 – google agendas desappear if not connected
  • Fixed: 1106034 – Lightning 3.6b1 crashes entire Gnome session
  • Fixed: 1106047 – Compatibility for Postbox
  • Fixed: 1107865 – Lightning 3.6b1 does not work with SeaMonkey 2.31
  • Fixed: 1108597 – AddItem() fails when called from native code (stack.filename is null)

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categorieën: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2014-12-21 Thunderbird comm-central builds

Thunderbird - zo, 21/12/2014 - 15:08

Thunderbird-specific: (43)

  • Fixed: 398531 – Find and Replace dialogue: Pressing Enter key with focus outside buttons unexpectedly closes the dialogue: can’t continue to “Find next”
  • Fixed: 628035 – Adding unknown email addresses to Mailing list, then deleting ghost duplicate entries from contacts pane, causes dataloss in mailing list (contacts pane list is not updated/refreshed)
  • Fixed: 731145 – Provide a checkbox for “default offline.autoDetect” option
  • Fixed: 824909 – Print/Print preview of .eml files shows blank on linux – don’t add wrong file association to ~/.local/share/applications/mimeapps.list
  • Fixed: 910293 – TEST-UNEXPECTED-FAIL | newmailaccount/test-newmailaccount.js | test-newmailaccount.js::test_can_display_providers_in_other_languages
  • Fixed: 947616 – Existing SeaMonkey/outlook/oexpress/eudora profile is no longer offered for migration to Thunderbird (import in a first-run situation)
  • Fixed: 1000162 – Bad Tab layout on Beta builds
  • Fixed: 1039003 – Port |Bug 633773 – Use Google’s HTTPS search by default|, |Bug 958883 – Use HTTPS for Yahoo searches| and search plugin parts of |Bug 959576 – Create a component to get the list of priority domains| to Thunderbird
  • Fixed: 1040009 – .mozconfig configure options are ignored if objdir path is absolute
  • Fixed: 1064795 – TEST-UNEXPECTED-FAIL | older-widget/test-message-filters.js | test-message-filters.js::test_customize_toolbar_doesnt_double_get_mail_menu
  • Fixed: 1081190 – Using InstallTrigger gets “NS_ERROR_FACTORY_NOT_REGISTERED” error – need to port bug 926712 to TB and IB
  • Fixed: 1081693 – Port Bug 947507 to thunderbird: Reset intl.charset.detector if not set to off, Japanese, Russian or Ukrainian
  • Fixed: 1082607 – Disable Edit and Remove buttons on Tag Manager when there is no selection
  • Fixed: 1082896 – New mail notification shows garbled sender name when name is encoded.
  • Fixed: 1084973 – Port |Bug 1031352 – share the logic for determining what MSVC DLLs to package| to im, mail, and suite
  • Fixed: 1085205 – Create filter from – Chooses wrong field
  • Fixed: 1091675 – Implement quoting to disable multi-word search in addressbook
  • Fixed: 1099861 – There are some resource path mixing ://gre/modules and ://modules
  • Fixed: 1099866 – Not checking for attachments creates UI problems in Compose window
  • Fixed: 1100152 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/xpcshell/tests/toolkit/components/terminator/tests/xpcshell/test_terminator_reload.js
  • Fixed: 1100380 – [10.10] Use vibrancy in the tabbar and address tabbar styling issues in Yosemite.
  • Fixed: 1100521 – [10.10] Yosemite styling for sidebars across TB.
  • Fixed: 1100534 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/mozmill/folder-display/test-columns.js | test-columns.js::test_persist_columns_gloda_collection
  • Fixed: 1100672 – Port |Bug 990799 – Update search plugins to use rel=”searchform”| to Thunderbird
  • Fixed: 1100951 – Toolbar -> Titlebar gradient doesn’t match in Yosemite.
  • Fixed: 1102013 – mail/installer/ needs to be updated after the cleanup in bug 1096494
  • Fixed: 1102377 – Port Bug 1094303 (Move buildlist invocations into misc tier) for c-c
  • Fixed: 1102588 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/mozmill/content-tabs/test-plugin-crashing.js | test-plugin-crashing.js::test_crashed_plugin_notification_bar
  • Fixed: 1102892 – Passwords gone after update – port bug 1030059 for Thunderbird
  • Fixed: 1104835 – Port bug 1101170 (Move Linux desktop sandboxing code into plugin-container) to fix bustage: Missing file(s): bin/
  • Fixed: 1104931 – unify the 4 identical accountProvisioner.css files
  • Fixed: 1105197 – check-sync-dirs is no longer run over build/ vs mozilla/build
  • Fixed: 1105715 – Add asan.manifests (port bug 886842)
  • Fixed: 1105723 – Port Bug 1067893 – Detect OTOOL in configure and use it for all ‘otool’ calls
  • Fixed: 1106349 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/xpcshell/tests/toolkit/components/search/tests/xpcshell/test_selectedEngine.js
  • Fixed: 1106585 – Port ‘Bug 1094565 – Update sccache to e68dfc2′ to comm-central and comm-aurora to fix failure
  • Fixed: 1107902 – Unable to add accounts when a DWM registry key is missing.
  • Fixed: 1107959 – Port bug 1005456 to thunderbird / instantbird
  • Fixed: 1108057 – Port Bug 1106917 – Content font size setting affects some UI elements too, breaking the layout
  • Fixed: 1108198 – Port bug 639134 to TB – ["Allow pages to choose their own colors" does not work with high contrast windows 7 themes]
  • Fixed: 1108207 – Remove doubled styles in accountProvisioner.css
  • Fixed: 1109057 – does not follow the flake8 conventions
  • Fixed: 1113042 – comm-central build for win64 is note updated after 2014-11-25

MailNews Core-specific: (28)

  • Fixed: 286760 – email address with leading/trailing whitespace in Address book, Account manager, or Composition [ ] displays wrongly with added quotes when composing ["foo"], causing lost mails and malformed “duplicate” AB entry
  • Fixed: 331560 – “Order Received” reversed when copying messages from Local to IMAP account
  • Fixed: 467118 – Switch Integrated Search indexing over to use nsIIdleService instead of message displayed timers
  • Fixed: 846123 – Thunderbird 100% CPU for minutes when copying a large number of messages (IMAP Online Copy, Copy between Offline-Use=Off folders)
  • Fixed: 870556 – O(n^2) performance freezes UI for several minutes fetching new mail from IMAP server for very large folder
  • Fixed: 1008843 – React to the removal of T.61-8bit
  • Fixed: 1008845 – React to the removal of x-johab
  • Fixed: 1025886 – React to the removal of ISO-2022-KR and ISO-2022-CN
  • Fixed: 1026946 – React to the removal of EUC-TW
  • Fixed: 1067807 – React to the removal of non-Encoding Standard DOS encodings
  • Fixed: 1068505 – React to the removal of non-Encoding Standard Mac encoders
  • Fixed: 1068510 – React to the removal of ARMSCII8
  • Fixed: 1069907 – React to the removal of ISO-IR-111
  • Fixed: 1069909 – React to the removal of ISO-8859-6-I and -E and ISO-8859-8-E as Gecko-canonical names
  • Fixed: 1070986 – React to the removal of us-ascii as a Gecko-canonical name
  • Fixed: 1072202 – React to the removal of VISCII, x-viet-tcvn5712 and x-viet-vps
  • Fixed: 1074125 – Avoid duplication of in
  • Fixed: 1082243 – Port |Bug 1077151 – Always use expandlibs descriptors when they exist| to comm-central
  • Fixed: 1085004 – mIsCachable is always true
  • Fixed: 1096109 – Port |Bug 1091383 – Move delayload logic entirely in frontend code| and |Bug 1091384 – Remove EXPAND_LIBNAME and affiliated| to c-c
  • Fixed: 1096778 – Include browser-element.xpt that defines the nsIBrowserElementAPI interface
  • Fixed: 1099430 – Eliminate the duplication of the build system in comm-central
  • Fixed: 1103373 – Add strings for Bug 529697 – (CSP 1.1) Implement form-action directive
  • Fixed: 1106346 – Needs to follow nsInsIAlertsService.showAlertNotification change
  • Fixed: 1109058 – Fix a header guard in mailnews/base/src/nsMsgSearchDBView.h
  • Fixed: 1109061 – Remove some useless variables
  • Fixed: 1111304 – assertion failure loadinfo
  • Fixed: 1112413 – mailnews/compose/src/nsMsgCompose.cpp:1476:28: error: ‘class nsIScriptContext’ has no member named ‘GC

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categorieën: Mozilla-nl planet

François Marier: Mercurial and Bitbucket workflow for Gecko development

Mozilla planet - zo, 21/12/2014 - 09:30

While it sounds like I should really switch to a bookmark-based Mercurial workflow for my Gecko development, I figured that before I do that, I should document how I currently use patch queues and Bitbucket.

Starting work on a new bug

After creating a new bug in Bugzilla, I do the following:

  1. Create a new mozilla-central-mq-BUGNUMBER repo on Bitbucket using the web interface and use the bug's Bugzilla URL as the description.
  2. Create a new patch queue: hg qqueue -c BUGNUMBER
  3. Initialize the patch queue: hg init --mq
  4. Make some changes.
  5. Create a new patch: hg qnew -Ue bugBUGNUMBER.patch
  6. Commit the patch to the mq repo: hg commit --mq -m "Initial version"
  7. Push the mq repo to Bitbucket: hg push ssh://
  8. Make the above URL the default for pull/push by putting this in .hg/patches-BUGNUMBER/.hg/hgrc:

    [paths]
    default = ssh://
    default-push = ssh://
Working on a bug

I like to preserve the history of the work I did on a patch. So once I've got some meaningful changes to commit to my patch queue repo, I do the following:

  1. Add the changes to the current patch: hg qref
  2. Check that everything looks fine: hg diff --mq
  3. Commit the changes to the mq repo: hg commit --mq
  4. Push the changes to Bitbucket: hg push --mq
Switching between bugs

Since I have one patch queue per bug, I can easily work on more than one bug at a time without having to clone the repository again and work from a different directory.

Here's how I switch between patch queues:

  1. Unapply the current queue's patches: hg qpop -a
  2. Switch to the new queue: hg qqueue BUGNUMBER
  3. Apply all of the new queue's patches: hg qpush -a
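
The three steps can be scripted. A minimal sketch, assuming Python 3 and an hg with the mq extension enabled on the PATH (the helper and the bug number are mine, not from the original post):

import subprocess

def hg(*args: str) -> None:
    # Run an hg command in the current repository, failing loudly on error.
    subprocess.check_call(["hg", *args])

def switch_queue(bugnumber: str) -> None:
    hg("qpop", "-a")          # unapply the current queue's patches
    hg("qqueue", bugnumber)   # switch to the new queue
    hg("qpush", "-a")         # apply all of the new queue's patches

switch_queue("123456")  # hypothetical bug number
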
Rebasing a patch queue

To rebase my patch onto the latest mozilla-central tip, I do the following (sketched as a script after the list):

  1. Unapply patches using hg qpop -a
  2. Update the branch: hg pull -u
  3. Reapply the first patch: hg qpush and resolve any conflicts
  4. Update the patch file in the queue: hg qref
  5. Repeat steps 3 and 4 for each patch.
  6. Commit the changes: hg commit --mq -m "Rebase patch"
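
A minimal sketch of the same loop, assuming no patch conflicts along the way (with conflicts, hg qpush stops so you can resolve by hand before continuing):

import subprocess

def rebase_queue() -> None:
    subprocess.check_call(["hg", "qpop", "-a"])  # step 1: unapply patches
    subprocess.check_call(["hg", "pull", "-u"])  # step 2: update the branch
    # Steps 3-5: hg qpush exits non-zero once the series is fully applied.
    while subprocess.call(["hg", "qpush"]) == 0:
        subprocess.check_call(["hg", "qref"])
    # Step 6: record the rebased patches in the mq repo.
    subprocess.check_call(["hg", "commit", "--mq", "-m", "Rebase patch"])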

Thanks to Thinker Lee for telling me about qqueue and Chris Pearce for explaining to me how he uses mq repos on Bitbucket.

Of course, feel free to leave a comment if I missed anything useful or if there's an easier way to do any of the above.

Categorieën: Mozilla-nl planet

Daniel Glazman: Bloomberg

Mozilla planet - zo, 21/12/2014 - 05:56

Welcoming Bloomberg as a new customer of Disruptive Innovations. Just implemented the proposed caret-color property for them in Gecko.

Categorieën: Mozilla-nl planet

Patrick Cloke: The so-called IRC "specifications"

Mozilla planet - za, 20/12/2014 - 21:55

In a previous post I had briefly gone over the "history of IRC" as I know it.  I’m going to expand on this a bit as I’ve come to understand it a bit more while reading through documentation.  (Hopefully it won’t sound too much like a rant, as it is all driving me crazy!)

IRC Specifications

So there’s the original specification (RFC 1459) in May 1993; this was expanded and replaced by four different specifications (RFC 2810, 2811, 2812, 2813) in April 2000.  Seems pretty straightforward, right?


Well, kind of…there's also the DCC/CTCP specifications, which form a separate protocol embedded/hidden within the IRC protocol (i.e. they're sent as IRC messages and parsed specially by clients; the server sees them as normal messages).  DCC/CTCP is used to send files as well as other particular messages (ACTION commands for roleplaying, SED for encrypting conversations, VERSION to get client information, etc.). Anyway, this gets a bit more complicated — it starts with the DCC specification.  This was replaced/updated by the CTCP specification (which fully includes the DCC specification) in 1994.  An "updated" CTCP specification was released in February 1997.  There's also a CTCP/2 specification from October 1998, which was meant to reformulate a lot of the previous three versions.  And finally, there's the DCC2 specification (two parts: connection negotiation and file transfers) from April 2004.
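
To illustrate "embedded/hidden within the IRC protocol": a CTCP message travels inside an ordinary PRIVMSG, wrapped in 0x01 delimiter bytes, so servers relay it unchanged while clients parse it specially. A minimal Python sketch (the wire format comes from the CTCP specification; the helper function is mine):

CTCP_DELIM = "\x01"

def ctcp_action(target: str, text: str) -> str:
    # An ACTION (the roleplaying command mentioned above) is just a PRIVMSG
    # whose body is "\x01ACTION <text>\x01".
    return "PRIVMSG {} :{}ACTION {}{}".format(target, CTCP_DELIM, text, CTCP_DELIM)

print(repr(ctcp_action("#mozilla", "waves")))
# 'PRIVMSG #mozilla :\x01ACTION waves\x01'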

But wait!  I lied…that’s not really the end of DCC/CTCP, there’s also a bunch of extensions to it: Turbo DCC, XDCC (eXtended DCC) in 1993, DCC Whiteboard, and a few other variations of this: RDCC (Reverse DCC), SDD (Secure DCC), DCC Voice, etc.  Wikipedia has a good summary.

Something else to note about the whole DCC/CTCP mess…parts of it just don't have any documentation.  There's none at all for SED (at least that I've found; I'd love to be proved wrong) and very little (really just a mention) for DCC Voice.

So, we’re about halfway through now.  There’s a bunch of extensions to the IRC protocol specifications that add new commands to the actual protocol.


Authentication

Originally IRC had no authentication ability except the PASS command, which very few servers seem to use. A variety of mechanisms have replaced this, including SASL authentication (both PLAIN and BLOWFISH methods, although BLOWFISH isn't documented); SASL itself is covered by at least four RFCs in this situation.  There also seems to be a method called "Auth" which I haven't been able to pin down, as well as Ident (which is a more general authentication protocol I haven't looked into yet).
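
For concreteness, the PLAIN mechanism is simple: RFC 4616 defines the payload as authorization-id, authentication-id, and password joined by NUL bytes, then base64-encoded; servers supporting SASL accept it via an AUTHENTICATE command. A minimal sketch (Python; the capability-negotiation steps around it are omitted):

import base64

def sasl_plain(username: str, password: str) -> str:
    # RFC 4616: authzid NUL authcid NUL password, base64-encoded.
    raw = "\0".join([username, username, password]).encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

print(sasl_plain("alice", "s3cret"))  # sent as: AUTHENTICATE <payload>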

Extension Support

This includes a few specifications that give servers a way to tell their clients exactly what the server supports.  The first of these was RPL_ISUPPORT, which was defined as a draft specification in January 2004 and updated in January 2005.

A similar concept was defined as IRC Capabilities in March 2005.

Protocol Extensions

IRCX, a Microsoft extension to IRC used (at one point) for some of its instant messaging products, exists as a draft from June 1998.

There’s also:


Services

To fill in some of the missing features of IRC, services were created (Wikipedia has a good summary again).  These commonly include ChanServ, NickServ, OperServ, and MemoServ.  Not too hard, but different server packages include different services (or even the same services behaving differently). One of the more common packages is Anope (plus they have awesome documentation, so they get a link).

There was an attempt to standardize how to interact with services called IRC+, which included three specifications: a conference control protocol, an identity protocol and a subscriptions protocol.  I don't believe these are supported widely (if at all).

IRC URL Scheme

Finally this brings us to the IRC URL scheme of which there are a few versions.  A draft from August 1996 defines the original irc: URL scheme.  This was updated/replaced by another draft which defines irc: and ircs: URL schemes.

As of right now that's all that I've found…an awful lot.  Plus it's not all compatible with each other (and sometimes outright contradicts each other).  Often newer specifications say not to support older specifications, but who knows what servers/clients you'll end up talking to!  It's difficult to know what's used in practice, especially since there's an awful lot of IRC servers out there.  Anyway, if someone does know of another specification that I missed, please let me know!

Updated [2014-12-20]
Fixed some dead links. Unfortunately some links now point to the Wayback Machine. There are also copies of most, if not all, of these links in my irc-docs repository. Thanks Ultra Rocks for the heads up!
Categorieën: Mozilla-nl planet

Mozilla owns up to its need for money and its choices for getting it - Numerama

Nieuws verzameld via Google - za, 20/12/2014 - 12:49


Mozilla owns up to its need for money and its choices for getting it
While its 2013 financial results mark a pause in its growth and confirm its very strong dependence on Google, Mozilla reiterated on Friday that the agreements signed with Yandex, Baidu and Yahoo should bring it new revenue, ...
Firefox on iPhone: a first look at the code and a screenshot – iPhoneAddict (Blog)
The best free and must-have software for Windows – 01net

Categorieën: Mozilla-nl planet

Mozilla adds tracking protection to the Android version of Firefox -

Nieuws verzameld via Google - za, 20/12/2014 - 07:56

Mozilla adds tracking protection to the Android version of Firefox
In a test version of Firefox for Android, Mozilla has added a measure intended to protect users' privacy on the web. The tracking protection ensures that known tracking domains are blocked. This involves ...

Categorieën: Mozilla-nl planet

Laura Thomson: 2014: Engineering Operations Year in Review

Mozilla planet - za, 20/12/2014 - 03:00

On the first day of Mozlandia, Johnny Stenback and Doug Turner presented a list of key accomplishments in Platform Engineering/Engineering Operations in 2014.

I have been told a few times recently that people don’t know what my teams do, so in the interest of addressing that, I thought I’d share our part of the list. It was a pretty damn good year for us, all things considered, and especially given the level of organizational churn and other distractions.

We had a bit of organizational churn ourselves. I started the year managing Web Engineering, and between March and September ended up also managing the Release Engineering teams, Release Operations, SUMO and Input Development, and Developer Services. It’s been a challenging but very productive year.

Here’s the list of what we got done.

Web Engineering
  • Migrate crash-stats storage off HBase and into S3
  • Launch Crash-stats “hacker” API (access to search, raw data, reports)
  • Ship fully-localized Firefox Health Report on Android
  • Many new crash-stats reports including GC-related crashes, JS crashes, graphics adapter summary, and modern correlation reports
  • Crash-stats reporting for B2G
  • Pluggable processing architecture for crash-stats, and alternate crash classifiers
  • Symbol upload system for partners
  • Migrate to modern, flexible backend
  • Prototype services for checking health of the browser and a support API
  • Solve scaling problems in Moztrap to reduce pain for QA
  • New admin UI for Balrog (new update server)
  • Bouncer: correctness testing, continuous integration, a staging environment, and multi-homing for high availability
  • Grew Air Mozilla community contributions from 0 to 6 non-staff committers
  • Many new features for Air Mozilla including: direct download for offline viewing of public events, tear out video player, WebRTC self publishing prototype, Roku Channel, multi-rate HLS streams for auto switching to optimal bitrate, search over transcripts, integration with Mozilla Popcorn functionality, and access control based on Mozillians groups (e.g. “nda”)
  • Modeless, explorable UI with all-new JS
  • Case-insensitive searching
  • Proof-of-concept Rust analysis
  • Improved C++ analysis, with lots of new search types
  • Multi-tree support
  • Multi-line selection (linkable!)
  • HTTP API for search
  • Line-based searching
  • Multi-language support (Python already implemented, Rust and JS in progress)
  • Elasticsearch backend, bringing speed and features
  • Completely new plugin API, enabling binary file support and request-time analysis
  • Offline SUMO app in Marketplace
  • SUMO Community Hub
  • Improved SUMO search with Synonyms
  • Instant search for SUMO
  • Redesigned and improved SUMO support forums
  • Improved support for more products in SUMO (Thunderbird, Webmaker, Open Badges, etc.)
  • BuddyUP app (live support for FirefoxOS) (in progress, TBC Q1 2015)
  • Dashboards for everyone infrastructure: allowing anyone to build charts/dashboards using Input data
  • Backend for heartbeat v1 and v2
  • Overhauled the feedback form to support multiple products, streamline user experience and prepare for future changes
  • Support for Loop/Hello, Firefox Developer Edition, Firefox 64-bit for Windows
  • Infrastructure for automated machine and human translations
  • Massive infrastructure overhaul to improve overall quality
Release Engineering
  • Cut AWS costs by over 70% during 2014 by switching builds to spot instances and using intelligent bidding algorithms
  • Migrated all hardware out of SCL1 and closed datacenter to save $1 million per year (with Relops)
  • Optimized network transfers for build/test automation between datacenters, decreasing bandwidth usage by 50%
  • Halved build time on b2g-inbound
  • Parallelized verification steps in release automation, saving over an hour off the end-to-end time required for each release
  • Decommissioned legacy systems (e.g. tegras, tinderbox) (with Relops)
  • Enabled build slave reboots via API
  • Self-serve arbitrary builds via API
  • b2g FOTA updates
  • Builds for open H.264
  • Built flexible new update service (Balrog) to replace legacy system (will ship first week of January)
  • Support for Windows 64 as a first class platform
  • Supported FX10 builds and releases
  • Release support for switch to Yahoo! search
  • Update server support for OpenH264 plugins and Adobe’s CDM
  • Implement signing of EME sandbox
  • Per-checkin and nightly Flame builds
  • Moved desktop firefox builds to mach+mozharness, improving reproducibility and hackability for devs.
  • Helped mobile team ship different APKs targeted by device capabilities rather than a single, monolithic APK.
Release Operations
  • Decreased operating costs by $1 million per year by consolidating infrastructure from one datacenter into another (with Releng)
  • Decreased operating costs and improved reliability by decommissioning legacy systems (kvm, redis, r3 mac minis, tegras) (with Releng)
  • Decreased operating costs for physical Android test infrastructure by 30% reduction in hardware
  • Decreased MTTR by developing a simplified releng self-serve reimaging process for each supported build and test hardware platform
  • Increased security for all releng infrastructure
  • Increased stability and reliability by consolidating single point of failure releng web tools onto a highly available cluster
  • Increased network reliability by developing a tool for continuous validation of firewall flows
  • Increased developer productivity by updating windows platform developer tools
  • Increased fault and anomaly detection by auditing and augmenting releng monitoring and metrics gathering
  • Simplified the build/test architecture by creating a unified releng API service for new tools
  • Developed a disaster recovery and business continuation plan for 2015 (with RelEng)
  • Researched bare-metal private cloud deployment and produced a POC
Developer Services
  • Ship Mozreview, a new review architecture integrated with Bugzilla (with A-team)
  • Massive improvements in hg stability and performance
  • Analytics and dashboards for version control systems
  • New architecture for try to make it stable and fast
  • Deployed treeherder (tbpl replacement) to production
  • Assisted A-team with Bugzilla performance improvements

I’d like to thank the team for their hard work. You are amazing, and I look forward to working with you next year.

At the start of 2015, I’ll share our vision for the coming year. Watch this space!

Categorieën: Mozilla-nl planet

Mozilla releases Firefox 34.0.1 for Android -

Nieuws verzameld via Google - za, 20/12/2014 - 02:52

Mozilla releases Firefox 34.0.1 for Android
Mozilla has released Firefox Mobile 34.0.1 for Android. The new version fixes the possible display of the wrong default search engine for users in American time zones. The update also makes the crash reporter now ...

Categorieën: Mozilla-nl planet

Building a Healthy Web to Hand to Future Generations

Mozilla Blog - za, 20/12/2014 - 02:21
Ten years ago, a scrappy group of ten Mozilla staff and thousands of volunteer Mozillians broke up Microsoft’s monopoly on accessing the Web with the release of Firefox 1.0. We won by bringing together a diverse and global community through … Continue reading
Categorieën: Mozilla-nl planet

Mozilla Fundraising: Thanks to Our Amazing Supporters: A New Goal

Mozilla planet - za, 20/12/2014 - 02:01
We set a goal for ourselves of $1.75 million to raise during our year-end fundraising campaign this year. I’m excited to report that today—thanks to 213,605 donors representing  more than 174 countries who gave an average gift of $8 to … Continue reading
Categorieën: Mozilla-nl planet

Mozilla: Rust programming language 1.0 in spring 2015 -

Nieuws verzameld via Google - vr, 19/12/2014 - 21:24

Mozilla: Rust programming language 1.0 in spring 2015
Rust is a new programming language in the making, in which Mozilla's new rendering engine, itself also under development and going by the name Servo, is being written. Spring should see the arrival of Rust 1.0 ...

Categorieën: Mozilla-nl planet

Chris Pearce: Firefox video playback's skip-to-next-keyframe behavior

Mozilla planet - vr, 19/12/2014 - 20:33
One of the quirks of Firefox's video playback stack is our skip-to-next-keyframe behavior. The purpose of this blog post is to document the tradeoffs skip-to-next-keyframe makes.

The fundamental question that skip-to-next-keyframe answers is, "what do we do when the video stream decode can't keep up with the playback speed?"

Video playback is a classic producer/consumer problem. You need to ensure that your audio and video stream decoders produce decoded samples at a rate no less than the rate at which the audio/video streams need to be rendered. You also don't want to produce decoded samples at a rate too much greater than the consumption rate, or else you'll waste memory.

For example, if we're running on a low end PC, playing a 30 frames per second video, and the CPU is so slow that it can only decode an average of 10 frames per second, we're not going to be able to display all video frames.

This is also complicated by our video stack's legacy threading model. Our first video decoding implementation did the decoding of video and audio streams in the same thread. We assumed that we were using software decoding, because we were supporting Ogg/Theora/Vorbis, and later WebM/VP8/Vorbis, which are only commonly available in software.

The pseudo code for our "decode thread" used to go something like this:
while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) { DecodeSomeAudio(); }
  if (!HaveEnoughVideoDecoded()) { DecodeSomeVideo(); }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) { Sleep(); }
}
This was an unfortunate design, but it certainly made some parts of our code much simpler and easier to write.

We've recently refactored our code, so it no longer looks like this, but for some of the older backends that we support (Ogg, WebM, and MP4 using GStreamer on Linux), the pseudocode is still effectively (but not explicitly or obviously) this. MP4 on Windows, Mac OS X, and Android in Firefox 36 and later now decodes asynchronously, so we are not limited to decoding only on one thread.

The consequence of decoding audio and video on the same thread only really bites on low end hardware. I have an old Lenovo x131e netbook, which on some videos can take 400ms to decode a Theora keyframe. Since we use the same thread to decode audio as video, if we don't have at least 400ms of audio already decoded while we're decoding such a frame, we'll get an "audio underrun". This is where we don't have enough audio decoded to keep up with playback, and so we end up glitching the audio stream. This sounds very jarring to the listener.

Humans are very sensitive to sound; the audio stream glitching is much more jarring to a human observer than dropping a few video frames. The tradeoff we made was to sacrifice the video stream playback in order to not glitch the audio stream playback. This is where skip-to-next-keyframe comes in.

With skip-to-next-keyframe, our pseudo code becomes:

while (!AudioDecodeFinished() || !VideoDecodeFinished()) {
  if (!HaveEnoughAudioDecoded()) { DecodeSomeAudio(); }
  if (!HaveEnoughVideoDecoded()) {
    bool skipToNextKeyframe =
      (AmountOfDecodedAudio < LowAudioThreshold()) ||
      HaveRunOutOfDecodedVideoFrames();
    DecodeSomeVideo(skipToNextKeyframe);
  }
  if (HaveLotsOfAudioDecoded() && HaveLotsOfVideoDecoded()) { Sleep(); }
}

We also monitor how long a video frame decode takes, and if a decode takes longer than the low-audio-threshold, we increase the low-audio-threshold.
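
A minimal sketch of that adaptation (Python; the starting value, growth factor, and cap are invented for illustration and are not Gecko's actual values):

low_audio_threshold_ms = 300.0  # assumed starting value

def on_video_frame_decoded(decode_duration_ms: float) -> None:
    # If one frame took longer to decode than the threshold, raise the
    # threshold so we keep more decoded audio in hand next time.
    global low_audio_threshold_ms
    if decode_duration_ms > low_audio_threshold_ms:
        low_audio_threshold_ms = min(decode_duration_ms * 1.5, 2000.0)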

If we pass a true value for skipToNextKeyframe to the decoder, it is supposed to give up and skip its decode up to the next keyframe. That is, don't try to decode anything between now and the next keyframe.

Video frames are typically encoded as a sequence of full images (called "key frames", "reference frames", or I-frames in H.264) followed by some number of frames which are "diffs" from the key frame (P-frames in H.264 speak). (H.264 also has B-frames, which combine diffs against frames both before and after the current frame, which can lead the encoded stream to be muxed out-of-order.)

The idea here is that we deliberately drop video frames in the hope that we give time back to the audio decode, so we are less likely to get audio glitches.

Our implementation of this idea is not particularly good.

Often on low end Windows machines playing HD videos without hardware accelerated video decoding, you'll get a run of say half a second of video decoded, and then we'll skip everything up to the next keyframe (a couple of seconds), before playing another half a second, and then skipping again, ad nauseam, giving a slightly weird experience. Or in the extreme, you can end up with only the keyframes decoded, or even no frames if we can't get the keyframes decoded in time. Or if it works well enough, you can still get a couple of audio glitches at the start of playback until the low-audio-threshold adapts to a large enough value, and then playback is smooth.

The FirefoxOS MediaOmxReader also never implemented skip-to-next-keyframe correctly; our behavior there is particularly bad. This is compounded by the fact that FirefoxOS typically runs on lower end hardware anyway. The MediaOmxReader doesn't actually skip decode up to the next keyframe, it decodes everything up to the next keyframe. This causes the video decode to hog the decode thread for even longer, which gives the audio decode even less time; that is the exact opposite of what you want. What it should do is skip the demux of video up to the next keyframe, but if I recall correctly there were bugs in the Android platform's video decoder library that FirefoxOS is based on that made this unreliable.

All these issues occur because we share the same thread for audio and video decoding. This year we invested some time refactoring our video playback stack to be asynchronous. This enables backends that support it to do their decoding asynchronously, on their own threads. Since audio then decodes on a separate thread from video, we should have glitch-free audio even when the video decode can't keep up, even without engaging skip-to-next-keyframe. We still need something like skipping the video decode when it falls behind, but it can probably engage less aggressively.

I did a quick test the other day on a low end Windows 8.0 tablet with an Atom Z2760 CPU with skip-to-next-keyframe disabled and async decoding enabled, and although the video decode falls behind and gets out of sync with audio (frames are rendered late) we never glitched audio.

So I think it's time to revisit our skip-to-next-keyframe logic, since we don't need to sacrifice video decode to ensure that audio playback doesn't glitch.

When using async decoding we still need some mechanism like skip-to-next-keyframe to ensure that when the video decode falls behind it can catch up. The existing logic to engage skip-to-next-keyframe also performs that role, but often we enter skip-to-next-keyframe and start dropping frames when video decode actually could keep up if we just gave it a chance. This often happens when switching streams during MSE playback.

Now that we have async decoding, we should experiment with modifying the HaveRunOutOfDecodedVideoFrames() logic to be more lenient, to avoid unnecessary frame drops during MSE playback. One idea would be to only engage skip-to-next-keyframe if we've missed several frames. We need to experiment on low end hardware.
Categorieën: Mozilla-nl planet

Gervase Markham: Global Posting Privileges on the Mozilla Discussion Forums

Mozilla planet - vr, 19/12/2014 - 18:10

Have you ever tried to post a message to a Mozilla discussion forum, particularly one you haven’t posted to before, and received back a “your message is held in a queue for the moderator” message?

Turns out, if you are subscribed to at least one forum in its mailing list form, you get global posting privileges to all forums via all mechanisms (mail, news or Google Groups). If you aren’t so subscribed, you have to be whitelisted by the moderator on a per-forum basis.

If this sounds good, and you are looking for a nice low-traffic list to use to get this privilege, try mozilla.announce.

Categorieën: Mozilla-nl planet

Wladimir Palant: Can Mozilla be trusted with privacy?

Mozilla planet - vr, 19/12/2014 - 18:10

A year ago I would have certainly answered the question in the title with “yes.” After all, who else if not Mozilla? Mozilla has been living the privacy principles which we took for the Adblock Plus project and called our own. “Limited data” is particularly something that is very hard to implement and defend against the argument of making informed decisions.

But maybe I've simply been a Mozilla contributor for way too long and don't see the obvious signs any more. My colleague Felix Dahlke brought to my attention that Mozilla is using Google Analytics and Optimizely (trusted third parties?) on most of their web properties. I cannot really find a good argument why Mozilla couldn't process this data in-house; insufficient resources certainly isn't it.

And then there is Firefox Health Report and Telemetry. Maybe I should have been following the discussions, but I simply accepted the prompt when Firefox asked me — my assumption was that the data collection is anonymous and cannot be used to track the behavior of individual users. I was all the more surprised to read this blog post explaining how useful unique client IDs are for analyzing data. Mind you, not the slightest sign of concern about the privacy invasion here.

Maybe somebody else actually cared? I opened the bug, but the only statement on privacy is far from conclusive — yes, you can opt out, and the user ID will be removed then. However, if you don't opt out (e.g. because you trust Mozilla), you will continue sending data that can be connected to a single user (and ultimately to you). And then there is this old discussion about the privacy aspects of Firefox Health Report, a long and fruitless one it seems.

Am I missing something? Should I be disabling all feedback functionality in Firefox and recommend that everybody else do the same?

Side-note: Am I the only one who is annoyed by the many Mozilla bugs lately which are created without a description and provide zero context information? Are there so many decisions being made behind closed doors, or are people simply too lazy to add a link?

Categorieën: Mozilla-nl planet

Laura Hilliger: Web Literacy Lensing: Identity

Mozilla planet - vr, 19/12/2014 - 17:46


Ever since version 1 of the Web Literacy Map came out, I’ve been waiting to see people take it and adjust it or interpret it for specific educational endeavors that are outside the wheelhouse of “teach the web”. As I’ve said before, I think the web can be embedded into anything, and I want to see the anything embedded into the web. I’ve been wanting to see how people put a lens on top of the Web Literacy Map and combine teaching the web with educating a person around Cognitive Skill X. I’ve had ideas, but never put them out into the world. I was kind of waiting for someone to do it for me (ahem, Web Literacy community :P). Lately I’ve been realizing that I work to develop socio-emotional skills while I teach the web, and I wanted to see if I could look at the Web Literacy Map from a personal, but social (e.g. psychosocial) angle. What, exactly, does web literacy mean in the context of Identity?

Theory

First things first - there’s a media education theory (in this book) suggesting that technology has complicated our “identity”. It’s worth mentioning because it’s interesting, and I think it’s worth noting that I didn’t consider all the nuances of these various identities in thinking about how the Web Literacy Map becomes the Web Literacy Map for Identity. We as human beings have multiple, distinct identities we have to deal with in life. We have to deal with who we are with family vs with friends vs alone vs professionally, regardless of whether or not we are online, but with the development of the virtual space, the theory suggests that identity has become even more complicated. Additionally, we now have to deal with:
  • The Real Virtual: an anonymous online identity that you try on. Pretending to be a particular identity online because you are curious as to how people react to it? That’s not pretending, really, it’s part of your identity that you need answers to curiosities.
  • The Real IN Virtual: an online identity that is affiliated with an offline identity. My name is Laura offline as well. Certain aspects of my offline personality are mirrored in the online space. My everyday identity is (partially) manifested online.
  • The Virtual IN Real: a kind of hybrid identity that you adopt when you interact first in an online environment and then in the physical world. People make assumptions about you when they meet you for the first time. Technology partially strips us of certain communication mannerisms (e.g. Body language, tone, etc), so those assumptions are quite different if you met through technology and then in real life.
  • The Virtual Real: an offline identity from a compilation of data about a particular individual. Shortly: Identity theft.
So, back to the Web Literacy Map: Identity - As you can gather from a single theory about the human understanding of “self”, Identity is a complicated topic anyway. But I like thinking about complicated problems. So here’s my first thinking about how Identity can be seen as a lens on top of the Web Literacy Map. webliteracy-lens-identity Exploring Identity (and the web) Navigation – Identity is personal, so maybe part of web literacy is about personalizing your experience. Perhaps skills become more granular when we talk about putting a lens on the Map? Example granularity: common features of the browser skill might break down into “setting your own homepage” and “pinning apps and bookmarks”. Web Mechanics - I didn’t find a way to lens this competency. It’s the only one I couldn’t. Very frustrating to have ONE that doesn’t fit. What does that say about Web Mechanics or the Web Literacy Map writ large? Search – Identity is manifested, so your tone and mood might dictate what you search for and how you share it. Are you a satirist? Are you funny? Are you serious or terse? Search is a connective competency under this lens because it connects your mood/tone to your manifestation of identity. Example skill modification/addition: Locating or finding desired information within search results ——> using specialized search machines to find desired emotional expression. (e.g. GIPHY!) Credibility – Identity is formed through beliefs and faith, and I wouldn’t have a hard time arguing that those things influence your understanding of credible information. If you believe something and someone confirms your belief, you’ll likely find that person more credible than someone who rejects your belief. Example skill modification/addition: Comparing information from a number of sources to judge the trustworthiness of content ——> Comparing information from a number of sources to judge the trustworthiness of people Security - Identity is influenced heavily by relationships. Keeping other people’s data secure seems like part of the puzzle, and there’s something about the innate need to keep people who have influenced your identity positively secure. I don’t have an example for this one off the top of my head, but it’s percolating. [caption id="attachment_2514" align="aligncenter" width="500"]braindump braindump[/caption] Building Identity (and the web) Composing for the Web, Remixing, and Coding/Scripting allow us to be expressive about our identities. The expression is the WHY of any of this, so directly connected to your own identity. It connects into your personality, motivations, and a mess of thinking skills we need to function in our world. Skills underneath these competencies could be modified to incorporate those emotional and psychological traits of that expression. Design and AccessibilityValues are inseparable from our identities. I think design and accessibility is a competency that radiates a persons values. It’s ok to back burner this if you’re being expressive for the sake of being expressive, but if you have a message, if you are being expressive in an effort to connect with other people (which, let’s face it, is part of the human condition), design and accessibility is a value. Not sure how I would modify the skills… Infrastructure - I was thinking that this one pulled in remembrance as a part of identity. 
Exporting data, moving data, understanding the internet stack and how to adequately use it so that you can keep a record of your or someone else’s online identity has lots of implications for remembrance, which I think influences who we are as much as anything else. Example skill modification/addition: “Exporting and backing up your data from web services” might lead to “Analyzing historical data to determine identity shifts” That's all for now. I've thought a little about the final strand, but I'm going to save it for next year. I would like to hear what you all think. Is this a useful experiment for the Web Literacy Map? Does this kind of thinking help hone in on ways to structure learning activities that use the web? Can you help me figure out what my brain is doing? Happy holidays everyone ;)
Categorieën: Mozilla-nl planet

Gregory Szorc: Why hg.mozilla.org is Slow

Mozilla planet - vr, 19/12/2014 - 15:40

At Mozilla, I often hear statements like "Mercurial is slow." That's a very general statement. Depending on the context, it can mean one or more of several things:

  • My Mercurial workflow is not very efficient
  • hg commands I execute are slow to run
  • hg commands I execute appear to stall
  • The Mercurial server I'm interfacing with is slow

I want to spend time talking about a specific problem: why hg.mozilla.org (the server) is slow.

What Isn't Slow

If you are talking to hg.mozilla.org over HTTP or HTTPS, there should not currently be any server performance issues. Our Mercurial HTTP servers are pretty beefy and are able to absorb a lot of load.

If hg.mozilla.org is slow, chances are:

  • You are doing something like cloning a 1+ GB repository.
  • You are asking the server to do something really expensive (like generating JSON for 100,000 changesets via the pushlog query interface; see the sketch after this list).
  • You don't have a high bandwidth connection to the server.
  • There is a network event.
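
As a concrete example of such an expensive request, the pushlog exposes a JSON query interface that will happily serve very large ranges. A hedged sketch (Python; json-pushes is the pushlog's real JSON endpoint, but the repository and ID range here are arbitrary):

import json
from urllib.request import urlopen

# Each push ID maps to its changesets; a huge ID range means a huge
# response the server must generate, serialize, and transfer.
url = "https://hg.mozilla.org/mozilla-central/json-pushes?startID=0&endID=1000"
with urlopen(url) as resp:
    pushes = json.load(resp)
print(len(pushes), "pushes")
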
Previous Network Events

There have historically been network capacity issues in the datacenter where hg.mozilla.org is hosted (SCL3).

During Mozlandia, excessive traffic to ftp.mozilla.org essentially saturated the SCL3 network. During this time, requests to hg.mozilla.org were timing out: Mercurial traffic just couldn't traverse the network. Fortunately, events like this are quite rare.

Up until recently, Firefox release automation was effectively overwhelming the network by doing some clownshoesy things.

For example, gaia-central was being cloned all the time. We had a ~1.6 GB repository being cloned over a thousand times per day. We were transferring close to 2 TB of gaia-central data out of Mercurial servers per day.

We also found issues with pushlogs sending 100+ MB responses.

And the build/tools repo was getting cloned for every job. Ditto for mozharness.

In all, we identified a few terabytes of excessive Mercurial traffic that didn't need to exist. This excessive traffic was saturating the SCL3 network and slowing down not only Mercurial traffic, but other traffic in SCL3 as well.

Fortunately, people from Release Engineering were quick to respond to and fix the problems once they were identified. The problem is now firmly under control. However, given the scale of Firefox's release automation, any new system that comes online and talks to version control is susceptible to causing server outages. I've already raised this concern when reviewing some TaskCluster code. The thundering herd of automation will be an ongoing concern. But I have plans to further mitigate the risk in 2015. Stay tuned.

Looking back at our historical data, it appears that we hit these network saturation limits a few times before we reached a tipping point in early November 2014. Unfortunately, we didn't realize this because up until recently, we didn't have a good source of data coming from the servers. We lacked the tooling to analyze what we had. We lacked the experience to know what to look for. Outages are effective flashlights. We learned a lot and know what we need to do with the data moving forward.

Available Network Bandwidth

One person pinged me on IRC with the comment "Git is cloning much faster than Mercurial." I asked for timings, and the Mercurial clone wall time for Firefox was much higher than I expected.

The reason was network bandwidth. This person was performing a Git clone between 2 hosts in EC2 but was performing the Mercurial clone between hg.mozilla.org and a host in EC2. In other words, they were partially comparing the performance of a 1 Gbps network against a link over the public internet! When they did a fair comparison by removing the network connection as a variable, the clone times rebounded to what I expected.

The single-homed nature of hg.mozilla.org in a single datacenter in northern California is not only bad for disaster recovery reasons, it also means that machines far away from SCL3 or connecting to SCL3 over a slow network aren't getting optimal performance.

In 2015, expect us to build out a geo-distributed hg.mozilla.org so that connections hit a server that is closer and thus faster. This will probably be targeted at Firefox release automation in AWS first. We want those machines to have a fast connection to the server and we want their traffic isolated from the servers developers use so that hiccups in automation don't impact the ability for humans to access and interface with source code.

NFS on SSH Master Server

If you connect to hg.mozilla.org over HTTP or HTTPS, you are hitting a pool of servers behind a load balancer. These servers have repository data stored on local disk, where I/O is fast. In reality, most I/O is serviced by the page cache, so local disks don't come into play.

If you connect to ssh://hg.mozilla.org, you are hitting a single, master server. Its repository data is hosted on an NFS mount. I/O on the NFS mount is horribly slow. Any I/O intensive operation performed on the master is much, much slower than it should be. Such is the nature of NFS.

We'll be exploring ways to mitigate this performance issue in 2015. But it isn't the biggest source of performance pain, so don't expect anything immediately.

Synchronous Replication During Pushes

When you hg push to hg.mozilla.org, the changes are first made on the SSH/NFS master server. They are subsequently mirrored out to the HTTP read-only slaves.

As is currently implemented, the mirroring process is performed synchronously during the push operation. The server waits for the mirrors to complete (to a reasonable state) before it tells the client the push has completed.

Depending on the repository, the size of the push, and server and network load, mirroring commonly adds 1 to 7 seconds to push times. This is time when a developer is sitting at a terminal, waiting for hg push to complete. The time for Try pushes can be larger: 10 to 20 seconds is not uncommon (but fortunately not the norm).

The current mirroring mechanism is overly simple and prone to many failures and sub-optimal behavior. I plan to work on fixing mirroring in 2015. When I'm done, there should be no user-visible mirroring delay.

Pushlog Replication Inefficiency

Up until yesterday (when we deployed a rewritten pushlog extension), the replication of pushlog data from the master to the slave servers was very inefficient. Instead of transferring a delta of pushes since the last pull, we were literally copying the underlying SQLite file across the network!

Try's pushlog is ~30 MB. mozilla-central and mozilla-inbound are in the same ballpark. 30 MB x 10 slaves is a lot of data to transfer. These operations were capable of periodically saturating the network, slowing everyone down.

The rewritten pushlog extension performs a delta transfer automatically as part of hg pull. Pushlog synchronization now completes in milliseconds while commonly only consuming a few kilobytes of network traffic.

Early indications reveal that deploying this change yesterday decreased the push times to repositories with long push history by 1-3s.


Try

Pretty much any interaction with the Try repository is guaranteed to have poor performance. The Try repository is doing things that distributed version control systems weren't designed to do. This includes Git.

If you are using Try, all bets are off. Performance will be problematic until we roll out the headless try repository.

That being said, we've made changes recently to make Try perform better. The median time for pushing to Try has decreased significantly in the past few weeks. The first dip in mid-November was due to upgrading the server from Mercurial 2.5 to Mercurial 3.1 and converting Try to use generaldelta encoding. The dip this week came from merging all heads and from deploying the aforementioned pushlog changes. Pushing to Try is now significantly faster than 3 months ago.


Conclusion

Many of the reasons for slowness are known. More often than not, they are due to clownshoes or inefficiencies on Mozilla's part rather than fundamental issues with Mercurial.

We have made significant progress at making hg.mozilla.org faster. But we are not done. We are continuing to invest in fixing the sub-optimal parts and making hg.mozilla.org faster yet. I'm confident that within a few months, nobody will be able to say that the hg.mozilla.org servers are a source of pain like they have been for years.

Furthermore, Mercurial is investing in features to make the wire protocol faster, more efficient, and more powerful. When deployed, these should make pushes faster on any server. They will also enable workflow enhancements, such as Facebook's experimental extension to perform rebases as part of push (eliminating push races and having to manually rebase when you lose the push race).

Categorieën: Mozilla-nl planet