
The Servo Blog: This Week In Servo 106

Mozilla planet - Mon, 05/03/2018 - 01:30

Windows nightlies no longer crash on startup! Sorry about the long delay in reverting the change that originally triggered the crash.

In the last week, we merged 70 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions
  • nox removed more ToCss implementations by deriving them.
  • paul added a URL prompt to allow navigating pages in nightly builds.
  • manish fixed a panic that appeared on Wikipedia due to the use of rowspan and colspan.
  • ajeffrey avoided a deadlock caused by IPC channels on certain pages.
  • gw adjusted the behaviour of clipped blend operations.
  • emilio improved the behaviour of iterating over CSS longhand properties in the style system.
  • manish implemented rowspan support for tables.
  • alexfjw improved the performance of some operations that check computed display values.
  • emilio made the style system respect conditionally-enabled CSS properties better.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Cameron Kaiser: And now for something completely different: Make that Power Mac into a radio station (plus: the radioSHARK tank and AltiVec + LAME = awesome)

Mozilla planet - Sun, 04/03/2018 - 03:17
As I watch Law and Order reruns on my business trip, first, a couple of followups. The big note is that it looks like Intel and some ARM cores aren't the only ones vulnerable to Meltdown; Raptor Computing Systems confirms that Meltdown affects at least POWER7 through POWER9 as well, and the Talos II has already been patched. It's not clear if this is true for POWER4 (which would include the G5) through POWER6, as these processor generations have substantial microarchitectural differences. However, it doesn't change anything for the G3 and 7400: since they appear to be immune to Spectre-type attacks, they must also be immune to Meltdown. As a practical matter, though, unless you're running an iffy program locally, there is no known JavaScript vector that successfully exploits Spectre (let alone Meltdown) on Power Macs, even on the 7450 and G5, which are known to be vulnerable to Spectre.

Also, the TenFourFox Downloader is now live. After only a few days up with no other promotion, it's pulling down about 200 downloads a day. I note that some small number are current TenFourFox users, which isn't really what this is intended for: the Downloader is unavoidably -- and in this case, also unnecessarily -- less secure, and just consumes bandwidth on Floodgap downloading a tool to download something the browser can just download directly. If you're using TenFourFox already (at least 38 or later), please just download upgrades with the browser itself. In addition, some are Intel Mac users on 10.6 and earlier, which the Downloader intentionally won't grab for because we don't support them. Nevertheless, the Downloader is clearly accomplishing its goal, which is important given that many websites won't be accessible to Power Mac users anymore without it, so it will be a permanent addition to the site.

Anyway, let's talk about Power Macs and radios. I'm always fond of giving my beloved old Macs new things to do, so here's something you can think about for that little G4 Mac mini you tossed in the closet. Our 2,400 square foot house has a rather curious floor plan: it's a typical California single-floor ranch but configured as a highly elongated L-shape along the bottom and right legs of the property's quadrilateral. If I set something playing somewhere in the back of the house, you probably won't hear it very well even just a couple of rooms away. The usual solution is to buy something like a Sonos, which is convenient and easy to operate, but streaming devices like that can have synchronization issues and they are definitely not cheap.

But there's another solution: set up a house FM transmitter. With a little spare time and the cost of the transmitter (mine cost $125), you can devise a scheme that turns any FM radio inside your house into a remote speaker with decent audio quality. These transmitters are larger and better engineered than those cheapo little FM transmitters you might use in a car; the additional power allows the signal to travel through walls, and with careful calibration it can cover even a relatively large property. Best of all, adding additional drops is just the cost of another radio (instead of an expensive dedicated receiver), and because it's broadcast, everything is in perfect sync. If your phone has an FM radio you can even listen to your home transmitter on that!

There are some downsides to this approach, of course. One minor downside is because it's broadcast, your neighbours could tune in (don't play your potentially embarrassing, uh, "home movie" audio soundtracks this way). Another minor downside is that the audio quality is decent but not perfect. The transmitter is in your house, so interference is likely to be less, but things as simple as intermittently energized electrical circuits, bad antenna positioning, etc., can all make reception sometimes maddeningly unpredictable. If you're an uncompromising audiophile, or you need more than two-channel audio, you're just going to have to get a dedicated streaming system.

The big one, though, is that you are now transmitting on a legally regulated audio band without a license. The US Federal Communications Commission has provisions under Part 15 for unlicensed AM/FM transmission which limit your signal to an effective distance of just 200 feet. There are more specific regulations about radiated signal strength, but the rule of thumb I use is that if you can detect a usable signal at your property line you are probably already in violation (and you can bet I took a lot of samples when I was setting this up). The FCC doesn't generally drive around residential neighbourhoods with a radio detector van and no one's going to track down a signal no one but you can hear, but if your signal leaks off your property it only takes one neighbourhood busybody with a scanner and nothing better to do to complain and initiate an investigation. Worse, if you transmit on the same frequency as an actually licensed local station and meaningfully interfere with their signal, and they detect it (and if it's meaningful interference, I guarantee you they will sooner or later), you're in serious trouble. The higher the rated wattage for your transmitter, the greater the risk you run of getting busted, especially if you are in a densely populated area. If you ever get a notice of violation, take it seriously, take your transmitter completely offline immediately, and make sure you tell the FCC in writing you turned it off. Don't turn it back on again until you're sure you're in compliance or you may be looking at a fine of up to $75,000. If you're not in the United States, you'd better know what the law is there too.

So let's assume you're confident you're in (or can be in) compliance with your new transmitter, which you can be easily with some reasonable precautions I'll discuss in a moment. You could just plug the transmitter into a dedicated playback device, and some people do just that, but by connecting the transmitter to a handy computer you can do so many other useful things. So I plugged it into my Sawtooth G4 file server, which lives approximately in the middle of the house in the dedicated home server room:

There it is, the slim black box with the whip antenna coming off the top sandwiched between the FireWire hub (a very, very useful device and much more reliable than multiple FireWire controllers) and the plastic strut the power strip is mounted on. This is the Whole House FM Transmitter 3.0 ("WHFT3"), which can be powered off USB or batteries (portable!), has mic and line-level inputs (though in this application only line input is connected), includes both rubber duck and whip antennas (a note about this presently) and retails for about $125. Amazon carries it too (I don't get a piece of any sales, I'm just a satisfied customer). It can crank up to around 300 milliwatts, which may not seem like much to the uninitiated, but easily covers the 100 foot range of my house and is less likely to be picked up by nosy listeners than some of the multi-watt Chinese import RF blowtorches they sell on eBay (for a point of comparison, a typical handheld ham radio emits around 5 watts). It also has relatively little leakage, meaning it is unlikely to be a source of detectable RF interference when properly tuned.

By doing it this way, the G4, which is ordinarily just acting as an FTP and AFP server, now plays music from playlists and the audio is broadcast over the FM transmitter. How you decide to do this is where the little bit of work comes in, but I can well imagine just having MacAmp Lite X or Audion running on it and you can change what's playing over Screen Sharing or VNC. In my case, I wrote up a daemon to manage playlists and a command-line client to manipulate it. 10.5+ offers a built-in tool called afplay to play audio files from the command line, or you can use this command line playback tool for 10.2 through 10.4. The radio daemon uses this tool (the G4 server runs Tiger) to play each file in the selected folder in order. I'll leave writing such a thing to the reader since my radio daemon has some dependencies on the way my network is configured, but it's not very complex to devise in general.

Either way works fine, but you also need to make sure that the device has appropriate signal strength and input levels. The WHFT3 allows you to independently adjust how much strength it transmits with a simple control on the side; you can also adjust the relative levels for the mic and line input if you are using both. (There is a sorta secret high-level transmission mode you can enable which I strongly recommend you do not: you will almost certainly be out of FCC compliance if you do. Mine didn't need this.) You should set this only as high as necessary to get good reception where you need it, which brings us to making sure the input level is also correct, as the WHFT3 is somewhat more prone to a phenomenon called over-modulation than some other devices. This occurs when the input level is too high and manifests as distortion or clipping but only when audio is actually playing.

To calibrate my system, I first started with a silent signal. Since the frequency I chose had no receivable FM station in my region of greater Los Angeles (and believe me, finding a clear spot on the FM dial is tough in the Los Angeles area), I knew that I would only hear static on that frequency. I turned on the transmitter with no input using the "default" rubber duck antenna and went around the house with an FM radio with its antenna fully retracted. When I heard static instead of nothing, I knew I was exceeding the transmission range, which gave me an approximate "worst case" distance for inside the house. I then walked around the property line with the FM radio and its antenna fully extended this time for a "within compliance" test. I only picked up static outside the house, but inside I couldn't get enough range in the kitchen even with the transmitter cranked up all the way, so I ended up switching the rubber duck antenna for the included whip antenna. The whip is not the FCC-approved configuration (you are warned), but got me the additional extra range, and I was able to back down the transmitter strength and still be "neighbour proof" at the property line. This is also important for audio quality since if you have the transmitter power all the way up the WHFT3 tends to introduce additional distortion no matter what your input level is.

Next was to figure out the appropriate input level. I blasted Bucko and Champs Australian Christmas music and backed down the system volume on the G4 until there was no distortion for the entire album (insert your own choice of high volume audio here such as Spice Girls or Anthrax), and checked the new level a few times with a couple other albums until I was satisfied that distortion and overmodulation was at a minimum. Interestingly, while you can AppleScript setting the volume in future, what you get from osascript -e 'set ovol to output volume of (get volume settings)' is in different units than what you feed to osascript -e 'set volume X': the first returns a number from 0-100 with 14 unit steps, but the second expects a number from 1-10 in 0.1 unit steps. The volume on my G4 is reported by AppleScript as "56" but I set that on startup in a launchd startup item with a volume value of 4.0 (i.e., 4 times 14 equals 56). Don't ask me why Apple did it this way.

There were two things left to do. First was to build up a sufficient library of music to play from the file server, which (you may find this hard to believe) really is just a file server and handles things like backups and staging folders, not a media server. There are many tools like the most excellent X Lossless Decoder utility -- still Tiger and PowerPC compatible! -- which will rip your CDs into any format you like. I decided on MP3 since the audio didn't need to be lossless and they were smaller, but most of the discs I cared about were already ripped in lossless format on the G5, so it was more a matter of transcoding them quickly. The author of XLD makes the AltiVec-accelerated LAME encoder he uses available separately, but this didn't work right on 10.4, so I took his patches against LAME 3.100, tweaked them further, restored G3 and 10.4 compatibility, and generated a three-headed binary that selects for G3, G4 and a special optimized version for G5. You can download LAMEVMX here, or get the source code from Github.

On the G5 LAMEVMX just tears through music at around 25x to as much as 30x playback speed, over three times as fast as the non-SIMD version. I stuck the MP3 files on a USB drive and plugged that into the Sawtooth so I didn't have to take up space on its main RAID, and the radio daemon iterates off that.

The second was figuring out some way to use my radios as, well, radios. Yes, you could just tune them to another station and then tune them back, but I was lazy, and when you get an analogue tuner set at that perfect point you really don't want to have to do it again over and over. Moreover, I usually listen to AM radio, not FM. One option is to see if they stream over the Internet, which may even be better quality, though receiving them over the radio eliminates having to have a compatible client and any irregularities with your network. With a little help from an unusual USB device, you can do that too:

This is the Griffin radioSHARK, which is nothing less than a terrestrial radio receiver bolted onto a USB HID. It receives AM and FM and transmits back to the Mac over USB audio or analogue line-level out. How do we hook this up to our Mac radio station? One option is to just connect its audio output directly, but you should have already guessed I'd rather use the digital output over USB. While you can use Griffin's software to tune the radio and play it through (which is even AppleScript-able, at least version 2), it's PowerPC-only and won't run on 10.7+ if you're using an old Intel Mac for this purpose, and I always prefer to do this kind of thing programmatically anyhow.

For the tuner side, enterprising people on the Linux side eventually figured out how to talk to the HID directly and thus tune the radio manually (there are two different protocols for the two versions of the radioSHARK; more on this in a moment). I combined both protocols together and merged it with an earlier but more limited OS X utility, and the result is radioSH, a commandline radio tuner. (You can also set the radioSHARK's fun blue and red LEDs with this tool and use it as a cheapo annunciator device. Read the radioSH page for more on that.) I compiled it for PowerPC and 32-bit Intel, and the binary runs on anything from 10.4 to 10.13 until Apple cuts off 32-bit binary compatibility. The source code is available too.

For USB audio playthru, any USB audio utility will suffice, such as LineIn (free, PowerPC compatible) or SoundSource (not free, not PowerPC compatible), or even QuickTime Player with a New Audio Recording and the radioSHARK's USB audio output as source. Again, I prefer to do this under automatic control, so I wrote a utility using the MTCoreAudio framework to do the playback in the background. (Use this source file and tweak appropriately for your radioSHARK's USB audio endpoint UID.) At this point, getting the G4 radio station to play the radio was as simple as adding code to the radio daemon to tune the radio with radioSH and play the USB audio stream through the main audio output using that background tool when a playlist wasn't active (and to turn off the background streamer when a playlist was running). Fortunately, USB playthru uses very little CPU even on this 450MHz machine.

I mentioned there are two versions of the radioSHARK, white (v1) and black (v2), which have nearly completely different hardware (as betrayed by their completely different HID protocols). The black radioSHARK is very uncommon. I've seen some reports that there are v1 white units with v2 black internals, but of the three white radioSHARKs I own, all of them are detected as v1 devices. This makes a difference because while neither unit tunes AM stations particularly well, the v1 seems to have poorer AM reception and more distortion, and the v2 is less prone to carrier hum. To get the AM stations I listen to more reliably with better quality, I managed to track down a black radioSHARK and stuck it in the attic:

To improve AM reception really all you can do is rotate or reposition the receiver and the attic seemed to get these stations best. A 12-foot USB extension cable routes back to the G4 radio station. The radioSHARK is USB-powered, so that's the only connection I had to run.

To receive the radio on the Quad G5 while I'm working, I connected one of the white radioSHARKs (since it's receiving FM, there wasn't much advantage to trying to find another black unit). I tune it on startup with radioSH to the G4 and listen with LineIn. Note that because it's receiving the radio signal over USB there is a tiny delay and the audio is just a hair out of sync with the "live" analogue radios in the house. If you're mostly an Intel Mac house, you can of course do the same thing with the same device in the same way (on my MacBook Air, I use radioSH to tune and play the audio in QuickTime Player).

For a little silliness I added a "call sign" cron job that uses /usr/bin/say to speak a "station ID" every hour on the hour. The system just mixes it over the radio daemon's audio output, so no other code changes were necessary. There you go, your very own automatic G4 radio station in your very own house. Another great use for your trusty old Power Mac!

Oh, one more followup, this time on Because I Got High Sierra. My mother's Mac mini, originally running Mavericks, somehow got upgraded to High Sierra without her realizing it. The immediate effect was to make Microsoft Word 2011 crash on startup (I migrated her to LibreOffice), but the delayed effect was, on the next reboot (for the point update to 10.13.2), this alarming screen:

The system wouldn't boot! On every startup it would complain that "macOS could not be installed on your computer" and "The path /System/Installation/Packages/OSInstall.mpkg appears to be missing or damaged." Clicking Restart just caused the same message to appear.

After some cussing and checking that the drive was okay in the Recovery partition, the solution was to start in Safe Mode, go to the App Store and force another system update. After about 40 minutes of chugging away, the system grudgingly came up after everything was (apparently) refreshed. Although some people with this error message reported that they could copy the OSInstall.mpkg file from some other partition on their drive, I couldn't find such a file even in the Recovery partition or anywhere else. I suspect the difference is that these people encountered this error immediately after "upgrading" to Because I Got High Sierra, while my mother's computer encountered this after a subsequent update. This problem does not appear to be rare. It doesn't seem to have been due to insufficient disk space or a hardware failure and I can't find anything that she did wrong (other than allowing High Sierra to install in the first place). What would she have done if I hadn't been visiting that weekend, I wonder? On top of all the other stupid stuff in High Sierra, why do I continue to waste my time with this idiocy?

Does Apple even give a damn anymore?


Chris AtLee: Taskcluster migration update, the sequel

Mozilla planet - Sat, 03/03/2018 - 13:26
Firefox, now 100% buildbot-free!

First, the good news - Developer Edition 60.0b1 will be the first release in nearly 10 years done without using buildbot. This is an amazing milestone, and I'm incredibly proud of everybody who has contributed to make this possible!

Long time, no update

How did we get here? It's been, uh, almost 6 months since I last posted an update about our migration to Taskcluster.

In my last update, I described our plans for the end of 2017...

We're on track to ship builds produced in Taskcluster as part of the 56.0 release scheduled for late September. After that the only Firefox builds being produced by buildbot will be for ESR52. Meanwhile, we've started tackling the remaining parts of release automation. We prioritized getting nightly and CI builds migrated to Taskcluster; however, there are parts of the release process still implemented in Buildbot. We're aiming to have release automation completely migrated off of buildbot by the end of the year. We've already seen many benefits from migrating CI to Taskcluster, and migrating the release process will realize many of those same benefits.

How'd we do?

We're past the end of 2017, so how are we doing?

Well, we successfully shipped 56.0 with builds produced in Taskcluster. Our big Firefox Quantum release (57.0) was also shipped with builds produced by Taskcluster.

(side note: 57 had the most complex update scenarios we've ever had to support for Firefox...a subject for another post!)

Release scheduling

Post-56.0, our release process was using Taskcluster exclusively for producing the initial builds, and all the release process scheduling. We were still using Buildbot for many of the post-build tasks, like l10n repacks, publishing updates, pushing files to S3, etc. Once again we relied on the buildbot bridge to allow us to integrate existing buildbot components with the newer taskcluster pipeline. I learned from Kim Moir that this is a great example of the strangler pattern.

In the fall of 2017, we decided to begin migrating all of the scheduling logic for release automation into taskcluster using the in-tree taskgraph scheduling system. We did this for a few reasons...

  1. Having the release scheduling logic ride the trains is much more maintainable. Previous to this we had an externally defined release pipeline in our releasetasks repo. It was hard to keep this repository in sync with changes required for beta/release and ESR branches.

  2. More importantly, having the release scheduling logic in-tree meant that we could then rely on chain-of-trust to verify artifacts produced by the release pipeline.

  3. We felt that having the complete release pipeline defined in taskcluster would make it easier for us to tackle the remaining buildbot bridge tasks in parallel.

We hit this milestone in the 58 cycle. Starting with 58.0b3, Firefox and Fennec releases were completely scheduled using the in-tree taskgraph generation. We also migrated over the l10n repacks at the same time, removing a longstanding source of problems where repacks would fail when we first got to beta due to environmental differences between taskcluster and buildbot.

No-BBB Releases

Still, as of 58, much of release automation still ran on buildbot, even if Taskcluster was doing all the scheduling.

Since December, we've been working on removing these last few pieces of buildbot from the release process. Progress was initially a bit slow, given Austin and Christmas, but we've been hard at work in the new year.

That brings us to today.

We've moved uptake monitoring, update verify (and made it 2x faster too!), update submission, final verify, bouncer submission, version bumping and tagging, and balrog submission all to run in Taskcluster via various kinds of scriptworkers.

As I mentioned above, DevEdition 60.0b1 will be the first release in nearly 10 years done without using buildbot. The rest of the 60 release cycle will follow suit, and once 60 hits the release channel, only ESR52 will remain on buildbot!


K Lars Lohn: Things Gateway - Part 5

Mozilla planet - Fri, 02/03/2018 - 18:00
In Part 4 of this series, I showed how to link the Things Gateway with a quartet of Philips Hue bulbs via the Hue Bridge.  There are advantages and disadvantages to using the Hue Bridge.  On the plus side, the Hue Bridge enables the mobile device app, a mature controller for Hue lights with plenty of bells and whistles.  On the downside, the Hue Bridge is an Internet capable device, and I'm just not sure I can trust that.

My experience shows that if you purchase a Hue bulb packaged without a Hue Bridge, you can pair the light directly with a Zigbee adapter.  This means that the light works like any Zigbee compatible bulb.

Purchasing Hue bulbs in a Starter Kit can be a less expensive way to buy them.  Starter Kits include three or four bulbs along with a Hue Bridge.  However, the bulbs are effectively locked to the Hue Bridge that came in the package.  To use a bulb without the Hue Bridge, it first needs liberation.  Fortunately, this is not as complicated and perilous as jailbreaking a cell phone.  Ironically, Philips sells the tool to do the unlocking disguised as their own remote Hue Dimmer Switch.

Caveat emptor:  If you really want to avoid the Hue Bridge, compare the costs of buying single bulbs versus Starter Kits, factoring in the cost of the Hue Dimmer Switch.  The only way that Starter Kits saved money was when I found them on sale.  At their full price, they are a dubious bargain.

Goal: Get the Things Gateway to control Hue light bulbs without the use of a Hue bridge. Maintain total local control with no component communicating outside to the Internet.
To reproduce everything that I'm doing here today, you'll need these things:

Requirements & Parts List:
Item | What's it for? | Where I got it
The Raspberry Pi and associated hardware from Part 2 of this series | This is the base platform that we'll be adding onto | From Part 2 of this series
DIGI XStick | This allows the Raspberry Pi to talk the ZigBee protocol - there are several models, make sure you get the XU-Z11 model | The only place that I could find this was Mouser Electronics
Philips Hue White & Color Ambiance bulb | To demonstrate use of a Hue bulb without any extra parts or incantations | Home Depot
Philips Hue White & Color Ambiance Starter Kit | To demonstrate unlocking Hue bulbs from their associated Hue Bridge | Amazon
Philips Hue Dimmer Switch | The magic key that can unlock Hue bulbs | Amazon
Step 1: I'm assuming that we're starting with the Things Gateway configured for the Zigbee adapter.  See Part 2 of this series for instructions.  Remember to update the Zigbee add-on as specified in the instructions.

Step 2: First we're going to just show that we can, with no fuss, use a Philips Hue bulb that was not packaged with a Hue Bridge.
From the Things pane, press the "+" button.
Plug in your Hue light and you should see the bulb detected.  Press "Save" and "Done".
You now have full control of a Hue bulb in the Things Gateway.  You can repeat this step with any Hue bulbs that were purchased without a Hue Bridge.

Step 3:  From this point on, we're only going to deal with liberating Hue bulbs that were purchased in a Starter Kit that included a Hue Bridge.   If you've already set up your Hue lights and Bridge, these instructions will effectively undo that setup.
The first thing that we need to do is unlock the bulbs.  For that, we're going to use the Hue Dimmer Switch.  You'll notice that the instructions for the dimmer switch want you to pair it with the bridge.  Since our goal is to not use the bridge, we're going to ignore the instructions.  Pull the battery tab to get power to the dimmer.  There will be a tiny light in one corner of the ON switch that blinks orange.  This means that it is looking to pair with a Hue Bridge.  You may ignore the blinking light.

We're going to apply power to one of our Hue bulbs.  It should light up warm white in color.  Hold the Hue Dimmer in both hands with thumbs set to press both the ON and OFF buttons at the same time.  Move the dimmer to within four inches of the bulb to be unlocked.  Press and hold the ON and OFF buttons.  After about ten seconds the bulb will blink a harsh bluish white light several times and then re-illuminate to a warm white; keep holding through the blinking, then release the buttons.
Repeat this step for each bulb that you want to unlock.  In my case, since my bulbs are so close to each other, I had to ensure that only one bulb was powered at a time.

The Hue Dimmer can factory reset other Zigbee compatible bulbs, like the CREE bulbs and Ikea TRÅDFRI bulbs.  I'll have more on that in a future Ikea focused installment.

Step 4:  To add your newly unlocked Hue bulbs to the Things Gateway, repeat Step 2 for each bulb.  When completed, I had five Hue bulbs ready to color at will.  Now will somebody remind me why I would want lights of all these different colors?
In the next installment, I'm going to integrate TP-Link devices into this circus.

Mike Conley: Things I’ve Learned This Week (May 25 – May 29, 2015)

Thunderbird - Mon, 01/06/2015 - 07:49
MozReview will now create individual attachments for child commits

Up until recently, anytime you pushed a patch series to MozReview, a single attachment would be created on the bug associated with the push.

That single attachment would link to the “parent” or “root” review request, which contains the folded diff of all commits.

We noticed a lot of MozReview users were (rightfully) confused about this mapping from Bugzilla to MozReview. It was not at all obvious that Ship It on the parent review request would cause the attachment on Bugzilla to be r+’d. Consequently, reviewers used a number of workarounds, including, but not limited to:

  1. Manually setting the r+ or r- flags in Bugzilla for the MozReview attachments
  2. Marking Ship It on the child review requests, and letting the reviewee take care of setting the reviewer flags in the commit message
  3. Just writing “r+” in a MozReview comment

Anyhow, this model wasn’t great, and caused a lot of confusion.

So it’s changed! Now, when you push to MozReview, there’s one attachment created for every commit in the push. That means that when different reviewers are set for different commits, that’s reflected in the Bugzilla attachments, and when those reviewers mark “Ship It” on a child commit, that’s also reflected in an r+ on the associated Bugzilla attachment!

I think this makes quite a bit more sense. Hopefully you do too!

See gps’s blog post for the nitty gritty details, and some other cool MozReview announcements!


Rumbling Edge - Thunderbird: 2015-05-26 Calendar builds

Thunderbird - Wed, 27/05/2015 - 10:26

Common (excluding Website bugs)-specific: (23)

  • Fixed: 735253 – JavaScript Error: “TypeError: calendar is null” {file: “chrome://calendar/content/calendar-task-editing.js” line: 102}
  • Fixed: 768207 – Make the cache checkbox default-on in the new calendar dialog
  • Fixed: 1049591 – Fix lots of strict warnings
  • Fixed: 1086573 – Lightning and Thunderbird disagree about timezone support in ics files
  • Fixed: 1099592 – Make JS callers of ios.newChannel call ios.newChannel2 in calendar/
  • Fixed: 1149423 – Add Windows timezone names to list of aliases
  • Fixed: 1151011 – Calendar events show up on wrong day when printing
  • Fixed: 1151440 – Choose a color not responsive when creating a New calendar in Lightning 4.0b1
  • Fixed: 1153327 – Run compare-locales with merging for Lightning
  • Fixed: 1156015 – Email scheduling fails for recipients with URN id
  • Fixed: 1158036 – Support sendMailTo for URN type attendees
  • Fixed: 1159447 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_extract.js
  • Fixed: 1159638 – Getter fails in calender-migration-dialog on first run after installation
  • Fixed: 1159682 – Provide a more appropriate “learn more” page on integrated Lightning firstrun
  • Fixed: 1159698 – Opt-out dialog has a button for “disable”, but actually the addon is removed
  • Fixed: 1160728 – Unbreak Lightning 4.0b4 beta builds
  • Fixed: 1162300 – TEST-UNEXPECTED-FAIL | xpcshell-libical.ini:calendar/test/unit/test_alarm.js | xpcshell return code: 0
  • Fixed: 1163306 – Re-enable libical tests and disable ical.js in nightly builds when binary compatibility is back
  • Fixed: 1165002 – Lightning broken, tries to load libical backend although “calendar.icaljs” defaults to “true”
  • Fixed: 1165315 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug759324.js | xpcshell return code: 1 | ###!!! ASSERTION: Deprecated, use NewChannelFromURI2 providing loadInfo arguments!
  • Fixed: 1165497 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_alarmservice.js | xpcshell return code: -11
  • Fixed: 1165726 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/testBasicFunctionality.js | testBasicFunctionality.js::testSmokeTest
  • Fixed: 1165728 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug494140.js | xpcshell return code: -11

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac


Rumbling Edge - Thunderbird: 2015-05-26 Thunderbird comm-central builds

Thunderbird - Wed, 27/05/2015 - 10:25

Thunderbird-specific: (54)

  • Fixed: 401779 – Integrate Lightning Into Thunderbird by Default and Ship Thunderbird with Lightning Enabled
  • Fixed: 717292 – Spell check language setting for subject and body not synchronized, but temporarily appears so when changing language and depending on focus (confusing ux)
  • Fixed: 914225 – Support hotfix add-on in Thunderbird
  • Fixed: 1025547 – newmailaccount/jquery.tmpl.js, line 123: reference to undefined property def[1]
  • Fixed: 1088975 – Answering mail with sendername containing encoded special chars and comma creates two “To”-entries
  • Fixed: 1101237 – Remove distribution directory during install
  • Fixed: 1109178 – Thunderbird OAuth implementation does not work with Evernote
  • Fixed: 1110166 – Port |Bug 1102219 – Rename String.prototype.contains to String.prototype.includes| to comm-central
  • Fixed: 1113097 – Fix misuse of fixIterator
  • Fixed: 1130854 – Package Lightning with Thunderbird
  • Fixed: 1131997 – Adapt for Debugger Server code for changes in bug 1059308
  • Fixed: 1135291 – Update chat log entries added to Gloda since bug 955292 to use relative paths
  • Fixed: 1135588 – New conversations get indexed twice by gloda, leading to duplicate search results
  • Fixed: 1138154 – Plugins default to “always activate” in Thunderbird
  • Fixed: 1142879 – [meta] track Mozilla-central (Core) issues that we want to have fixed in TB38
  • Fixed: 1146698 – Chat Messages added to logs just before shutdown may not be indexed by gloda
  • Fixed: 1148330 – Font indicator doesn’t update when cursor is placed in text where core returns sans-serif (Windows). Serif and monospace don’t work (Linux).
  • Fixed: 1148512 – TEST-UNEXPECTED-FAIL | mailnews/imap/test/unit/test_dod.js | xpcshell return code: 0||1 | streamMessages – [streamMessages : 94] false == true | application crashed [@ mozalloc_abort(char const * const)]
  • Fixed: 1149059 – splitter in compose window can be resized down to completely obscure composition area
  • Fixed: 1151206 – Using a theme hides minimize, maximize and close button in composer window [Mac]
  • Fixed: 1151475 – Remove use of expression closures in mail/
  • Fixed: 1152299 – [autoconfig] Cosmetic changes for WEB.DE config
  • Fixed: 1152706 – Upgrade to Correspondents column (combined To/From column) too agressive
  • Fixed: 1152796 – chrome://messenger/content/folderDisplay.js, line 697: TypeError: this._savedColumnStates.correspondentCol is undefined
  • Fixed: 1152926 – New mail sound preview doesn’t work for default system sound on Mac OS X
  • Fixed: 1154737 – Permafail: TEST-UNEXPECTED-FAIL | toolkit/components/telemetry/tests/unit/test_TelemetryPing.js | xpcshell return code: 0
  • Fixed: 1154747 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/session-store/test-session-store.js | test-session-store.js::test_message_pane_height_persistence
  • Fixed: 1156669 – Trash folder duplication while using IMAP with localized TB
  • Fixed: 1157236 – In-content dialogs: Port bug 1043612, bug 1148923 and bug 1141031 to TB
  • Fixed: 1157649 – TEST-UNEXPECTED-FAIL | dom/push/test/xpcshell/test_clearAll_successful.js (and most other push tests)
  • Fixed: 1158824 – Port bug 138009 to fix packaging errors | Missing file(s): bin/defaults/autoconfig/platform.js
  • Fixed: 1159448 – Thunderbird ignores proxy settings on POP3S protocol
  • Fixed: 1159627 – resource:///modules/dbViewWrapper.js, line 560: SyntaxError: unreachable code after return statement
  • Fixed: 1159630 – components/glautocomp.js, line 155: SyntaxError: unreachable code after return statement
  • Fixed: 1159676 – mailnews/mime/jsmime/test/test_custom_headers.js | run_next_test 0 – TypeError: _gRunningTest is undefined at /builds/slave/test/build/tests/xpcshell/head.js:1435 (and other jsmime tests)
  • Fixed: 1159688 – After switching/changing the window layout, dragging the splitter between threadpane and messagepane can create gray/grey area/space (misplaced notificationbox)
  • Fixed: 1159815 – Take bug 1154791 “Inline spell checker loses red underlines after a backspace is used – take two” in Thunderbird 38
  • Fixed: 1159817 – Take “Bug 1100966 – Inline spell checker loses red underlines after a backspace is used” in Thunderbird 38
  • Fixed: 1159834 – Consider taking “Bug 756984 – Changing location in editor doesn’t preserve the font when returning to end of text/line” in Thunderbird 38
  • Fixed: 1159923 – Take bug 1140105 “Can’t query for a specific font face when the selection is collapsed” in TB 38
  • Fixed: 1160105 – Fix strict mode warnings in protovis-r2.6-modded.js
  • Fixed: 1160106 – “Searching…” spinner at the bottom of gloda search results never goes away
  • Fixed: 1160114 – Strict mode warnings on faceted search
  • Fixed: 1160805 – Missing Windows and Linux nightly builds, build step set props: previous_buildid fails
  • Fixed: 1161162 – “Join Chat” doesn’t focus the newly joined MUC
  • Fixed: 1162396 – Take bug 1140617 “Pasting an image loses the composition style” in TB38
  • Fixed: 1163086 – Take bug 967494 “changing spellcheck language in one composition window affects all open and new compositions” in TB38
  • Fixed: 1163299 – “TypeError: getBrowser(…) is null” in contentAreaClick with Lightning installed and started in calendar view
  • Fixed: 1163343 – Incorrectly formatted error message “sending failed”
  • Fixed: 1164415 – Error in comment for imapEnterServerPasswordPrompt
  • Fixed: 1164658 – TypeError: Cc[‘;1’] is undefined at resource://gre/modules/FxAccountsWebChannel.jsm:227
  • Fixed: 1164707 – missing toolkit_perfmonitoring.xpt in aurora builds
  • Fixed: 1165152 – Take bug 1154894 in TB 38 branch: Disable test_plugin_default_state.js so Thunderbird can ship with plugins disabled by default
  • Fixed: 1165320 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/notification/test-notification.js

MailNews Core-specific: (30)

  • Fixed: 610533 – crash [@ nsMsgDatabase::GetSearchResultsTable(char const*, int, nsIMdbTable**)] with virtual folder
  • Fixed: 745664 – Rename Address book aaa to aaa_test, delete another address book bbb, and renamed address book aaa_test will lose its name and appear deleted after restart (dataloss! involving localized names)
  • Fixed: 777770 – get rid of nsVoidArray from /mailnews
  • Fixed: 786141 – Use nsIFile.exists() instead of stat to check the existence of the file
  • Fixed: 1069790 – Email addresses with parenthesis are not pretty-printed anymore
  • Fixed: 1072611 – Ctrl+P not working from Composition’s Print Preview window
  • Fixed: 1099587 – Make JS callers of ios.newChannel call ios.newChannel2 in mail/ and mailnews/
  • Fixed: 1130248 – |To: “” <>| becomes |”foo@example.comfoo”| when I compose mail to it
  • Fixed: 1138220 – some headers are not not properly capitalized
  • Fixed: 1141446 – Behaviour of malformed rfc2047 encoded From message header inconsistent
  • Fixed: 1143569 – User-agent error when posting to NNTP due to RFC5536 violation of Tb (user-agent header is folded just after user-agent:, “user-agent:[CRLF][SP]Mozilla…”)
  • Fixed: 1144693 – Disable libnotify usage on Linux by default for new-mail notifications (doesn’t always work after bug 858919)
  • Fixed: 1149320 – fix compile warnings in mailnews/extensions/
  • Fixed: 1150891 – Port changes from Bug 1115495 – Part 2: PAC generator for browsing and system wide proxy
  • Fixed: 1151782 – Inputting 29th Feb as a birthday in the addressbook contact replaces it with 1st Mar.
  • Fixed: 1152364 – crash in Address Book via nsAbBSDirectory::GetChildNodes nsCOMArrayEnumerator::operator new(unsigned int, nsCOMArray_base const&)
  • Fixed: 1152989 – Account Manager Extensions broken in Thunderbird 37/38
  • Fixed: 1154521 – jsmime fails on long references header and e-mail gets sent and stored in Sent without headers
  • Fixed: 1155491 – Support autoconfig and manual config of gmail IMAP OAuth2 authentication
  • Fixed: 1155952 – Nesting level does not match indentation
  • Fixed: 1156691 – GUI “Edit filters”: Conditions/actions (for specfic accounts) not visible
  • Fixed: 1156777 – nsParseMailbox.cpp:505:55: error: ‘do_QueryObject’ was not declared in this scope
  • Fixed: 1158501 – Port bug 1039866 (metro code removal) and bug 1085557 (addition of socorro symbol upload API)
  • Fixed: 1158751 – Port NO_JS_MANIFEST changes | mozbuild.frontend.reader.SandboxValidationError: calendar/base/backend/icaljs/
  • Fixed: 1159255 – Build error: MSVC_ENABLE_PGO = True is not permitted to be used in mailnews/intl/
  • Fixed: 1159626 – chrome://messenger/content/accountUtils.js, line 455: SyntaxError: unreachable code after return statement
  • Fixed: 1160647 – Port |Bug 1159972 – Remove the fallible version of PL_DHashTableInit()| to comm-central
  • Fixed: 1163347 – Don’t require scope in ispdb config for OAuth2
  • Fixed: 1165737 – Fix usage of NS_LITERAL_CSTRING in mailnews, port Bug 1155963 to comm-central
  • Fixed: 1166842 – Re-enable binary extensions for comm-central

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac


Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - Thu, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would just be reading aloud while the audience read them silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox Gecko engine on an Android Linux kernel where all the apps, including the system UI, are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

[Screenshots: the Firefox OS home screen, the clock app, and the email app]

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
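
To make that concrete, here is a minimal sketch of the difference, assuming localForage is loaded as the global localforage and that 'html_cache' is a hypothetical key; it is not code from the email app itself.

    // Raw localStorage: synchronous, but the browser keeps the whole
    // origin's store resident in memory for the life of the page.
    var cached = localStorage.getItem('html_cache');

    // localForage: same getItem/setItem shape, but asynchronous and
    // backed by IndexedDB (falling back to WebSQL, then localStorage),
    // so big values don't get preloaded into scarce phone memory.
    localforage.getItem('html_cache').then(function (value) {
      // value is null if nothing was ever stored under that key
      console.log('cache holds', value ? value.length : 0, 'characters');
    });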

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.
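
Pulling those pieces together, here is a hypothetical sketch of what that synchronous startup decision can look like; the cache key, element id, and entry-point check are made-up names for illustration, not the email app’s actual code.

    // Runs from the single script referenced by index.html, before
    // DOMContentLoaded has fired.
    (function maybeUseHtmlCache() {
      // Launched to compose a message or show a specific notification?
      // Then the cached inbox HTML is the wrong thing to show.
      var specialEntryPoint = /#(compose|message)/.test(location.hash);
      var cachedHtml = localStorage.getItem('html_cache');  // synchronous read

      if (!specialEntryPoint && cachedHtml) {
        // Inject the last-known inbox markup immediately; the real
        // back-end refreshes it once it finishes loading.
        document.getElementById('cards').innerHTML = cachedHtml;
      } else {
        // Fall back to a normal startup with an explicit loading UI.
        document.getElementById('cards').classList.add('loading');
      }
    })();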

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard CommonJS loader semantics used by node.js and io.js are the first line you see here.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.
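
As a rough illustration, an r.js build profile for one of those cards might look something like this; the paths and module names are invented for the example and are not the app’s real build configuration.

    // build-message-list.js, run as: node r.js -o build-message-list.js
    ({
      baseUrl: 'js',
      mainConfigFile: 'js/config.js',   // shared paths/shim/plugin config
      name: 'cards/message_list',       // entry module for this card
      out: 'js-built/cards/message_list.js',
      // text! and tmpl! resources pulled in by the card get inlined
      // into the single output file by the loader plugins' build logic.
      optimize: 'uglify'
    })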

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
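
In AMD terms the preload can be as simple as an empty require call; this is just a sketch with made-up card module names, not the app’s actual preloader.

    // Called after the message list card has been shown and is idle.
    function preloadLikelyCards() {
      // require() with no real work in the callback simply fetches the
      // scripts (and their text!/tmpl! resources) and caches the modules.
      require(['cards/compose', 'cards/message_reader'], function () {
        // Nothing to do here; the cards are now warm if the user taps.
      });
    }

    // Defer a little so the preload doesn't compete with rendering.
    setTimeout(preloadLikelyCards, 1000);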

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
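
The mechanics are plain postMessage traffic; here is a minimal sketch of the split, with made-up file, message, and function names (renderMessages stands in for whatever updates the UI).

    // --- main thread (the page) ---
    var backend = new Worker('js/mail_backend.js');
    backend.postMessage({ type: 'loadFolder', folder: 'INBOX' });
    backend.onmessage = function (event) {
      if (event.data.type === 'folderLoaded') {
        renderMessages(event.data.messages);   // update the DOM here
      }
    };

    // --- js/mail_backend.js (the worker; no DOM access) ---
    onmessage = function (event) {
      if (event.data.type === 'loadFolder') {
        // ...slow IndexedDB / protocol work happens off the main thread...
        postMessage({ type: 'folderLoaded', messages: [] });
      }
    };
    // A SharedWorker looks much the same, except the page talks through
    // new SharedWorker('...').port and the worker listens in onconnect.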

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph: an extremely capable scene graph that can handle huge documents, but one with footguns, so it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.
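Roughly, and with made-up names and measurements, the windowing logic looks like this; bindMessageRange is a hypothetical helper standing in for the real binding code:

    // Windowed rendering sketch using the constants described above:
    // 3 screens ahead, 3 behind, with the sizes measured experimentally.
    var MSG_HEIGHT = 60,        // illustrative pixel height of one summary
        MSGS_PER_SCREEN = 10,   // illustrative; derived from screen height
        SCREENS_AHEAD = 3,
        SCREENS_BEHIND = 3;

    function onScroll(scrollContainer, folder) {
      var firstVisible = Math.floor(scrollContainer.scrollTop / MSG_HEIGHT),
          start = Math.max(0,
                           firstVisible - SCREENS_BEHIND * MSGS_PER_SCREEN),
          end = Math.min(folder.messageCount,
                         firstVisible +
                           (1 + SCREENS_AHEAD) * MSGS_PER_SCREEN);
      // Bind existing summary nodes to messages [start, end); anything
      // outside that window becomes available for recycling.
      bindMessageRange(start, end);
    }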

Nodes are absolutely positioned within the scroll area using their ‘top’ style, though translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser’s APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible into graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
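The remove-then-reposition-then-re-append dance reduces to something like this; renderMessageSummary is a hypothetical helper, not the app’s actual function:

    // Detach the node before mutating it so the APZ layerization logic keeps
    // treating all message items as one static layer rather than giving each
    // moving item its own layer.
    function recycleMessageNode(node, message, topPx, container) {
      container.removeChild(node);          // detach first
      node.style.top = topPx + 'px';        // reposition while out of the DOM
      renderMessageSummary(node, message);  // update subject/snippet/etc.
      container.appendChild(node);          // re-appended nodes look "new"
    }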

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1. Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2. Use developer tools.  APZ is tricky to reason about, and even the developers who write the async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there are developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on, or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome that let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.


The source code to Gaia can be found at

The email app in particular can be found at

(I also asked for questions here.)

Categorieën: Mozilla-nl planet

Thunderbird Blog: Thunderbird 38 goes to beta!

Thunderbird - vr, 03/04/2015 - 11:13

The next major release of Thunderbird, version 38, is now in beta and available for testing. You may download Thunderbird 38.0b1 here.

This version of Thunderbird is the first that is mostly managed by volunteer community members rather than by Mozilla staff. We have many new features, including:

  • Message filtering when a message is sent or archived
  • File-per-message local storage available for new accounts (maildir)
  • Contact search over multiple address books
  • Internationalized domain names for RSS feeds
  • Optional expanded columns in the folder pane showing folder size and message counts

Release notes are available here.

There are still a couple of features missing from this beta that we hope to ship in the final version of Thunderbird 38. Those are:

  • Ship the Lightning calendar add-on with Thunderbird, with an opt-out dialog
  • Use OAuth authentication with Gmail IMAP accounts


Categorieën: Mozilla-nl planet

Joshua Cranmer: Breaking news

Thunderbird - wo, 01/04/2015 - 09:00
It was recently brought to my attention by reputable sources that the announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

Categorieën: Mozilla-nl planet