Planet Mozilla - http://planet.mozilla.org/

Ben Kero: Working around flaky internet connections

Wed, 28/01/2015 - 23:36

In many parts of the world a reliable internet connection is hard to come by. Ethernet is essentially non-existent and WiFi is king. As any digital nomad can testify, this is ‘good enough’ for doing productive work.

Unfortunately not all WiFi connections work perfectly all the time. They’re fraught with unexpected problems including dropping out entirely, abruptly killing connections, and running into connection limits.

Thankfully with a little knowledge it is possible to regain productivity that would otherwise be lost to a flaky internet connection. These techniques are applicable to coffee shops, hotels, and other places with semi-public WiFi.

Always have a backup connection

Depending on a WiFi connection as your sole source of connectivity is a losing proposition. If all you’re doing is optional tasks it can work, but critical tasks demand a backup source should the primary fail.

This usually takes the shape of a cellular data connection: I USB- or WiFi-tether my laptop to my cell phone. This is straightforward in your home country, where you already have a reliable data connection. If you are working from another country, it is advisable to get a local prepaid SIM card with a data plan. These are usually inexpensive and never require a contract. Almost all Android devices already support tethering.

If you’re too lazy to get a local SIM card, or are not in a country long enough to benefit from one (I usually use 1 full week as the cutoff), T-Mobile US’s post-paid plans offer roaming data in other countries. This is only EDGE (2.5G) connectivity, but is still entirely usable if you’re careful and patient with the connection.

Reducing background data

Some of the major applications that you’re using, including Firefox and Chrome, do updates in the background. They can detect that your computer is connected to the internet and will attempt to download updates at any time. Obviously if you’re using a connection with limited bandwidth, this can ruin the experience for everybody (including yourself).

You can disable this feature in Firefox by navigating to Edit -> Preferences -> Advanced -> Update, and switching Firefox Updates to Never check for updates.

Your operating system might do this as well, so it is worth investigating so you can disable it.

Mosh: The Mobile Shell

If you’re a command-line junkie or a keyboard cowboy, you’ll usually spend a lot of time SSHing into other servers. Mosh is an application like SSH that is specifically designed for unreliable connections. It offers conveniences like resume-after-sleep even if your connection changes, and local echo so that you can see and revise your input even if the other side is unresponsive. There are some known security concerns with using Mosh, so I’ll leave it to the reader to decide whether they feel safe using it.

It should be noted that with proper configuration, OpenSSH can also gain some of this resiliency.
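
“Proper configuration” here mostly means keepalives. As an illustrative (not canonical) ~/.ssh/config fragment, with values chosen purely as an example:

# ~/.ssh/config -- example keepalive settings
Host *
    ServerAliveInterval 15    # probe the server every 15 seconds
    ServerAliveCountMax 4     # give up after 4 missed probes (~1 minute of silence)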

Tunneling

Often the small wireless routers you’re connecting to are not configured to handle the load of several people. One symptom of this is the TCP connection limit. The result of the router hitting this limit is that you will no longer be able to establish new TCP connections until one is closed. The way around this is to use a tunnel.

The simplest method to do this is a SOCKS proxy. A SOCKS proxy is a small piece of software that runs on your laptop. Its purpose is to tunnel new connections through an existing connection. The way I use it is by establishing a connection to my colocated server in Portland, OR, then tunneling all my connections through that. The server is easy to set up.

The simplest way to do this is with SSH. To use it, simply open up a terminal and type the following command (replacing my host name with your own)

$ ssh -v -N -D1080 bke.ro

This will open a tunnel between your laptop and the remote host. You’re not done yet though. The next part is telling your software to use the tunnel. In Firefox this can be done in Edit -> Preferences -> Advanced -> Network -> Connection Settings -> Manual Proxy Configuration -> SOCKS Host (with the command above, the SOCKS host is 127.0.0.1 and the port is 1080). You’ll also want to check “Remote DNS” below. You can test that this is working by visiting a web site such as whatismyip.com.

Command-line applications can use a SOCKS proxy by using the program called tsocks. Tsocks will transparently tunnel the connections of your command-line applications through your proxy. It is invoked like this:

$ tsocks curl http://bke.ro/
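
tsocks reads its settings from /etc/tsocks.conf. A minimal example pointing at the SSH tunnel opened above (the local network line is just a placeholder for your own LAN):

# /etc/tsocks.conf -- example configuration for the -D1080 tunnel above
local = 192.168.0.0/255.255.255.0   # traffic to the local network bypasses the proxy
server = 127.0.0.1
server_port = 1080
server_type = 5                     # SOCKS version 5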

Some other methods of tunneling that have been used successfully include real VPN software such as OpenVPN. There is an entire market of OpenVPN providers available that will give you access to endpoints in many countries. You can also just run this yourself.

An alternative to that is sshuttle. This uses iptables on Linux (and the built-in firewall on OS X) to transparently tunnel connections over an SSH session. All system connections will be routed through it. One cool feature of this approach is that no special software needs to be installed on the remote side. This means that it’s easy to use with several different hosts.
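
A typical invocation (substituting your own server for mine) that tunnels all TCP traffic plus DNS looks roughly like this:

$ sshuttle --dns -r user@bke.ro 0/0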

Local caching

Some content can be cached and reused without having to hit the Internet. This isn’t perfect, but reducing the amount of network traffic should result in less burden on the network and faster page-load times. There are a couple pieces of software that can help achieve this.

Unbound is a local DNS caching daemon. It runs on your computer and listens for applications to make DNS requests. It then asks the internet for the answer and caches it. This results in fewer DNS queries hitting the internet, which reduces network burden and theoretically loads pages faster. I’ve been subjecting Unbound to constant daily use for 6 months, and have not attributed a single problem to it. Available in a distro near you.
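
As a rough illustration (these values are my own example, not a recommendation), a minimal unbound.conf that only answers local queries and holds on to answers a little longer might look like this; point /etc/resolv.conf at nameserver 127.0.0.1 afterwards:

# /etc/unbound/unbound.conf -- minimal local caching resolver (example values)
server:
    interface: 127.0.0.1
    access-control: 127.0.0.1/32 allow
    prefetch: yes        # refresh popular records before they expire
    cache-min-ttl: 300   # keep answers for at least five minutes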

Polipo is a local caching web proxy. This is a small daemon that runs on your computer and transparently caches web content. This can speed up page load times and reduce the amount of network traffic. It has a knob to tune the cache size, and you can empty the cache whenever you want. Again, this should be available in any major package manager.
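
A sketch of a polipo configuration file (example values only; the browser then uses 127.0.0.1:8123 as its HTTP proxy):

# ~/.polipo or /etc/polipo/config -- example values
proxyAddress = "127.0.0.1"
proxyPort = 8123
diskCacheRoot = "~/.polipo-cache/"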

Ad blocking software

Privoxy is a web proxy that filters out unwanted content, such as tracking cookies, advertisements, social-media iframes, and other “obnoxious internet junk”. It can be used in conjunction with polipo, and there is even a mention in the docs about how to layer them.

SomeoneWhoCares Hosts File is an alternative /etc/hosts file that promises “to make the internet not suck (as much)”. This replaces your /etc/hosts file, which is used before DNS queries are made. This particular /etc/hosts file simply resolves many bad domains to ‘127.0.0.1’ instead of their real address. This blocks many joke sites (goatse, etc) as well as ad servers. I’ve used this for a long time and have never encountered a problem associated with it.
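
The mechanism is plain name resolution; the file consists of thousands of lines of the following shape (the domains here are made up):

# /etc/hosts -- unwanted hosts pinned to localhost
127.0.0.1  ads.example.com
127.0.0.1  tracker.example.net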

AdBlock Plus is a Firefox extension you might already be familiar with. It is a popular extension that removes ads from web pages, which should save bandwidth, speed up page loads, and extend battery life. AdBlock Plus is a heavy memory user, so if you’re on a device with limited memory (< 4GB) it might be worth considering an alternate ad-blocking extension.

Second browser instance (that doesn’t use any of the aforementioned)

As great as these pieces are, sometimes you’ll encounter a problem. At that point it could be advantageous to have a separate browser instance that accesses the internet “unadulterated”. This will let you know whether the problem is on your side, at the remote host, or with your connection.
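
With Firefox, one way to do this (the profile name is my own choice, not a convention) is to keep a second, untouched profile around and launch it alongside your main one:

$ firefox -P                    # create a separate "clean" profile once, via the profile manager
$ firefox -no-remote -P clean   # launch it side by side with your normal instance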

I hope that using these techniques will help you have a better experience while using questionable connections. It’s an ongoing struggle, but the state of connectivity is getting better. Hopefully one day these measures will be unnecessary.

Please leave a comment if you learned something from reading this, or notice anything I missed.


Benoit Girard: Testing a JS WebApp

Wed, 28/01/2015 - 22:13
Test Requirements

I’ve been putting off testing my cleopatra project for a while now https://github.com/bgirard/cleopatra because I wanted to take the time to find a solution that would satisfy the following:

  1. The tests can be executed by running a particular URL.
  2. The tests can be executed headless using a script.
  3. No server side component or proxy is required.
  4. Stretch goal: Continuous integration tests.

After a bit of research I came up with a solution that addressed my requirements. I’m sharing here in case this helps others.

First I found that the easiest way to achieve this is to find a Test Framework to cover 1) and a solution for running a headless browser to cover 2) and 3).

Picking a test framework

For the Test Framework I picked QUnit. I didn’t have any strong requirements there so you may want to review your options if you do. With QUnit I load my page in an iframe and inspect the resulting document after performing operations. Here’s an example:

QUnit.test("Select Filter", function(assert) {
  loadCleopatra({
    query: "?report=4c013822c9b91ffdebfbe6b9ef300adec6d5a99f&select=200,400",
    assert: assert,
    testFunc: function(cleopatraObj) {
    },
    profileLoadFunc: function(cleopatraObj) {
    },
    updatedFiltersFunc: function(cleopatraObj) {
      var samples = shownSamples(cleopatraObj);
      // Sample count for one of the two threads in the profile are both 150
      assert.ok(samples === 150, "Loaded profile");
    }
  });
});

Here I just load a profile, and once the document fires an updateFilters event I check that the right number of samples are selected.

You can run the latest cleopatra test here: http://people.mozilla.org/~bgirard/cleopatra/test.html

Picking a browser (test) driver

Now that we have a page that can run our test suite we just need a way to automate the execution. It turns out that PhantomJS, for WebKit, and SlimerJS, for Gecko, provide exactly this. With a small driver script we can load our test.html page and set the process return code based on the result of our test framework, QUnit in this case.

Stretch goal: Continuous integration

If you hook up the browser driver to run via a simple test.sh script, adding continuous integration should be simple. Thanks to Travis-CI and GitHub it’s easy to set up your test script to run per check-in and configure notifications.
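
Such a test.sh wrapper is hypothetical here, but it only needs to mirror the script line from the Travis configuration below:

#!/bin/sh
# test.sh -- run the QUnit suite under both headless browsers; exit non-zero on any failure
set -e
phantomjs js/tests/run_qunit.js test.html
./slimerjs/slimerjs js/tests/run_qunit.js "$PWD/test.html"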

All you need is to configure Travis-CI to look at your repo and to check in an appropriate .travis.yml config file. Your .travis.yml should configure the environment. PhantomJS is pre-installed and should just work. SlimerJS requires a Firefox binary and a virtual display, so it just needs a few more configuration lines. Here’s the final configuration:

env:
  - SLIMERJSLAUNCHER=$(which firefox) DISPLAY=:99.0 PATH=$TRAVIS_BUILD_DIR/slimerjs:$PATH
addons:
  firefox: "33.1"
before_script:
  - "sh -e /etc/init.d/xvfb start"
  - "echo 'Installing Slimer'"
  - "wget http://download.slimerjs.org/releases/0.9.4/slimerjs-0.9.4.zip"
  - "unzip slimerjs-0.9.4.zip"
  - "mv slimerjs-0.9.4 ./slimerjs"
notifications:
  irc:
    channels:
      - "irc.mozilla.org#perf"
    template:
      - "BenWa: %{repository} (%{commit}) : %{message} %{build_url}"
    on_success: change
    on_failure: change
script: phantomjs js/tests/run_qunit.js test.html && ./slimerjs/slimerjs js/tests/run_qunit.js $PWD/test.html

Happy testing!



Air Mozilla: Product Coordination Meeting

Wed, 28/01/2015 - 20:00

Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.


Ben Hearsum: Signing Software at Scale

Wed, 28/01/2015 - 17:45

Mozilla produces a lot of builds. We build Firefox for somewhere between 5 and 10 platforms (depending on how you count). We release Nightly and Aurora every single day, Beta twice a week, and Release and ESR every 6 weeks (at least). Each release contains an en-US build and nearly a hundred localized repacks. In the past the only builds we signed were Betas (which were once a week at the time), Releases, and ESRs. We had a pretty well-established manual process for it, but being manual it was still error prone and impractical to use for Nightly and Aurora. Signing of Nightly and Aurora became an important issue when background updates were implemented, because one of the new security requirements with background updates was signed installers and MARs.

Enter: Signing Server

At this point it was clear that the only practical way to sign all the builds that we need to is to automate it. It sounded crazy to me at first. How can you automate something that depends on secret keys, passphrases, and very unfriendly tools? Well, there are some tricks you need to know, and throughout the development and improvement of our "signing server", we've learned a lot. In this post I'll talk about those tricks and show you how you can use them (or even our entire signing server!) to make your signing process faster and easier.

Credit where credit is due: Chris AtLee wrote the core of the signing server and support for some of the signature types. Over time Erick Dransch, Justin Wood, Dustin Mitchell, and I have made some improvements and added support for additional types of signatures.

Tip #1: Collect passphrases at startup

This should be obvious to most, but it's very important not to store the passphrases to your private keys unencrypted. However, because they're needed to unlock the private keys when doing any signing, the server needs to have access to them somehow. We've dealt with this by asking for them when launching a signing server instance:

$ bin/python tools/release/signing/signing-server.py signing.ini
gpg passphrase:
signcode passphrase:
mar passphrase:

Because instances are started manually by someone in the small set of people with access to passphrases, we're able to ensure that keys are never left unencrypted at rest.
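
As a rough sketch of the pattern (illustrative only, not the actual signing-server code), Python's getpass module lets you collect the passphrases interactively at startup and keep them only in memory:

# sketch: prompt for key passphrases at startup, keeping them only in memory
import getpass

def collect_passphrases(key_names):
    """Prompt for each key's passphrase on the controlling terminal."""
    passphrases = {}
    for name in key_names:
        passphrases[name] = getpass.getpass("%s passphrase: " % name)
    return passphrases

if __name__ == "__main__":
    secrets = collect_passphrases(["gpg", "signcode", "mar"])
    # ... hand `secrets` to the signing backends; nothing is written to disk ...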

Tip #2: Don't let just any machine request signed files

One of the first problems you run into when you have an API for signing files is how to make sure you don't accidentally sign malicious files. We've dealt with this in a few ways:

  • You need a special token in order to request any type of signing. These tokens are time limited and only a small subset of segregated machines may request them (on behalf of the build machines). Since build jobs can only be created if you're able to push to hg.mozilla.org, random people are unable to submit anything for signing.
  • Only our build machines are allowed to make signing requests. Even if you managed to get hold of a valid signing token, you wouldn't be able to do anything with it without also having access to a build machine. This is a layer of security that helps us protect against a situation where an evil doer may gain access to a loaner machine or other less restricted part of our infrastructure.

We have other layers of security built in too (HTTPS, firewalls, access control, etc.), but these are the key ones built into the signing server itself.

Tip #3: Use input redirection and other tricks to work around unfriendly command line tools

One of the trickiest parts about automating signing is getting all the necessary command line tools to accept input that's not coming from a console. Some of them are relatively easy and accept passphrases via stdin:

proc = Popen(command, stdout=stdout, stderr=STDOUT, stdin=PIPE)
proc.stdin.write(passphrase)
proc.stdin.close()

Others, like OpenSSL, are fussier and require the use of pexpect:

proc = pexpect.spawn("openssl", args)
proc.logfile_read = stdout
proc.expect('Enter pass phrase')
proc.sendline(passphrase)

And it's no surprise at all that OS X is the fussiest of them all. In order to sign you have to unlock the keychain by hand, run the signing command, and relock the keychain yourself:

child = pexpect.spawn("security unlock-keychain " + keychain)
child.expect('password to unlock .*')
child.sendline(passphrase)
check_call(sign_command + [f], cwd=dir_, stdout=stdout, stderr=STDOUT)
check_call(["security", "lock-keychain", keychain])

Although the code is simple in the end, a lot of trial, error, and frustration was necessary to arrive at it.

Tip #4: Sign everything you can on Linux (including Windows binaries!)

As fussy as automating tools like openssl can be on Linux, it pales in comparison to trying to automate anything on Windows. In the days before the signing server we had a scripted signing method that ran on Windows. Instead of providing the passphrase directly to the signing tool, it had to be typed into a modal window. It was "automated" with an AutoIt script that typed in the password whenever the window popped up. This was hacky, and sometimes led to issues if someone moved the mouse or pressed a key at the wrong time and changed window focus.

Thankfully there are tools available for Linux that are capable of signing Windows binaries. We started off by using Mono's signcode - a more or less drop-in replacement for Microsoft's:

$ signcode -spc MozAuthenticode.spc -v MozAuthenticode.pvk -t http://timestamp.verisign.com/scripts/timestamp.dll -i http://www.mozilla.com -a sha1 -tr 5 -tw 60 /tmp/test.exe
Mono SignCode - version 2.4.3.1
Sign assemblies and PE files using Authenticode(tm).
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.
Enter password for MozAuthenticode.pvk:
Success

This works great for 32-bit binaries - we've been shipping binaries signed with it for years. For some reason that we haven't figured out though, it doesn't sign 64-bit binaries properly. For those we're using "osslsigncode", which is an OpenSSL based tool to do Authenticode signing:

$ osslsigncode -certs MozAuthenticode.spc -key MozAuthenticode.pvk -i http://www.mozilla.com -h sha1 -in /tmp/test64.exe -out /tmp/test64-signed.exe
Enter PEM pass phrase:
Succeeded

$ osslsigncode verify /tmp/test64-signed.exe
Signature verification: ok
Number of signers: 1
    Signer #0:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1
Number of certificates: 3
    Cert #0:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #1:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #2:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1

In addition to Authenticode signing we also do GPG, APK, and a couple of Mozilla-specific types of signing (MAR, EME Voucher) on Linux. We also sign our Mac builds with the signing server. Unfortunately, the tools needed for that are only available on OS X, so we have to run separate signing servers for these.

Tip #5: Run multiple signing servers

Nobody likes a single point of failure, so we've built support into our signing client to retry against multiple instances. Even if we lose part of our signing server pool, our infrastructure stays up:

$ python signtool.py --cachedir cache -t token -n nonce -c host.cert -H dmgv2:mac-v2-signing1.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing2.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing3.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing4.srv.releng.scl3.mozilla.com:9120 --formats dmgv2 Firefox.app
2015-01-23 06:17:59,112 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing3.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:17:59,118 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: connection error; trying again soon
2015-01-23 06:18:00,119 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:18:00,141 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: uploading for signing
2015-01-23 06:18:10,748 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:11,848 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:40,480 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: OK

Running your own signing server

It's easy! All of the code you need to run your own signing server is in our tools repository. You'll need to set-up a virtualenv and create your own config file, but once you're ready you can attempt to start it with the following command:

python signing-server.py signing.ini

You'll be prompted for the passphrases to your private keys. If there's any problems with your config file or the passphrases the server will fail to start. Once you've got it up and running you can use try signing! get_token.py has an example of how to generate a signing token, and signtool.py will take your unsigned files and give you back signed versions. Happy signing!


Robert Longson: New SVG/CSS Filter support in Firefox

Wed, 28/01/2015 - 16:36

There’s a new specification for filters that replaces the filters module in SVG 1.1. Firefox and Chrome are both implementing new features from this specification.

Firefox 30 was the first version to support feDropShadow. As well as being simpler to write, feDropShadow will be faster than the equivalent individual filters as it skips some unnecessary colour conversions that we’d otherwise perform.

Firefox 35 has support for all CSS Filters, so for simple cases you no longer need any SVG markup to create a filter. We have examples on MDN showing how to use CSS filters.

We’ve also implemented filter chaining; that is, we support multiple filters, specified either via URLs or CSS filters, on a single element.

As with earlier versions of Firefox you can apply SVG and CSS filters to both SVG and HTML elements.

As part of the rewrite to support SVG filters we’ve improved their performance: on Windows we use D2D to render them, taking advantage of any hardware acceleration possibilities on that platform, and on other platforms we use SIMD and SSE2 to accelerate rendering, so you can now use more filters without slowing your site down.



Pete Moore: Weekly review 2015-01-28

Wed, 28/01/2015 - 16:30

Task Cluster Go Client

This week I got the task cluster go client talking to the TaskCluster API service endpoints.

See: https://github.com/petemoore/taskcluster-client-go/blob/master/README.md

I also now have part of the client library auto-generating, e.g. see: https://github.com/petemoore/taskcluster-client-go/blob/master/client/generated-code.go

Shouldn’t be too far from auto-generating the entire client library soon and having it working, tested, documented and published.


Mozilla Privacy Blog: How Mozilla Addresses the Privacy Paradox

Wed, 28/01/2015 - 14:39
Earlier this month, a 20 year old NBA Clippers fan held up a sign in a crowded Washington DC arena with her phone number on it. Seasoned privacy professionals have long lamented the old adage that if you give someone … Continue reading

Ryan Kelly: Are we Python yet?

Wed, 28/01/2015 - 14:01

William Lachance: mozregression updates

Wed, 28/01/2015 - 00:52

Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) in the last few months. Here’s some highlights:

  • Support for win64 nightly and inbound builds (Kapil Singh, Vaibhav Agarwal)
  • Support for using an http cache to reduce time spent downloading builds (Sam Garrett)
  • Way better logging and printing of remaining time to finish bisection (Julien Pagès)
  • Much improved performance when bisecting inbound (Julien)
  • Support for automatic determination on whether a build is good/bad via a custom script (Julien)
  • Tons of bug fixes and other robustness improvements (me, Sam, Julien, others…)

Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!

Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere. :)


Justin Wood: Release Engineering does a lot…

Wed, 28/01/2015 - 00:11

Hey Everyone,

I spent a few minutes a week over the last month or two working on compiling a list of Release Engineering work areas. Included in that list is identifying which repositories we “own” and work in, as well as where these repositories are mirrored. (We have copies in hg.m.o git.m.o and github, some exclusively in their home).

This is all happening while we transition to a more uniform and modern design style and philosophy.

My major takeaway here is we have A LOT of things that we do. (this list is explicitly excluding repositories that are obsolete and unused)

So without further ado, I present our page ReleaseEngineering/Repositories

You’ll notice a few things about this: we have a column for Mirrors and one for RoR (Repository of Record). “Committable Location” was requested by Hal and is explicitly for cases where, while we consider the RoR our important location, it may not necessarily be where we allow commits.

The other interesting thing is we have automatic population of travis and coveralls urls/status icons. This is for free using some magic wiki templates I did.

The other piece of note here is that the table is generated from a list of pages using “SemanticMediaWiki”, so the links to the repositories can be populated with things like “where are the docs”, “what applications use this repo”, “who are suitable reviewers”, etc. (all of those are TODO on the releng side so far).

I’m hoping to put together a blog post at some point about how I chose to do much of this with MediaWiki. In the meantime, should any team at Mozilla find this enticing and wish to have one for themselves, much of the work I did here can easily be replicated for your team, even if you don’t need/like the multiple-repo-location magic of our table. I can help get you set up to add your own repos to the mix.

Remember, the only fields that are necessary are a repo name, the repo location, and owner(s). The last field can even be automatically filled in by a form on your page (see the end of Release Engineering’s page for an example of that form).

Reach out to me on IRC or E-mail (information is on my mozillians profile) if you desire this for your team and we can talk. If you don’t have a need for your team, you can stare at all the stuff Releng is doing and remember to thank one of us next time you see us. (or inquire about what we do, point contributors our way, we’re a friendly group, I promise.)


Hannah Kane: A new online home for those who #teachtheweb

Tue, 27/01/2015 - 23:22

We’ve recently begun work on a new website that will serve the mentors in our Webmaker community—a gathering place for anyone who is teaching the Web. They’ll find activity kits, trainings, badges, the Web Literacy Map, and more. It will also be an online clubhouse for Webmaker Clubs, and will showcase the work of Hives to the broader network.

Our vision for the site is that it will provide pathways for sustained involvement in teaching the Web. Imagine a scenario where, after hosting a Maker Party, a college student in Pune wants to build on the momentum, but doesn’t know how. Or imagine a librarian in Seattle who is looking for activities for her weekly teen drop-in hours. Or a teacher in Buenos Aires who is looking to level up his own digital literacy skills. In each of these scenarios, we hope the person will look to this new site to find what they need.

We’re in the very early stages of building out the site. One of our first challenges is to figure out the best way to organize all of the content.

Fortunately, we were able to find 14 members of the community who were willing to participate in a “virtual card-sorting” activity. We gave each of the volunteers a list of 22 content areas (e.g. “Find a Teaching Kit,” “Join a Webmaker Club,” “Participate in a community discussion”), and asked them to organize the items into groups that made sense to them.

The results were fascinating. Some grouped the content by specific programs, concepts, or offerings. Others grouped by function (e.g “Participate,” “Learn,” “Lead”). Others organized by identity (e.g. “Learner” or “Mentor”). Still others grouped by level of expertise needed.

We owe a debt of gratitude to those who participated in the research. We were able to better understand the variety of mental models, and we’re currently using those insights to build out some wireframes to test in the next heartbeat.

Once we firm up the information architecture, we’ll build and launch v1 of the site (our goal is to launch it by the end of Q1). From there, we’ll continue to iterate, adding more functionality and resources to meet the needs of our mentor community.

Future iterations will likely include:

  • Improving the way we share and discover curriculum modules
  • Enhancing our online training platform
  • Providing tools for groups to self-organize
  • Making improvements to our badging platform
  • Incorporating the next version of the Web Literacy Map

Stay tuned for more updates and opportunities to provide feedback throughout the process. We’ve also started a Discourse thread for continuing discussion of the platform.



Christian Heilmann: Where would people like to see me – some interesting answers

Tue, 27/01/2015 - 22:50

Will code for Bananas

For pure Shits and Giggles™ I put up a form yesterday asking people where I should try to work now that I’ve left Mozilla. By no means have I approached all the companies I listed (hence an “other” option). I just wanted to see what people see me as and where I could do some good work. Of course, some of the answers disagreed and made a lot of assumptions:

Your ego knows no bounds. Putting companies that have already turned you down is very special behavior.

This is utterly true. I applied at Yahoo in 1997 and didn’t get the job. I then worked at Yahoo for almost five years a few years later. I should not have done that. Companies don’t change and once you have a certain skillset there is no way you could ever learn something different that might make yourself appealing to others. Know your place, and all that.

Sarcasm aside, I am always amazed how lucky we are to have choices in our market. There is not a single day I am not both baffled and very, very thankful for being able to do what I like and make a living with it. I feel like a fraud many a time, and I know many other people who seemingly have a “big ego” doing the same. The trick is to not let that stop you but understand that it makes you a better person, colleague and employee. We should strive to get better all the time, and this means reaching beyond what you think you can achieve.

I’m especially flattered that people thought I had already been contacted by all the companies I listed and asked for people to pick for me. I love working in the open, but that’s a bit too open, even for my taste. I am not that lucky – I don’t think anybody is.

The answers were pretty funny, and of course skewed as I gave a few options rather than leaving it completely open. The final “wish of the people list” is:

  • W3C (108 votes)
  • Canonical (39 votes)
  • Microsoft (38 votes)
  • Google (37 votes)
  • Facebook (14 votes)
  • Twitter (9 votes)
  • Mozilla (7 votes)
  • PubNub (26 votes)

PubNub’s entries gained more and more exclamation points the more of them were submitted – I don’t know what happened there.

Other options with multiple votes were Apple, Adobe, CozyCloud, EFF, Futurice, Khan Academy, Opera, Spotify (I know who did that!) and the very charming “Retirement”.

Options labeled “fascinating” were:

  • A Circus
  • Burger King (that might still be a problem as I used to work on McDonalds.co.uk – might be a conflict of interest)
  • BangBros (no idea what that might be – a Nintendo Game?)
  • Catholic Church
  • Derick[SIC] Zoolander’s School for kids who can’t read good
  • Kevin Smith’s Movie
  • “Pizza chef at my local restaurant” (I was Pizza delivery guy for a while, might be possible to level up)
  • Playboy (they did publish Fahrenheit 451, let’s not forget that)
  • Taco Bell (this would have to be remote, or a hell of a commute)
  • The Avengers (I could be persuaded, but it probably will be an advisory role, Tony Stark style)
  • UKIP (erm, no, thanks)
  • Zombocom and
  • Starbucks barista. (this would mean I couldn’t go to Sweden any longer – they really like their coffee and Starbucks is seen as the Antichrist by many)

Some of the answers were giving me super powers I don’t have but show that people would love to have people like me talk to others outside the bubble more:

  • “NASA” (I really, really think I have nothing they need)
  • “A book publisher (they need help to move into the 21st century)”
  • “Data.gov or another country’s open data platform.” (did work with that, might re-visit it)
  • “GCHQ; Be the bridge between our worlds”
  • “Spanish Government – Podemos CTO” (they might realise I am not Spanish)
  • “any bank for a11y online banking :-/”

Some answers showed a need to vent:

  • “Anything but Google or Facebook for God sake!”
  • “OK, and option 2: perhaps Twitter? You might improve their horrible JS code in the website! ;)”

The most confusing answers were “My butthole” which sounds cramped and not a creative working environment and “Who are you?” which begs the answer “Why did you fill this form?”.

Many of the answers showed a lot of trust in me and made me feel all warm and fuzzy and I want to thank whoever gave those:

  • be CTO of awesome startup
  • Enjoy life Chris!
  • Start something of your own. You rock too hard, anyway!
  • you were doing just fine. choose the one where your presence can be heard the loudest. cheers!
  • you’ve left Mozilla for something else, so you are jobless for a week or so! :-)
  • Yourself then hire me and let me tap your Dev knowledge :D

I have a new job. I am starting on Monday and will announce it in probably too much detail here on Thursday. Thanks to everyone who took part in this little exercise. I have an idea of what I need to do in my new job, and the ideas listed here and the results showed me that I am on the right track.


Stormy Peters: Can or Can’t?

Tue, 27/01/2015 - 22:35

Can read or can’t eat books?

What I love about open source is that it’s a “can” world by default. You can do anything you think needs doing and nobody will tell you that you can’t. (They may not take your patch but they won’t tell you that you can’t create it!)

It’s often easier to define things by what they are not or what we can’t do. And the danger of that is you create a culture of “can’t”. Any one who has raised kids or animals knows this. “No, don’t jump.” You can’t jump on people. “No, off the sofa.” You can’t be on the furniture. “No, don’t lick!” You can’t slobber on me. And hopefully when you realize it, you can fix it. “You can have this stuffed animal (instead of my favorite shoe). Good dog!”

Often when we aren’t sure how to do something, we fill the world with can’ts. “I don’t know how we should do this, but I know you can’t do that on a proprietary mailing list.” “I don’t know how I should lose weight, but I know you can’t have dessert.” I don’t know. Can’t. Don’t know. Can’t. Unsure. Can’t.

Watch the world around you. Is your world full of can’ts or full of “can do”s? Can you change it for the better?



Nathan Froyd: examples of poor API design, 1/N – pldhash functions

Tue, 27/01/2015 - 21:39

The other day in the #content IRC channel:

<bz> I have learned so many things about how to not define APIs in my work with Mozilla code ;)
<bz> (probably lots more to learn, though)

I, too, am still learning a lot about what makes a good API. Like a lot of other things, it’s easier to point out poor API design than to describe examples of good API design, and that’s what this blog post is about. In particular, the venerable XPCOM datastructure PLDHashTable has been undergoing a number of changes lately, all aimed at bringing it up to date. (The question of why we have our own implementations of things that exist in the C++ standard library is for a separate blog post.)

The whole effort started with noticing that PL_DHashTableOperate is not a well-structured API. It’s necessary to quote some more of the API surface to fully understand what’s going on here:

typedef enum PLDHashOperator {
    PL_DHASH_LOOKUP = 0,  /* lookup entry */
    PL_DHASH_ADD = 1,     /* add entry */
    PL_DHASH_REMOVE = 2,  /* remove entry, or enumerator says remove */
    PL_DHASH_NEXT = 0,    /* enumerator says continue */
    PL_DHASH_STOP = 1     /* enumerator says stop */
} PLDHashOperator;

typedef PLDHashOperator (* PLDHashEnumerator)(PLDHashTable *table, PLDHashEntryHdr *hdr,
                                              uint32_t number, void *arg);

uint32_t
PL_DHashTableEnumerate(PLDHashTable *table, PLDHashEnumerator etor, void *arg);

PLDHashEntryHdr*
PL_DHashTableOperate(PLDHashTable* table, const void* key, PLDHashOperator op);

(PL_DHashTableOperate no longer exists in the tree due to other cleanup bugs; the above is approximately what it looked like at the end of 2014.)

There are several problems with the above slice of the API:

  • PL_DHashTableOperate(table, key, PL_DHASH_ADD) is a long way to spell what should have been named PL_DHashTableAdd(table, key)
  • There’s another problem with the above: it’s making a runtime decision (based on the value of op) about what should have been a compile-time decision: this particular call will always and forever be an add operation. We shouldn’t have the (admittedly small) runtime overhead of dispatching on op. It’s worth noting that compiling with LTO and a quality inliner will remove that runtime overhead, but we might as well structure the code so non-LTO compiles benefit and the code at callsites reads better.
  • Given the above definitions, you can say PL_DHashTableOperate(table, key, PL_DHASH_STOP) and nothing will complain. The PL_DHASH_NEXT and PL_DHASH_STOP values are really only for a function of type PLDHashEnumerator to return, but nothing about the above definition enforces that in any way. Similarly, you can return PL_DHASH_LOOKUP from a PLDHashEnumerator function, which is nonsensical.
  • The requirement to always return a PLDHashEntryHdr* from PL_DHashTableOperate means doing a PL_DHASH_REMOVE has to return something; it happens to return nullptr always, but it really should return void. In a similar fashion, PL_DHASH_LOOKUP always returns a non-nullptr pointer (!); one has to check PL_DHASH_ENTRY_IS_{FREE,BUSY} on the returned value. The typical style for an API like this would be to return nullptr if an entry for the given key didn’t exist, and a non-nullptr pointer if such an entry did. The free-ness or busy-ness of a given entry should be a property entirely internal to the hashtable implementation (it’s possible that some scenarios could be slightly more efficient with direct access to the busy-ness of an entry).

We might infer corresponding properties of a good API from each of the above issues:

  • Entry points for the API produce readable code.
  • The API doesn’t enforce unnecessary overhead.
  • The API makes it impossible to talk about nonsensical things.
  • It is always reasonably clear what return values from API functions describe.

Fixing the first two bulleted issues, above, was the subject of bug 1118024, done by Michael Pruett. Once that was done, we really didn’t need PL_DHashTableOperate, and removing PL_DHashTableOperate and related code was done in bug 1121202 and bug 1124920 by Michael Pruett and Nicholas Nethercote, respectively. Fixing the unusual return convention of PL_DHashTableLookup is being done in bug 1124973 by Nicholas Nethercote. Maybe once all this gets done, we can move away from C-style PL_DHashTable* functions to C++ methods on PLDHashTable itself!

Next time we’ll talk about the actual contents of a PL_DHashTable and how improvements have been made there, too.


Gregory Szorc: Commit Part Numbers and MozReview

Tue, 27/01/2015 - 21:17

It is common for commit messages in Firefox to contain strings like Part 1, Part 2, etc. See this push for bug 784841 for an extreme multi-part example.

When code review is conducted in Bugzilla, these identifiers are necessary because Bugzilla orders attachments/patches in the order they were updated or their patch title (I'm not actually sure!). If part numbers were omitted, it could be very confusing trying to figure out which order patches should be applied in.

However, when code review is conducted in MozReview, there is no need for explicit part numbers to convey ordering because the ordering of commits is implicitly defined by the repository history that you pushed to MozReview!

I argue that if you are using MozReview, you should stop writing Part N in your commit messages, as it provides little to no benefit.

I, for one, welcome this new world order: I've previously wasted a lot of time rewriting commit messages to reflect new part ordering after doing history rewriting. With MozReview, that overhead is gone and I barely pay a penalty for rewriting history, something that often produces a more reviewable series of commits and makes reviewing and landing a complex patch series significantly easier.


Cameron Kaiser: And now for something completely different: the Pono Player review and Power Macs (plus: who's really to blame for Dropbox?)

Tue, 27/01/2015 - 20:34
Regular business first: this is now a syndicated blog on Planet Mozilla. I consider this an honour that should also go a long way toward reminding folks that not only are there well-supported community tier-3 ports, but lots of people still use them. In return I promise not to bore the punters too much with vintage technology.

IonPower crossed phase 2 (compilation) yesterday -- it builds and links, and nearly immediately asserts after some brief codegen, but at this phase that's entirely expected. Next, phase 3 is to get it to build a trivial script in Baseline mode ("var i=0") and run to completion without crashing or assertions, and phase 4 is to get it to pass the test suite in Baseline-only mode, which will make it as functional as PPCBC. Phase 5 and 6 are the same, but this time for Ion. IonPower really repays most of our technical debt -- no more fragile glue code trying to keep the JaegerMonkey code generator working, substantially fewer compiler warnings, and a lot less hacks to the JIT to work around oddities of branching and branch optimization. Plus, many of the optimizations I wrote for PPCBC will transfer to IonPower, so it should still be nearly as fast in Baseline-only mode. We'll talk more about the changes required in a future blog post.

Now to the Power Mac scene. I haven't commented on Dropbox dropping PowerPC support (and 10.4/10.5) because that's been repeatedly reported by others in the blogscene and personally I rarely use Dropbox at all, having my own server infrastructure for file exchange. That said, there are many people who rely on it heavily, even a petition (which you can sign) to bring support back. But let's be clear here: do you really want to blame someone? Do you really want to blame the right someone? Then blame Apple. Apple dropped PowerPC compilation from Xcode 4; Apple dropped Rosetta. Unless you keep a 10.6 machine around running Xcode 3, you can't build (true) Universal binaries anymore -- let alone one that compiles against the 10.4 SDK -- and it's doubtful Apple would let such an app (even if you did build it) into the App Store because it's predicated on deprecated technology. Except for wackos like me who spend time building PowerPC-specific applications and/or don't give a flying cancerous pancreas whether Apple finds such work acceptable, this approach already isn't viable for a commercial business and it's becoming even less viable as Apple actively retires 10.6-capable models. So, sure, make your voices heard. But don't forget who screwed us first, and keep your vintage hardware running.

That said, I am personally aware of someone™ who is working on getting the supported Python interconnect running on OS X Power Macs, and it might be possible to rebuild Finder integration on top of that. (It's not me. Don't ask.) I'll let this individual comment if he or she wants to.

Onto the main article. As many of you may or may not know, my undergraduate degree was actually in general linguistics, and all linguists must have (obviously) some working knowledge of acoustics. I've also been a bit of a poseur audiophile too, and while I enjoy good music I especially enjoy good music that's well engineered (Alan Parsons is a demi-god).

The Por Pono Player, thus, gives me pause. In acoustics I lived and died by the Nyquist-Shannon sampling theorem, and my day job today is so heavily science and research-oriented that I really need to deal with claims in a scientific, reproducible manner. That doesn't mean I don't have an open mind or won't make unusual decisions on a music format for non-auditory reasons. For example, I prefer to keep my tracks uncompressed, even though I freely admit that I'm hard pressed to find any difference in a 256kbit/s MP3 (let alone 320), because I'd like to keep a bitwise exact copy for archival purposes and playback; in fact, I use AIFF as my preferred format simply because OS X rips directly to it, everything plays it, and everything plays it with minimum CPU overhead despite FLAC being lossless and smaller. And hard disks are cheap, and I can convert it to FLAC for my Sansa Fuze if I needed to.

So thus it is with the Por Pono Player. For $400, you can get a player that directly pumps uncompressed, high-quality remastered 24-bit audio at up to 192kHz into your ears with no downsampling and allegedly no funny business. Immediately my acoustics professor cries foul. "Cameron," she says as she writes a big fat F on this blog post, "you know perfectly well that a CD using 44.1kHz as its sampling rate will accurately reproduce sounds up to 22.05kHz without aliasing, and 16-bit audio has indistinguishable quantization error in multiple blinded studies." Yes, I know, I say sheepishly, having tried to create high-bit rate digital playback algorithms on the Commodore 64 and failed because the 6510's clock speed isn't fast enough to pump samples through the SID chip at anything much above telephone call frequencies. But I figured that if there was a chance, if there was anything, that could demonstrate a difference in audio quality that I could uncover it with a Pono Player and a set of good headphones (I own a set of Grado SR125e cans, which are outstanding for the price). So I preordered one and yesterday it arrived, in a fun wooden box:

It includes a MicroUSB charger (and cable), an SDXC MicroSD card (64GB, plus the 64GB internal storage), a fawning missive from Neil Young, the instigator of the original Kickstarter, the yellow triangular unit itself (available now in other colours), and no headphones (it's BYO headset):

My original plan was to do an A-B comparison with Pink Floyd's Dark Side of the Moon because it was originally mastered by the godlike Alan Parsons, I have the SACD 30th Anniversary master, and the album is generally considered high quality in all its forms. When I tried to do that, though, several problems rapidly became apparent:

First, the included card is SDXC, and SDXC support (and exFAT) wasn't added to OS X until 10.6.4. Although you can get exFAT support on 10.5 with OSXFUSE, I don't know how good their support is on PowerPC and it definitely doesn't work on Tiger (and I'm not aware of a module for the older MacFUSE that does run on Tiger). That limits you to SDHC cards up to 32GB at least on 10.4, which really hurts on FLAC or ALAC and especially on AIFF.

Second, the internal storage is not accessible directly to the OS. I plugged in the Pono Player to my iMac G4 and it showed up in System Profiler, but I couldn't do anything with it. The 64GB of internal storage is only accessible to the music store app, which brings us to the third problem:

Third, the Pono Music World app (a skinned version of JRiver Media Center) is Intel-only, 10.6+. You can't download tracks any other way right now, which also means you're currently screwed if you use Linux, even on an Intel Mac. And all they had was Dark Side in 44.1kHz/16 bit ... exactly the same as CD!

So I looked around for other options. HDTracks didn't have Dark Side, though they did have The (weaksauce) Endless River and The Division Bell in 96kHz/24 bit. I own both of these, but 96kHz wasn't really what I had in mind, and when I signed up to try a track it turns out they need a downloader also which is also a reskinned JRiver! And their reasoning for this in the FAQ is total crap.

Eventually I was able to find two sites that offer sample tracks I could download in TenFourFox (I had to downsample one for comparison). The first offers multiple formats in WAV, which your Power Mac actually can play, even in 24-bit (but it may be downsampled for your audio chip; if you go to /Applications/Utilities/Audio MIDI Setup.app you can see the sample rate and quantization for your audio output -- my quad G5 offers up to 24/96kHz but my iMac only has 16/44.1). The second was in FLAC, which Audacity crashed trying to convert, MacAmp Lite X wouldn't even recognize, and XiphQT (via QuickTime) played like it was being held underwater by a chainsaw (sample size mismatch, no doubt); I had to convert this by hand. I then put them onto a SDHC card and installed it in the Pono.

Yuck. I was very disappointed in the interface and LCD. I know that display quality wasn't a major concern, but it looks clunky and ugly and has terrible angles (see for yourself!) and on a $400 device that's not acceptable. The UI is very slow sometimes, even with the hardware buttons (just volume and power, no track controls), and the touch screen is very low quality. But I duly tried the built-in Neil Young track, which being an official Por Pono track turns on a special blue light to tell you it's special, and on my Grados it sounded pretty good, actually. That was encouraging. So I turned off the display and went through a few cycles of A-B testing with a random playlist between the two sets of tracks.

And ... well ... my identification abilities were almost completely statistical chance. In fact, I was slightly worse than chance would predict on the second set of tracks. I can only conclude that Harry Nyquist triumphs. With high quality headphones, presumably high quality DSPs and presumably high quality recordings, it's absolutely bupkis difference for me between CD-quality and Pono-quality.

Don't get me wrong: I am happy to hear that other people are concerned about the deficiencies in modern audio engineering -- and making it a marketable feature. We've all heard the "loudness war," for example, which dramatically compresses the dynamic range of previously luxurious tracks into a bafflingly small amplitude range which the uncultured ear, used only to quantity over quality, apparently prefers. Furthermore, early CD masters used RIAA equalization, which overdrove the treble and was completely unnecessary with digital audio, though that grave error hasn't been repeated since at least 1990 or earlier. Fortunately, assuming you get audio engineers who know what they're doing, a modern CD is every bit as a good to the human ear as a DVD-Audio disc or an SACD. And if modern music makes a return to quality engineering with high quality intermediates (where 24-bit really does make a difference) and appropriate dynamic range, we'll all be better off.

But the Pono Player doesn't live up to the hype in pretty much any respect. It has line out (which does double as a headphone port to share) and it's high quality for what it does play, so it'll be nice for my hi-fi system if I can get anything on it, but the Sansa Fuze is smaller and more convenient as a portable player and the Pono's going back in the wooden box. Frankly, it feels like it was pushed out half-baked, it's problematic if you don't own a modern Mac, and the imperceptible improvements in audio mean it's definitely not worth the money over what you already own. But that's why you read this blog: I just spent $400 so you don't have to.


Tarek Ziadé: Charity Python Code Review

Tue, 27/01/2015 - 20:23

Raising 2500 euros for a charity is hard. That's what I am trying to do for the Berlin Marathon on Alvarum.

Mind you, this is not to get a bib - I was lucky enough to get one from the lottery. It's just that it feels right to take the opportunity of this marathon to raise money for Doctors without Borders, whatever my marathon result will be. I am not getting any money out of this; I am paying all my marathon fees myself. Every penny donated goes to MSF (Doctors without Borders).

It's the first time I am doing a fundraising for a foundation and I guess that I've exhausted all the potentials donators in my family, friends and colleagues circles.

I guess I've reached the point where I have to give back something to the people that are willing to donate.

So here's a proposal: I have been doing Python coding for quite some time, have written some books in both English and French on the topic, and have worked on large-scale projects using Python. I have also given a lot of talks at Python conferences around the world.

I am not an expert in any specific field like scientific Python, but I am good at "general Python" and at designing stuff that scales.

I am offering one of the following services:

  • Python code review
  • Slides review
  • Documentation review or translation from English to French

The contract (gosh this is probably very incomplete):

  • Your project has to be under an open source license, and available online.
  • I am looking for small reviews, between 30 minutes and 4 hours of work I guess.
  • You are responsible for the initial guidance, e.g. explaining what specific review you want me to do.
  • I am allowed to say no (mostly if by any chance I have tons of proposals, or if I don't feel like I am the right person to review your code.)
  • This is on my free time so I can't really give deadlines - however, depending on the project and amount of work, I will be able to roughly estimate how long it is going to take and when I should be able to do it.
  • If I do the work you can't back off if you don't like the result of my reviews. If you do without a good reason, this is mean and I might cry a little.
  • I won't be responsible for any damage or liability done to your project because of my review.
  • I am not issuing any invoice or anything like that. The fundraising site however will issue a classical invoice when you do the donation. I am not part of that transaction nor responsible for it.
  • Once the work is done, I will tell you how long it took, and you are free to give whatever you think is fair; I will happily accept whatever you give to my fundraising. If you give 1 euro for 4 hours of work I might make a sad face, but I will just accept it.

Interested? Mail me! tarek@ziade.org

And if you just want to give to the fundraising it's here: http://www.alvarum.com/tarekziade


Air Mozilla: Engineering Meeting

Tue, 27/01/2015 - 20:00

The weekly Mozilla engineering meeting.


Michael Kaply: What About Firefox Deployment?

Tue, 27/01/2015 - 18:58

You might have noticed that I spend most of my resources around configuring Firefox and not around deploying Firefox. There are a couple reasons for that:

  1. There really isn’t a "one size fits all" solution for Firefox deployment because there are so many products that can be used to deploy software within different organizations.
  2. Most discussions around deployment devolve into a "I wish Mozilla would do a Firefox MSI" discussion.

That being said, there are some things I can recommend around deploying Firefox on Windows.

If you want to modify the Firefox installer, I’ve done a few posts on this in the past:

If you need to integrate add-ons into that install, I've posted about that as well:

You could also consider asking on the Enterprise Working Group mailing list. There's probably someone that's already figured it out for your software deployment solution.

If you really need an MSI, check out FrontMotion. They've been doing MSI work for quite a while.

And if you really want Firefox to have an official MSI, consider working on bug 598647. That's where an MSI implementation got started but never finished.


Byron Jones: happy bmo push day!

Tue, 27/01/2015 - 18:34

the following changes have been pushed to bugzilla.mozilla.org:

  • [1122269] no longer have access to https://bugzilla.mozilla.org/cvs-update.log
  • [1119184] Securemail incorrectly displays “You will have to contact bugzilla-admin@foo to reset your password.” for whines
  • [1122565] editversions.cgi should focus the version field on page load to cut down on need for mouse
  • [1124254] form.dev-engagement-event: More changes to default NEEDINFO
  • [1119988] form.dev-engagement-event: disabled accounts causes invalid/incomplete bugs to be created
  • [616197] Wrap long bug summaries in dependency graphs, to avoid horizontal scrolling
  • [1117345] Can’t choose a resolution when trying to resolve a bug (with canconfirm rights)
  • [1125320] form.dev-engagement-event: Two new questions
  • [1121594] Mozilla Recruiting Requisition Opening Process Template
  • [1124437] Backport upstream bug 1090275 to bmo/4.2 to whitelist webservice api methods
  • [1124432] Backport upstream bug 1079065 to bmo/4.2 to fix improper use of open() calls

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
