Mozilla Nederland: The Dutch Mozilla community

Ben Hearsum: Signing Software at Scale

Mozilla planet - Wed, 28/01/2015 - 17:45

Mozilla produces a lot of builds. We build Firefox for somewhere between 5 and 10 platforms (depending on how you count). We release Nightly and Aurora every single day, Beta twice a week, and Release and ESR every 6 weeks (at least). Each release contains an en-US build and nearly a hundred localized repacks. In the past the only builds we signed were Betas (which were once a week at the time), Releases, and ESRs. We had a pretty well-established manual process for it, but because it was manual it was still error prone and impractical to use for Nightly and Aurora. Signing Nightly and Aurora became an important issue when background updates were implemented, because one of the new security requirements of background updates was signed installers and MARs.

Enter: Signing Server

At this point it was clear that the only practical way to sign all the builds we need to is to automate it. It sounded crazy to me at first. How can you automate something that depends on secret keys, passphrases, and very unfriendly tools? Well, there are some tricks you need to know, and throughout the development and improvement of our "signing server" we've learned a lot. In this post I'll talk about those tricks and show you how you can use them (or even our entire signing server!) to make your signing process faster and easier.

Credit where credit is due: Chris AtLee wrote the core of the signing server and support for some of the signature types. Over time Erick Dransch, Justin Wood, Dustin Mitchell, and I have made some improvements and added support for additional types of signatures.

Tip #1: Collect passphrases at startup

This should be obvious to most, but it's very important not to store the passphrases to your private keys unencrypted. However, because they're needed to unlock the private keys when doing any signing, the server needs to have access to them somehow. We've dealt with this by asking for them when launching a signing server instance:

$ bin/python tools/release/signing/signing-server.py signing.ini
gpg passphrase:
signcode passphrase:
mar passphrase:

Because instances are started manually by someone in the small set of people with access to the passphrases, we're able to ensure that keys are never left unencrypted at rest.
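Here's a minimal sketch of that startup pattern using Python's getpass module, which prompts without echoing, so the passphrases only ever live in the process's memory. The key names are illustrative; the real server derives its list from signing.ini.

import getpass

def collect_passphrases(key_names):
    # Prompt once at startup; never write the passphrases to disk.
    return dict((name, getpass.getpass("%s passphrase: " % name))
                for name in key_names)

passphrases = collect_passphrases(["gpg", "signcode", "mar"])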

Tip #2: Don't let just any machine request signed files

One of the first problems you run into when you have an API for signing files is how to make sure you don't accidentally sign malicious files. We've dealt with this in a few ways:

  • You need a special token in order to request any type of signing. These tokens are time limited and only a small subset of segregated machines may request them (on behalf of the build machines); a sketch of one possible token scheme appears below. Since build jobs can only be created if you're able to push to hg.mozilla.org, random people are unable to submit anything for signing.
  • Only our build machines are allowed to make signing requests. Even if you managed to get hold of a valid signing token, you wouldn't be able to do anything with it without also having access to a build machine. This is a layer of security that helps us protect against a situation where an evil doer may gain access to a loaner machine or other less restricted part of our infrastructure.

We have other layers of security built in too (HTTPS, firewalls, access control, etc.), but these are the key ones built into the signing server itself.
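To illustrate the first point above, here's a hedged sketch of how a time-limited signing token could be issued and verified. This is an assumption for illustration, not the signing server's actual scheme: it HMACs an expiry timestamp with a server-side secret.

import hashlib
import hmac
import time

SECRET = b"server-side secret"  # hypothetical; load from secure config for real

def issue_token(lifetime_seconds=3600):
    # Token is "expiry:MAC(expiry)"; anyone can read the expiry, but only
    # a holder of SECRET can produce a valid MAC for it.
    expiry = str(int(time.time()) + lifetime_seconds)
    mac = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    return "%s:%s" % (expiry, mac)

def verify_token(token):
    expiry, mac = token.split(":", 1)
    expected = hmac.new(SECRET, expiry.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison, and reject anything past its expiry.
    return hmac.compare_digest(mac, expected) and time.time() < int(expiry)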

Tip #3: Use input redirection and other tricks to work around unfriendly command line tools

One of the trickiest parts of automating signing is getting all the necessary command line tools to accept input that's not coming from a console. Some of them are relatively easy and accept passphrases via stdin:

from subprocess import PIPE, Popen, STDOUT

# Write the passphrase to the child's stdin, then close it so the tool
# doesn't block waiting for more input.
proc = Popen(command, stdout=stdout, stderr=STDOUT, stdin=PIPE)
proc.stdin.write(passphrase)
proc.stdin.close()

Others, like OpenSSL, are fussier and require the use of pexpect:

proc = pexpect.spawn("openssl", args) proc.logfile_read = stdout proc.expect('Enter pass phrase') proc.sendline(passphrase)

And it's no surprise at all that OS X is the fussiest of them all. In order to sign you have to unlock the keychain by hand, run the signing command, and relock the keychain yourself:

child = pexpect.spawn("security unlock-keychain" + keychain) child.expect('password to unlock .*') child.sendline(passphrase) check_call(sign_command + [f], cwd=dir_, stdout=stdout, stderr=STDOUT) check_call(["security", "lock-keychain", keychain])

Although the code is simple in the end, a lot of trial, error, and frustration was necessary to arrive at it.

Tip #4: Sign everything you can on Linux (including Windows binaries!)

As fussy as automating tools like openssl can be on Linux, it pales in comparison to trying to automate anything on Windows. In the days before the signing server we had a scripted signing method that ran on Windows. Instead of providing the passphrase directly to the signing tool, it had to be typed into a modal window. It was "automated" with an AutoIt script that typed in the password whenever the window popped up. This was hacky, and sometimes led to issues if someone moved the mouse or pressed a key at the wrong time and changed the window focus.

Thankfully there are tools available for Linux that are capable of signing Windows binaries. We started off by using Mono's signcode, a more or less drop-in replacement for Microsoft's:

$ signcode -spc MozAuthenticode.spc -v MozAuthenticode.pvk -t http://timestamp.verisign.com/scripts/timestamp.dll -i http://www.mozilla.com -a sha1 -tr 5 -tw 60 /tmp/test.exe
Mono SignCode - version 2.4.3.1
Sign assemblies and PE files using Authenticode(tm).
Copyright 2002, 2003 Motus Technologies. Copyright 2004-2008 Novell. BSD licensed.
Enter password for MozAuthenticode.pvk:
Success

This works great for 32-bit binaries - we've been shipping binaries signed with it for years. For some reason that we haven't figured out, though, it doesn't sign 64-bit binaries properly. For those we're using osslsigncode, an OpenSSL-based tool that does Authenticode signing:

$ osslsigncode -certs MozAuthenticode.spc -key MozAuthenticode.pvk -i http://www.mozilla.com -h sha1 -in /tmp/test64.exe -out /tmp/test64-signed.exe
Enter PEM pass phrase:
Succeeded

$ osslsigncode verify /tmp/test64-signed.exe
Signature verification: ok
Number of signers: 1
    Signer #0:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1
Number of certificates: 3
    Cert #0:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #1:
        Subject: /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Root CA
    Cert #2:
        Subject: /C=US/ST=CA/L=Mountain View/O=Mozilla Corporation/CN=Mozilla Corporation
        Issuer : /C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert Assured ID Code Signing CA-1

In addition to Authenticode signing we also do GPG, APK, and a couple of Mozilla-specific types of signing (MAR, EME Voucher) on Linux. We also sign our Mac builds with the signing server. Unfortunately, the tools needed for that are only available on OS X, so we have to run separate signing servers for those.

Tip #5: Run multiple signing servers

Nobody likes a single point of failure, so we've built support into our signing client to retry against multiple instances. Even if we lose part of our signing server pool, our infrastructure stays up:

$ python signtool.py --cachedir cache -t token -n nonce -c host.cert -H dmgv2:mac-v2-signing1.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing2.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing3.srv.releng.scl3.mozilla.com:9120 -H dmgv2:mac-v2-signing4.srv.releng.scl3.mozilla.com:9120 --formats dmgv2 Firefox.app
2015-01-23 06:17:59,112 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing3.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:17:59,118 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: connection error; trying again soon
2015-01-23 06:18:00,119 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:18:00,141 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: uploading for signing
2015-01-23 06:18:10,748 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:11,848 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: processing Firefox.app.tar.gz on https://mac-v2-signing4.srv.releng.scl3.mozilla.com:9120
2015-01-23 06:19:40,480 - ed40176524e7c197f4e23f6065a64dc3c9a62e71: OK
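The failover logic on the client side amounts to a loop over the configured hosts. A hedged sketch of the idea only; sign_on_host is a hypothetical helper, not the real signtool internals:

import time

def sign_with_failover(hosts, path, fmt, attempts=3):
    # Try each signing server in turn until one accepts the upload.
    for _ in range(attempts):
        for host in hosts:
            try:
                return sign_on_host(host, path, fmt)  # hypothetical helper
            except ConnectionError:
                continue  # this instance is down; try the next one
        time.sleep(1)  # the whole pool failed; back off briefly and retry
    raise RuntimeError("no signing server available for %s" % path)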

Running your own signing server

It's easy! All of the code you need to run your own signing server is in our tools repository. You'll need to set up a virtualenv and create your own config file, but once you're ready you can attempt to start it with the following command:

python signing-server.py signing.ini

You'll be prompted for the passphrases to your private keys. If there are any problems with your config file or the passphrases, the server will fail to start. Once you've got it up and running you can try signing! get_token.py has an example of how to generate a signing token, and signtool.py will take your unsigned files and give you back signed versions. Happy signing!


Mozilla looking to add a virtual reality feature to its Firefox browser - ChristianToday

News collected via Google - Wed, 28/01/2015 - 16:47


Mozilla looking to add a virtual reality feature to its Firefox browser
ChristianToday
In order to exceed the popularity of Google's Chrome browser, Mozilla has had to come up with some drastic changes. Those changes are focused primarily on augmenting the overall experience on the company's Firefox browser. Last summer, the company ...


Robert Longson: New SVG/CSS Filter support in Firefox

Mozilla planet - Wed, 28/01/2015 - 16:36

There’s a new specification for filters that replaces the filters module in SVG 1.1. Firefox and Chrome are both implementing new features from this specification.

Firefox 30 was the first version to support feDropShadow. As well as being simpler to write, feDropShadow will be faster than the equivalent individual filters, as it skips some unnecessary colour conversions that we'd otherwise perform.

Firefox 35 has support for all CSS Filters, so for simple cases you no longer need any SVG markup to create a filter. We have examples on MDN showing how to use CSS filters.

We’ve also implemented filter chaining, this is we support multiple filter either via URLs or CSS filters on a single element.

As with earlier versions of Firefox you can apply SVG and CSS filters to both SVG and HTML elements.

As part of the rewrite to support SVG filters we've improved their performance. On Windows we use D2D to render them, taking advantage of any hardware acceleration possibilities on that platform; on other platforms we use SIMD and SSE2 to accelerate rendering. You can now use more filters without slowing your site down.



Pete Moore: Weekly review 2015-01-28

Mozilla planet - Wed, 28/01/2015 - 16:30

TaskCluster Go Client

This week I got the TaskCluster Go client talking to the TaskCluster API service endpoints.

See: https://github.com/petemoore/taskcluster-client-go/blob/master/README.md

I also now have part of the client library auto-generating, e.g. see: https://github.com/petemoore/taskcluster-client-go/blob/master/client/generated-code.go

Shouldn’t be too far from auto-generating the entire client library soon and having it working, tested, documented and published.


Mozilla Privacy Blog: How Mozilla Addresses the Privacy Paradox

Mozilla planet - Wed, 28/01/2015 - 14:39
Earlier this month, a 20-year-old NBA Clippers fan held up a sign in a crowded Washington DC arena with her phone number on it. Seasoned privacy professionals have long lamented the old adage that if you give someone … Continue reading

Ryan Kelly: Are we Python yet?

Mozilla planet - Wed, 28/01/2015 - 14:01

Mozilla celebrates Data Privacy Day with a guide to online privacy - Data manager online

News collected via Google - Wed, 28/01/2015 - 10:46


Mozilla celebrates Data Privacy Day with a guide to online privacy
Data manager online
Mozilla designed Firefox to protect and respect private information. That's why it is proud to be recognized by the Ponemon Institute as the Most Trusted Internet Company for Privacy. Personal data belongs exclusively ...
Data Privacy Day 2015 | Mozilla | Privacy - Downloadblog.it (Blog)
A day of privacy - Apogeo Online


Andy McKay: Iron Workers Memorial Bridge

Mozilla planet - Wed, 28/01/2015 - 09:00

I pretty much hate the Iron Workers Memorial Bridge (or Second Narrows). I have to cross it each time I cycle into work, and it's miserable.

It's asymmetric; the climb up from north to south is demoralising. The ride is often windy. Often wet. Usually cold. Everything that nature can throw at you, you'll encounter on the bridge.

And currently it's only got one sidewalk, which means everyone has to stop and let each other pass; the east sidewalk is being renovated and will hopefully be slightly wider when it's done. In the meantime the bridge is harder to get over, and it has a brutally dangerous off-ramp on the north side. I'm surprised no one has been killed on that yet.

Once the east sidewalk renovation is completed, they'll start on the west side. The other part of the renovation is adding a suicide fence, which will obstruct the view. But just occasionally the bridge gives you a stunning view... and just occasionally there's a break in the bike traffic and I get a photo like this:

Looking west to downtown and Lions Gate from the Second Narrows

I just realised that I might miss that view.


Get Smart On International Data Privacy Day

Mozilla Blog - Wed, 28/01/2015 - 07:23

Today is International Data Privacy Day. It is a day designed to raise awareness and promote best practices for privacy and data protection. It is a day that looks to the future and recognizes that we can and should do better as an industry. It reminds us that we need to focus on the importance of having the trust of our users. At Mozilla, we start from the baseline that privacy and security on the Web are fundamental and not optional. We are transparent with our users about our data practices and provide them options for choice and control. We seek to build trust so we can collectively create the Web our users want – the Web we all want. Still, we are working to do better.

The term “privacy” means different things to each of us. At Mozilla, we don’t pretend to know what it means to everyone or that we can determine the right course of action for each user. Rather, our goal is to provide options to our users so they can choose what is right for them. Our privacy principles help guide features specifically targeted at user privacy and security — such as Do Not Track and accountless communications through Hello. And, we have other initiatives that are aimed at changing the way industry interacts with users. For example, our Tiles initiative helps prove that advertising and other customized content can be displayed in a manner that respects users. Each of these features has been engineered with privacy in mind.

We are also experimenting with new privacy and security features. In November, we announced an experimental tool — a tracking protection feature — that allows a user to opt-out of cross-site tracking of their Web activities. This month, we’ve conducted user testing to iterate and improve the feature and will further simplify and optimize its operation. We also announced that we would support Tor’s efforts to provide users with a private and secure browsing experience. We’ve now launched Tor relays that allow Tor to expand its network and serve more users. Tor can now spend more time on innovation and less time on scalability. We’re learning through this experimentation and will continue to iterate until we can do better.

We continue to advocate for transparency in our industry with respect to the collection and use of user data, and are committed to proving — through our own actions — that there is a better way. We are excited to begin 2015 by being recognized for the second time as the Most Trusted Internet Company for Privacy by the Ponemon Institute. We want you to help us to create the Web you want. If you have ideas about other steps we can take, please get involved. In the meantime, let's celebrate International Data Privacy Day! Here are a few quick tips to get smart on privacy. And please join our Twitter Chat on January 28 at 11am PST hosted by @Firefox with guests (including from DuckDuckGo, McAfee, iKeepSafe, Privacy International and the Center for Democracy and Technology, among others).


William Lachance: mozregression updates

Mozilla planet - Wed, 28/01/2015 - 00:52

Lots of movement in mozregression (a tool for automatically determining when a regression was introduced in Firefox by bisecting builds on ftp.mozilla.org) over the last few months. Here are some highlights:

  • Support for win64 nightly and inbound builds (Kapil Singh, Vaibhav Agarwal)
  • Support for using an http cache to reduce time spent downloading builds (Sam Garrett)
  • Way better logging and printing of remaining time to finish bisection (Julien Pagès)
  • Much improved performance when bisecting inbound (Julien)
  • Support for automatic determination of whether a build is good/bad via a custom script (Julien)
  • Tons of bug fixes and other robustness improvements (me, Sam, Julien, others…)

Also thanks to Julien, we have a spiffy new website which documents many of these features. If it’s been a while, be sure to update your copy of mozregression to the latest version and check out the site for documentation on how to use the new features described above!
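If you haven't tried it, a typical bisection session looks something like this. This is a sketch from memory, so treat the exact flags as an assumption and check the new site's documentation:

$ pip install -U mozregression
$ mozregression --good 2015-01-01 --bad 2015-01-25

mozregression then downloads builds, launches each one, and asks you whether it's good or bad, narrowing the regression window with each answer until it can point at a much smaller range of changes.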

Thanks to everyone involved (especially Julien) for all the hard work. Hopefully the payoff will be a tool that’s just that much more useful to Firefox contributors everywhere. :)


VENEZUELA: Mozilla seeks to bring virtual reality to Firefox - EntornoInteligente

News collected via Google - Wed, 28/01/2015 - 00:49


VENEZUELA: Mozilla seeks to bring virtual reality to Firefox
EntornoInteligente
While Facebook has already acquired Oculus VR and Microsoft prepares for HoloLens, Mozilla (maker of Firefox) is joining the world of virtual reality, adding support to its continuous-development (Nightly) and developer (Developer ...


Justin Wood: Release Engineering does a lot…

Mozilla planet - Wed, 28/01/2015 - 00:11

Hey Everyone,

I spent a few minutes a week over the last month or two compiling a list of Release Engineering work areas. Included in that list is which repositories we "own" and work in, as well as where these repositories are mirrored. (We have copies in hg.m.o, git.m.o, and github; some live exclusively in their home.)

All of this comes while we transition to a more uniform and modern design style and philosophy.

My major takeaway here is that we do A LOT of things. (This list explicitly excludes repositories that are obsolete and unused.)

So without further ado, I present our page ReleaseEngineering/Repositories

repositoriesYou’ll notice a few things about this, we have a column for Mirrors, and RoR (Repository of Record), “Committable Location” was requested by Hal and is explicitly for cases where “Where we consider our important location the RoR, it may not necessarily be where we allow commits to”

The other interesting thing is that we automatically populate Travis and Coveralls URLs/status icons. This comes for free using some magic wiki templates I wrote.

The other piece of note here is that the table is generated from a list of pages using "SemanticMediaWiki", so the links to the repositories can be annotated with things like "where are the docs", "what applications use this repo", "who are suitable reviewers", etc. (all of those are still TODO on the releng side). A sketch of the kind of query involved appears below.
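For anyone curious about the mechanics, a Semantic MediaWiki table like this is usually driven by an #ask query over annotated pages. This is a hedged sketch only; the category and property names are invented for illustration, not the ones on our wiki:

{{#ask: [[Category:RelEng repository]]
 | ?Has repository location = Repository
 | ?Has mirror = Mirrors
 | ?Has owner = Owner(s)
 | format=table
}}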

I’m hoping to be putting together a blog post at some point about how I chose to do much of this with mediawiki, however in the meantime should any team at Mozilla find this enticing and wish to have one for themselves, much of the work I did here can be easily replicated for your team, even if you don’t need/like the multiple repo location magic of our table. I can help get you setup to add your own repos to the mix.

Remember, the only fields that are necessary are a repo name, the repo location, and owner(s). The last field can even be automatically filled in by a form on your page (see the end of Release Engineering's page for an example of that form).

Reach out to me on IRC or e-mail (information is on my mozillians profile) if you'd like this for your team and we can talk. If you don't have a need for it, you can stare at all the stuff Releng is doing and remember to thank one of us next time you see us (or inquire about what we do and point contributors our way; we're a friendly group, I promise).


Hannah Kane: A new online home for those who #teachtheweb

Mozilla planet - Tue, 27/01/2015 - 23:22

We’ve recently begun work on a new website that will serve the mentors in our Webmaker community—a gathering place for anyone who is teaching the Web. They’ll find activity kits, trainings, badges, the Web Literacy Map, and more. It will also be an online clubhouse for Webmaker Clubs, and will showcase the work of Hives to the broader network.

Our vision for the site is that it will provide pathways for sustained involvement in teaching the Web. Imagine a scenario where, after hosting a Maker Party, a college student in Pune wants to build on the momentum, but doesn’t know how. Or imagine a librarian in Seattle who is looking for activities for her weekly teen drop-in hours. Or a teacher in Buenos Aires who is looking to level up his own digital literacy skills. In each of these scenarios, we hope the person will look to this new site to find what they need.

We’re in the very early stages of building out the site. One of our first challenges is to figure out the best way to organize all of the content.

Fortunately, we were able to find 14 members of the community who were willing to participate in a “virtual card-sorting” activity. We gave each of the volunteers a list of 22 content areas (e.g. “Find a Teaching Kit,” “Join a Webmaker Club,” “Participate in a community discussion”), and asked them to organize the items into groups that made sense to them.

The results were fascinating. Some grouped the content by specific programs, concepts, or offerings. Others grouped by function (e.g “Participate,” “Learn,” “Lead”). Others organized by identity (e.g. “Learner” or “Mentor”). Still others grouped by level of expertise needed.

We owe a debt of gratitude to those who participated in the research. We were able to better understand the variety of mental models, and we’re currently using those insights to build out some wireframes to test in the next heartbeat.

Once we firm up the information architecture, we’ll build and launch v1 of the site (our goal is to launch it by the end of Q1). From there, we’ll continue to iterate, adding more functionality and resources to meet the needs of our mentor community.

Future iterations will likely include:

  • Improving the way we share and discover curriculum modules
  • Enhancing our online training platform
  • Providing tools for groups to self-organize
  • Making improvements to our badging platform
  • Incorporating the next version of the Web Literacy Map

Stay tuned for more updates and opportunities to provide feedback throughout the process. We’ve also started a Discourse thread for continuing discussion of the platform.



Mozilla releases emergency update of Firefox due to crashes - myce.com

News collected via Google - Tue, 27/01/2015 - 22:59


Mozilla releases emergency update of Firefox due to crashes
myce.com
Mozilla has released an unplanned emergency update with version number 35.0.1 due to all kinds of crashes and other issues. The update comes two weeks after the release of Firefox 35. The update fixes issues where the browser could crash when using ...
Experience Virtual Reality on Your Browser With Firefox Nightly Builds (WCCFtech)



Christian Heilmann: Where would people like to see me – some interesting answers

Mozilla planet - Tue, 27/01/2015 - 22:50

Will code for Bananas

For pure Shits and Giggles™ I put up a form yesterday asking people where I should try to work now that I've left Mozilla. By no means have I approached all the companies I listed (hence an "other" option). I just wanted to see what people see me as and where I could do some good work. Of course, some of the answers disagreed and made a lot of assumptions:

Your ego knows no bounds. Putting companies that have already turned you down is very special behavior.

This is utterly true. I applied at Yahoo in 1997 and didn't get the job. I then worked at Yahoo for almost five years, a few years later. I should not have done that. Companies don't change, and once you have a certain skillset there is no way you could ever learn something different that might make you appealing to others. Know your place, and all that.

Sarcasm aside, I am always amazed at how lucky we are to have choices in our market. There is not a single day I am not both baffled by and very, very thankful for being able to do what I like and make a living with it. I feel like a fraud many a time, and I know many other people with a seemingly "big ego" who feel the same. The trick is not to let that stop you, but to understand that it makes you a better person, colleague and employee. We should strive to get better all the time, and this means reaching beyond what you think you can achieve.

I’m especially flattered that people thought I had already been contacted by all the companies I listed and asked for people to pick for me. I love working in the open, but that’s a bit too open, even for my taste. I am not that lucky – I don’t think anybody is.

The answers were pretty funny, and of course skewed, as I gave a few options rather than leaving it completely open. The final "wish of the people" list is:

  • W3C (108 votes)
  • Canonical (39 votes)
  • Microsoft (38 votes)
  • Google (37 votes)
  • Facebook (14 votes)
  • Twitter (9 votes)
  • Mozilla (7 votes)
  • PubNub (26 votes)

Pubnub’s entries were having exceedingly more exclamation points the more got submitted – I don’t know what happened there.

Other options with multiple votes were Apple, Adobe, CozyCloud, EFF, Futurice, Khan Academy, Opera, Spotify (I know who did that!) and the very charming “Retirement”.

Options labeled “fascinating” were:

  • A Circus
  • Burger King (that might still be a problem as I used to work on McDonalds.co.uk – might be a conflict of interest)
  • BangBros (no idea what that might be – a Nintendo Game?)
  • Catholic Church
  • Derick[SIC] Zoolander’s School for kids who can’t read good
  • Kevin Smith’s Movie
  • “Pizza chef at my local restaurant” (I was Pizza delivery guy for a while, might be possible to level up)
  • Playboy (they did publish Fahrenheit 451, let’s not forget that)
  • Taco Bell (this would have to be remote, or a hell of a commute)
  • The Avengers (I could be persuaded, but it probably will be an advisory role, Tony Stark style)
  • UKIP (erm, no, thanks)
  • Zombocom and
  • Starbucks barista. (this would mean I couldn’t go to Sweden any longer – they really like their coffee and Starbucks is seen as the Antichrist by many)

Some of the answers gave me super powers I don't have, but they show that people would love to have people like me talk to others outside the bubble more:

  • “NASA” (I really, really think I have nothing they need)
  • “A book publisher (they need help to move into the 21st century)”
  • “Data.gov or another country’s open data platform.” (did work with that, might re-visit it)
  • “GCHQ; Be the bridge between our worlds”
  • “Spanish Government – Podemos CTO” (they might realise I am not Spanish)
  • “any bank for a11y online banking :-/”

Some answers showed a need to vent:

  • “Anything but Google or Facebook for God sake!”
  • “OK, and option 2: perhaps Twitter? You might improve their horrible JS code in the website! ;)”

The most confusing answers were "My butthole", which sounds cramped and not a creative working environment, and "Who are you?", which begs the answer "Why did you fill in this form?".

Many of the answers showed a lot of trust in me and made me feel all warm and fuzzy, and I want to thank whoever gave those:

  • be CTO of awesome startup
  • Enjoy life Chris!
  • Start something of your own. You rock too hard, anyway!
  • you were doing just fine. choose the one where your presence can be heard the loudest. cheers!
  • you’ve left Mozilla for something else, so you are jobless for a week or so! :-)
  • Yourself then hire me and let me tap your Dev knowledge :D

I have a new job. I am starting on Monday and will announce it in probably too much detail here on Thursday. Thanks to everyone who took part in this little exercise. I have an idea of what I need to do in my new job, and the ideas listed here and the results showed me that I am on the right track.


Stormy Peters: Can or Can’t?

Mozilla planet - Tue, 27/01/2015 - 22:35

Can read or can’t eat books?

What I love about open source is that it’s a “can” world by default. You can do anything you think needs doing and nobody will tell you that you can’t. (They may not take your patch but they won’t tell you that you can’t create it!)

It’s often easier to define things by what they are not or what we can’t do. And the danger of that is you create a culture of “can’t”. Any one who has raised kids or animals knows this. “No, don’t jump.” You can’t jump on people. “No, off the sofa.” You can’t be on the furniture. “No, don’t lick!” You can’t slobber on me. And hopefully when you realize it, you can fix it. “You can have this stuffed animal (instead of my favorite shoe). Good dog!”

Often when we aren’t sure how to do something, we fill the world with can’ts. “I don’t know how we should do this, but I know you can’t do that on a proprietary mailing list.” “I don’t know how I should lose weight, but I know you can’t have dessert.” I don’t know. Can’t. Don’t know. Can’t. Unsure. Can’t.

Watch the world around you. Is your world full of can’ts or full of “can do”s? Can you change it for the better?



Nathan Froyd: examples of poor API design, 1/N – pldhash functions

Mozilla planet - Tue, 27/01/2015 - 21:39

The other day in the #content IRC channel:

<bz> I have learned so many things about how to not define APIs in my work with Mozilla code ;)
<bz> (probably lots more to learn, though)

I, too, am still learning a lot about what makes a good API. Like a lot of other things, it's easier to point out poor API design than to describe examples of good API design, and that's what this blog post is about. In particular, the venerable XPCOM data structure PLDHashTable has been undergoing a number of changes lately, all aimed at bringing it up to date. (The question of why we have our own implementations of things that exist in the C++ standard library is for a separate blog post.)

The whole effort started with noticing that PL_DHashTableOperate is not a well-structured API. It’s necessary to quote some more of the API surface to fully understand what’s going on here:

typedef enum PLDHashOperator {
    PL_DHASH_LOOKUP = 0,  /* lookup entry */
    PL_DHASH_ADD = 1,     /* add entry */
    PL_DHASH_REMOVE = 2,  /* remove entry, or enumerator says remove */
    PL_DHASH_NEXT = 0,    /* enumerator says continue */
    PL_DHASH_STOP = 1     /* enumerator says stop */
} PLDHashOperator;

typedef PLDHashOperator (* PLDHashEnumerator)(PLDHashTable *table, PLDHashEntryHdr *hdr,
                                              uint32_t number, void *arg);

uint32_t PL_DHashTableEnumerate(PLDHashTable *table, PLDHashEnumerator etor, void *arg);

PLDHashEntryHdr* PL_DHashTableOperate(PLDHashTable* table, const void* key, PLDHashOperator op);

(PL_DHashTableOperate no longer exists in the tree due to other cleanup bugs; the above is approximately what it looked like at the end of 2014.)

There are several problems with the above slice of the API:

  • PL_DHashTableOperate(table, key, PL_DHASH_ADD) is a long way to spell what should have been named PL_DHashTableAdd(table, key)
  • There’s another problem with the above: it’s making a runtime decision (based on the value of op) about what should have been a compile-time decision: this particular call will always and forever be an add operation. We shouldn’t have the (admittedly small) runtime overhead of dispatching on op. It’s worth noting that compiling with LTO and a quality inliner will remove that runtime overhead, but we might as well structure the code so non-LTO compiles benefit and the code at callsites reads better.
  • Given the above definitions, you can say PL_DHashTableOperate(table, key, PL_DHASH_STOP) and nothing will complain. The PL_DHASH_NEXT and PL_DHASH_STOP values are really only for a function of type PLDHashEnumerator to return, but nothing about the above definition enforces that in any way. Similarly, you can return PL_DHASH_LOOKUP from a PLDHashEnumerator function, which is nonsensical.
  • The requirement to always return a PLDHashEntryHdr* from PL_DHashTableOperate means doing a PL_DHASH_REMOVE has to return something; it happens to return nullptr always, but it really should return void. In a similar fashion, PL_DHASH_LOOKUP always returns a non-nullptr pointer (!); one has to check PL_DHASH_ENTRY_IS_{FREE,BUSY} on the returned value. The typical style for an API like this would be to return nullptr if an entry for the given key didn’t exist, and a non-nullptr pointer if such an entry did. The free-ness or busy-ness of a given entry should be a property entirely internal to the hashtable implementation (it’s possible that some scenarios could be slightly more efficient with direct access to the busy-ness of an entry).

We might infer corresponding properties of a good API from each of the above issues:

  • Entry points for the API produce readable code.
  • The API doesn’t enforce unnecessary overhead.
  • The API makes it impossible to talk about nonsensical things.
  • It is always reasonably clear what return values from API functions describe.

Fixing the first two bulleted issues, above, was the subject of bug 1118024, done by Michael Pruett. Once that was done, we really didn’t need PL_DHashTableOperate, and removing PL_DHashTableOperate and related code was done in bug 1121202 and bug 1124920 by Michael Pruett and Nicholas Nethercote, respectively. Fixing the unusual return convention of PL_DHashTableLookup is being done in bug 1124973 by Nicholas Nethercote. Maybe once all this gets done, we can move away from C-style PL_DHashTable* functions to C++ methods on PLDHashTable itself!
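To make the improved shape concrete, here's a hedged sketch of the per-operation entry points (signatures approximated from the discussion above, not copied from the tree):

// One entry point per operation, each with an unambiguous return value.
// Approximate shapes, for illustration only.

// Returns the new or existing entry, or nullptr on out-of-memory.
PLDHashEntryHdr* PL_DHashTableAdd(PLDHashTable* aTable, const void* aKey);

// Returns the matching entry, or nullptr if no entry exists for aKey.
PLDHashEntryHdr* PL_DHashTableLookup(PLDHashTable* aTable, const void* aKey);

// Removal has nothing sensible to return, so it returns nothing.
void PL_DHashTableRemove(PLDHashTable* aTable, const void* aKey);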

Next time we’ll talk about the actual contents of a PL_DHashTable and how improvements have been made there, too.


Gregory Szorc: Commit Part Numbers and MozReview

Mozilla planet - Tue, 27/01/2015 - 21:17

It is common for commit messages in Firefox to contain strings like Part 1, Part 2, etc. See this push for bug 784841 for an extreme multi-part example.

When code review is conducted in Bugzilla, these identifiers are necessary because Bugzilla orders attachments/patches in the order they were updated or their patch title (I'm not actually sure!). If part numbers were omitted, it could be very confusing trying to figure out which order patches should be applied in.

However, when code review is conducted in MozReview, there is no need for explicit part numbers to convey ordering because the ordering of commits is implicitly defined by the repository history that you pushed to MozReview!

I argue that if you are using MozReview, you should stop writing Part N in your commit messages, as it provides little to no benefit.

I, for one, welcome this new world order: I've previously wasted a lot of time rewriting commit messages to reflect new part ordering after doing history rewriting. With MozReview, that overhead is gone and I barely pay a penalty for rewriting history, something that often produces a more reviewable series of commits and makes reviewing and landing a complex patch series significantly easier.


Cameron Kaiser: And now for something completely different: the Pono Player review and Power Macs (plus: who's really to blame for Dropbox?)

Mozilla planet - Tue, 27/01/2015 - 20:34
Regular business first: this is now a syndicated blog on Planet Mozilla. I consider this an honour that should also go a long way toward reminding folks that not only are there well-supported community tier-3 ports, but lots of people still use them. In return I promise not to bore the punters too much with vintage technology.

IonPower crossed phase 2 (compilation) yesterday -- it builds and links, and nearly immediately asserts after some brief codegen, but at this phase that's entirely expected. Next, phase 3 is to get it to build a trivial script in Baseline mode ("var i=0") and run to completion without crashing or assertions, and phase 4 is to get it to pass the test suite in Baseline-only mode, which will make it as functional as PPCBC. Phase 5 and 6 are the same, but this time for Ion. IonPower really repays most of our technical debt -- no more fragile glue code trying to keep the JaegerMonkey code generator working, substantially fewer compiler warnings, and a lot less hacks to the JIT to work around oddities of branching and branch optimization. Plus, many of the optimizations I wrote for PPCBC will transfer to IonPower, so it should still be nearly as fast in Baseline-only mode. We'll talk more about the changes required in a future blog post.

Now to the Power Mac scene. I haven't commented on Dropbox dropping PowerPC support (and 10.4/10.5) because that's been repeatedly reported by others in the blogscene and personally I rarely use Dropbox at all, having my own server infrastructure for file exchange. That said, there are many people who rely on it heavily, even a petition (which you can sign) to bring support back. But let's be clear here: do you really want to blame someone? Do you really want to blame the right someone? Then blame Apple. Apple dropped PowerPC compilation from Xcode 4; Apple dropped Rosetta. Unless you keep a 10.6 machine around running Xcode 3, you can't build (true) Universal binaries anymore -- let alone one that compiles against the 10.4 SDK -- and it's doubtful Apple would let such an app (even if you did build it) into the App Store because it's predicated on deprecated technology. Except for wackos like me who spend time building PowerPC-specific applications and/or don't give a flying cancerous pancreas whether Apple finds such work acceptable, this approach already isn't viable for a commercial business and it's becoming even less viable as Apple actively retires 10.6-capable models. So, sure, make your voices heard. But don't forget who screwed us first, and keep your vintage hardware running.

That said, I am personally aware of someone™ who is working on getting the supported Python interconnect running on OS X Power Macs, and it might be possible to rebuild Finder integration on top of that. (It's not me. Don't ask.) I'll let this individual comment if he or she wants to.

Onto the main article. As many of you may or may not know, my undergraduate degree was actually in general linguistics, and all linguists must have (obviously) some working knowledge of acoustics. I've also been a bit of a poseur audiophile too, and while I enjoy good music I especially enjoy good music that's well engineered (Alan Parsons is a demi-god).

The Por Pono Player, thus, gives me pause. In acoustics I lived and died by the Nyquist-Shannon sampling theorem, and my day job today is so heavily science and research-oriented that I really need to deal with claims in a scientific, reproducible manner. That doesn't mean I don't have an open mind or won't make unusual decisions on a music format for non-auditory reasons. For example, I prefer to keep my tracks uncompressed, even though I freely admit that I'm hard pressed to find any difference in a 256kbit/s MP3 (let alone 320), because I'd like to keep a bitwise exact copy for archival purposes and playback; in fact, I use AIFF as my preferred format simply because OS X rips directly to it, everything plays it, and everything plays it with minimum CPU overhead despite FLAC being lossless and smaller. And hard disks are cheap, and I can convert it to FLAC for my Sansa Fuze if I needed to.

So thus it is with the Por Pono Player. For $400, you can get a player that directly pumps uncompressed, high-quality remastered 24-bit audio at up to 192kHz into your ears with no downsampling and allegedly no funny business. Immediately my acoustics professor cries foul. "Cameron," she says as she writes a big fat F on this blog post, "you know perfectly well that a CD using 44.1kHz as its sampling rate will accurately reproduce sounds up to 22.05kHz without aliasing, and 16-bit audio has indistinguishable quantization error in multiple blinded studies." Yes, I know, I say sheepishly, having tried to create high-bit rate digital playback algorithms on the Commodore 64 and failed because the 6510's clock speed isn't fast enough to pump samples through the SID chip at anything much above telephone call frequencies. But I figured that if there was a chance, if there was anything, that could demonstrate a difference in audio quality that I could uncover it with a Pono Player and a set of good headphones (I own a set of Grado SR125e cans, which are outstanding for the price). So I preordered one and yesterday it arrived, in a fun wooden box:

It includes a MicroUSB charger (and cable), an SDXC MicroSD card (64GB, plus the 64GB internal storage), a fawning missive from Neil Young, the instigator of the original Kickstarter, the yellow triangular unit itself (available now in other colours), and no headphones (it's BYO headset):

My original plan was to do an A-B comparison with Pink Floyd's Dark Side of the Moon because it was originally mastered by the godlike Alan Parsons, I have the SACD 30th Anniversary master, and the album is generally considered high quality in all its forms. When I tried to do that, though, several problems rapidly became apparent:

First, the included card is SDXC, and SDXC support (and exFAT) wasn't added to OS X until 10.6.4. Although you can get exFAT support on 10.5 with OSXFUSE, I don't know how good their support is on PowerPC and it definitely doesn't work on Tiger (and I'm not aware of a module for the older MacFUSE that does run on Tiger). That limits you to SDHC cards up to 32GB at least on 10.4, which really hurts on FLAC or ALAC and especially on AIFF.

Second, the internal storage is not accessible directly to the OS. I plugged in the Pono Player to my iMac G4 and it showed up in System Profiler, but I couldn't do anything with it. The 64GB of internal storage is only accessible to the music store app, which brings us to the third problem:

Third, the Pono Music World app (a skinned version of JRiver Media Center) is Intel-only, 10.6+. You can't download tracks any other way right now, which also means you're currently screwed if you use Linux, even on an Intel Mac. And all they had was Dark Side in 44.1kHz/16 bit ... exactly the same as CD!

So I looked around for other options. HDTracks didn't have Dark Side, though they did have The (weaksauce) Endless River and The Division Bell in 96kHz/24 bit. I own both of these, but 96kHz wasn't really what I had in mind, and when I signed up to try a track it turned out they need a downloader as well, which is also a reskinned JRiver! And their reasoning for this in the FAQ is total crap.

Eventually I was able to find two sites that offer sample tracks I could download in TenFourFox (I had to downsample one for comparison). The first offers multiple formats in WAV, which your Power Mac actually can play, even in 24-bit (but it may be downsampled for your audio chip; if you go to /Applications/Utilities/Audio MIDI Setup.app you can see the sample rate and quantization for your audio output -- my quad G5 offers up to 24/96kHz but my iMac only has 16/44.1). The second was in FLAC, which Audacity crashed trying to convert, MacAmp Lite X wouldn't even recognize, and XiphQT (via QuickTime) played like it was being held underwater by a chainsaw (sample size mismatch, no doubt); I had to convert this by hand. I then put them onto a SDHC card and installed it in the Pono.

Yuck. I was very disappointed in the interface and LCD. I know that display quality wasn't a major concern, but it looks clunky and ugly and has terrible angles (see for yourself!) and on a $400 device that's not acceptable. The UI is very slow sometimes, even with the hardware buttons (just volume and power, no track controls), and the touch screen is very low quality. But I duly tried the built-in Neil Young track, which being an official Por Pono track turns on a special blue light to tell you it's special, and on my Grados it sounded pretty good, actually. That was encouraging. So I turned off the display and went through a few cycles of A-B testing with a random playlist between the two sets of tracks.

And ... well ... my identification abilities were almost completely statistical chance. In fact, I was slightly worse than chance would predict on the second set of tracks. I can only conclude that Harry Nyquist triumphs. With high quality headphones, presumably high quality DSPs and presumably high quality recordings, it's absolutely bupkis difference for me between CD-quality and Pono-quality.
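The back-of-the-envelope numbers behind that conclusion are standard results, stated here for reference:

f_max = f_s / 2 = 44100 / 2 = 22050 Hz
(Nyquist: a 44.1 kHz sampling rate captures everything below 22.05 kHz, comfortably above the roughly 20 kHz ceiling of human hearing.)

SNR ≈ 6.02 × N + 1.76 dB = 6.02 × 16 + 1.76 ≈ 98 dB
(The theoretical quantization noise floor of N-bit PCM; for 16-bit audio that is about 98 dB of dynamic range, more than nearly any playback chain or listening room can use.)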

Don't get me wrong: I am happy to hear that other people are concerned about the deficiencies in modern audio engineering -- and making it a marketable feature. We've all heard the "loudness war," for example, which dramatically compresses the dynamic range of previously luxurious tracks into a bafflingly small amplitude range which the uncultured ear, used only to quantity over quality, apparently prefers. Furthermore, early CD masters used RIAA equalization, which overdrove the treble and was completely unnecessary with digital audio, though that grave error hasn't been repeated since at least 1990 or earlier. Fortunately, assuming you get audio engineers who know what they're doing, a modern CD is every bit as a good to the human ear as a DVD-Audio disc or an SACD. And if modern music makes a return to quality engineering with high quality intermediates (where 24-bit really does make a difference) and appropriate dynamic range, we'll all be better off.

But the Pono Player doesn't live up to the hype in pretty much any respect. It has line out (which does double as a headphone port to share) and it's high quality for what it does play, so it'll be nice for my hi-fi system if I can get anything on it, but the Sansa Fuze is smaller and more convenient as a portable player and the Pono's going back in the wooden box. Frankly, it feels like it was pushed out half-baked, it's problematic if you don't own a modern Mac, and the imperceptible improvements in audio mean it's definitely not worth the money over what you already own. But that's why you read this blog: I just spent $400 so you don't have to.

