Mozilla Nederland: the Dutch Mozilla community

Mozilla Fesses up to Accidental Data Breach - Infosecurity Magazine

News gathered via Google - Wed, 06/08/2014 - 09:02

“Starting on about June 23, for a period of 30 days, a data sanitization process of the Mozilla Developer Network (MDN) site database had been failing, resulting in the accidental disclosure of MDN email addresses of about 76,000 users and encrypted ...

Categories: Mozilla-nl planet

Marco Zehe: Started a 30 days with Android experiment

Mozilla planet - Wed, 06/08/2014 - 07:54

After revisiting the results of my Switching to Android experiment and finding most (like 99.5%) items in order now, I decided on Tuesday to conduct a serious 30 days with Android endeavor. I have handed my iPhone in to my wife, and she’s keeping (confiscating) it for me.

I will also keep a diary of experiences. However, since that is very specific and not strictly Mozilla-related, and since this blog is syndicated on Planet Mozilla, I have decided to create a new blog especially for that occasion. You’re welcome to follow along there if you’d like, and do feel free to comment! Day One is already posted.

Also, if you follow me on Twitter, the hashtag #30DaysWithAndroid will be used to mark tweets on this topic and to announce the new posts as I write them, hopefully every day.

It’s exciting, and I still feel totally crazy about it! :) Well, here goes!

Categories: Mozilla-nl planet

Joshua Cranmer: Why email is hard, part 7: email security and trust

Mozilla planet - Wed, 06/08/2014 - 05:39
This post is part 7 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. This part discusses the problem of trust.

At a technical level, S/MIME and PGP (or at least PGP/MIME) use cryptography essentially identically. Yet the two are treated as radically different models of email security because they diverge on the most important question of public key cryptography: how do you trust the identity of a public key? Trust is critical, as it is the only way to stop an active man-in-the-middle (MITM) attack. MITM attacks are actually easier to pull off in email, since all email messages effectively have to pass through both the sender's and the recipients' email servers [1], allowing attackers to pull off permanent, long-lasting MITM attacks [2].

S/MIME uses the same trust model that SSL uses, based on X.509 certificates and certificate authorities. X.509 certificates effectively work by providing a certificate that says who you are, signed by another authority. In the original concept (as you might guess from the name "X.509"), the trusted authority was your telecom provider, and the certificates were furthermore intended to be part of the global X.500 directory—a natural extension of the OSI internet model. The OSI model of the internet never gained traction, and the trusted telecom providers were replaced with trusted root CAs.
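
To make that concrete, here is roughly how you can inspect the signed-by relationship on any TLS server from a shell, assuming an OpenSSL install; example.com is just a placeholder host:

openssl s_client -connect example.com:443 </dev/null 2>/dev/null |
  openssl x509 -noout -subject -issuer
# prints the server certificate's subject and the authority that signed it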

PGP, by contrast, uses a trust model that's generally known as the Web of Trust. Every user has a PGP key (containing their identity and their public key), and users can sign others' public keys. Trust generally flows from these signatures: if you trust a user, you know the keys that they sign are correct. The name "Web of Trust" comes from the vision that trust flows along the paths of signatures, building a tight web of trust.
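
In GnuPG terms, verifying and signing someone's key looks roughly like this (a sketch; the key ID is hypothetical):

gpg --recv-keys 0xCAFE1234    # fetch the key from a keyserver (hypothetical ID)
gpg --fingerprint 0xCAFE1234  # compare the fingerprint out of band first
gpg --sign-key 0xCAFE1234     # assert that you verified the owner's identity
gpg --send-keys 0xCAFE1234    # publish your signature so trust can flow from it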

And now for the controversial part of the post, the comparisons and critiques of these trust models. A disclaimer: I am not a security expert, although I am a programmer who revels in dreaming up arcane edge cases. I also don't use PGP at all, and use S/MIME only to a very limited extent for some Mozilla work [3], although I made a few abortive attempts to dogfood it in the past. I've attempted to replace personal experience with comprehensive research [4], but most existing critiques and comparisons of these two trust models are about 10-15 years old and predate several changes to CA certificate practices.

A basic tenet of development that I have found is that the average user is fairly ignorant. At the same time, a lot of the defense of trust models, both CAs and the Web of Trust, tends to hinge on configurability. How many people, for example, know how to add or remove a CA root from Firefox, Windows, or Android? Even among the subgroup of Mozilla developers, I suspect the people who know how to do so are rather few. Or in the case of PGP, how many people know how to change the maximum path length? Or even understand the security implications of doing so?
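
For reference, here is roughly what those operations look like on the command line; this is a sketch, and the NSS profile path and the CA nickname are placeholders (max-cert-depth is GnuPG's name for the maximum path length):

certutil -d sql:$HOME/.mozilla/firefox/<profile> -L                  # list trusted roots in an NSS database
certutil -d sql:$HOME/.mozilla/firefox/<profile> -D -n "Example CA"  # remove one by nickname
echo "max-cert-depth 5" >> ~/.gnupg/gpg.conf                         # change PGP's maximum path length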

Seen in the light of ignorant users, the Web of Trust is a UX disaster. Its entire security model is predicated on having users precisely specify how much they trust other people to trust others (ultimate, full, marginal, none, unknown) and also on having them continually perform out-of-band verification procedures and publicly report those steps. In 1998, a seminal paper on the usability of a GUI for PGP encryption came to the conclusion that the UI was effectively unusable, to the point that only a third of the users were able to send an encrypted email (and even then, only with significant help from the test administrators), and a quarter managed to publicly announce their private keys at some point, which is pretty much the worst thing you can do. They also noted that the complex trust UI was never used by participants, although the failure of many users to get that far makes generalization dangerous [5]. While newer versions of security UI have undoubtedly fixed many of the original issues found (in no small part due to the paper, one of the first to argue that usability is integral, not orthogonal, to security), I have yet to find an actual study on the usability of the trust model itself.

The Web of Trust has other faults. The notion of "marginal" trust, it turns out, is rather broken: if you marginally trust a user whose two keys both signed another person's key, that's the same as fully trusting a user with one key who signed that key. There are several proposals for different trust formulas [6], but to my knowledge none of them have caught on in practice.
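
For what it's worth, GnuPG's classic trust model exposes its formula as plain knobs in ~/.gnupg/gpg.conf; the values below are, to my knowledge, the defaults:

completes-needed 1   # fully trusted signatures needed to consider a key valid
marginals-needed 3   # marginally trusted signatures needed instead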

A hidden fault is associated with its manner of presentation: in sharp contrast to CAs, the Web of Trust appears not to delegate trust, but any practical widespread deployment needs to solve the problem of contacting people with whom you have had no prior contact. Combined with the need to bootstrap new users, this implies that there need to be some keys that have signed a lot of other keys and that are essentially default-trusted—in other words, a CA, a fact sometimes lost on advocates of the Web of Trust.

That said, a valid point in favor of the Web of Trust is that it more easily allows people to distrust CAs if they wish to. While I'm skeptical of its utility to a broader audience, the ability to do so is crucial for a not-insignificant portion of the population, and it's important enough to be explicitly called out.

X.509 certificates are most commonly discussed in the context of SSL/TLS connections, so I'll discuss them in that context as well, as the implications for S/MIME are mostly the same. Almost all criticism of this trust model essentially boils down to a single complaint: certificate authorities aren't trustworthy. A historical criticism is that the addition of CAs to the main root trust stores was ad hoc. Since then, however, the main oligopoly of these root stores (Microsoft, Apple, Google, and Mozilla) has made its policies public and clear [7]. The introduction of the CA/Browser Forum in 2005, with a collection of major CAs and the major browsers as members [8], also helps in articulating common policies. These policies, simplified immensely, boil down to:

  1. You must verify information (depending on certificate type). This information must be relatively recent.
  2. You must not use weak algorithms in your certificates (e.g., no MD5).
  3. You must not make certificates that are valid for too long.
  4. You must maintain revocation checking services.
  5. You must have fairly stringent physical and digital security practices and intrusion detection mechanisms.
  6. You must be [externally] audited every year to verify that you follow the above rules.
  7. If you screw up, we can kick you out.

I'm not going to claim that this is necessarily the best policy or even that any policy can feasibly stop intrusions from happening. But it's a policy, so CAs must abide by some set of rules.

Another CA criticism is the fear that they may be suborned by national government spy agencies. I find this claim underwhelming, considering that the number of certificates acquired by intrusions that were used in the wild exceeds the number of certificates acquired by national governments that were used in the wild: 1 and 0, respectively. Yet no one complains about the untrustworthiness of CAs due to their ability to be hacked by outsiders. Another criticism is that CAs are controlled by profit-seeking corporations, which misses the point: the business of CAs is not selling certificates but selling their access to the root databases. As we will see shortly, jeopardizing that access is a great way for a CA to go out of business.

To understand issues involving CAs in greater detail, there are two CAs that are particularly useful to look at. The first is CACert. CACert is favored by many for its attempt to handle X.509 certificates in a Web of Trust model, so invariably every public discussion about CACert ends up devolving into an attack on other CAs for their perceived capture by national governments or corporate interests. Yet what many of the proponents of including CACert miss (or dismiss) is the fact that CACert actually failed the required audit, and it is unlikely to ever pass one. This shows a central failure of both CAs and the Web of Trust: different people have different definitions of "trust," and in the case of CACert, some people favor a subjective definition (I trust their owners because they're not evil) when an objective definition fails (in this case, that the root signing key is securely kept).

The other CA of note here is DigiNotar. In July 2011, some hackers managed to acquire a few fraudulent certificates by hacking into DigiNotar's systems. By late August, people had become aware of these certificates being used in practice [9] to intercept communications, mostly in Iran. The use appears to have been caught after Chromium updates failed due to invalid certificate fingerprints. After it became clear that the fraudulent certificates were not limited to a single fake Google certificate, and that DigiNotar had failed to notify potentially affected companies of its breach, DigiNotar was swiftly removed from all of the trust databases. It ended up declaring bankruptcy within two weeks.

DigiNotar indicates several things. One, SSL MITM attacks are not theoretical (I have seen at least two or three security experts advising pre-DigiNotar that SSL MITM attacks are "theoretical" and therefore the wrong target for security mechanisms). Two, keeping the trust of browsers is necessary for the commercial operation of CAs. Three, the notion that a CA is "too big to fail" is false: DigiNotar played an important role in the Dutch community as a major CA and as an operator of the Staat der Nederlanden government PKI. Yet when DigiNotar screwed up and lost its trust, it was swiftly kicked out despite this role. I suspect that even Verisign could be kicked out if it manages to screw up badly enough.

This isn't to say that the CA model isn't problematic. But the source of its problems is that delegating trust isn't a feasible model in the first place, a problem it shares with the Web of Trust. Different notions of what "trust" actually means, and the uncertainty that gets introduced as chains of trust get longer, make delegated trust vulnerable to both social engineering and technical attacks. There appears to be an increasing consensus that the best way forward is some variant of key pinning, much akin to how SSH works: once you know someone's public key, you complain if that public key appears to change, even if it appears to be "trusted." This does leave people open to attacks on first use, and the question of what to do when you need to legitimately re-key is not easy to solve.
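
A minimal sketch of that SSH-like behavior, using nothing but openssl (example.com and the ~/.pins directory are placeholders, not a real tool):

mkdir -p ~/.pins
# First use: trust and record the server's certificate fingerprint
openssl s_client -connect example.com:443 </dev/null 2>/dev/null |
  openssl x509 -noout -fingerprint -sha256 > ~/.pins/example.com
# Later connections: complain if the fingerprint ever changes
openssl s_client -connect example.com:443 </dev/null 2>/dev/null |
  openssl x509 -noout -fingerprint -sha256 |
  diff -q - ~/.pins/example.com || echo "WARNING: key changed (attack, or legitimate re-key?)"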

In short, both CAs and the Web of Trust have issues. Whether or not you should prefer S/MIME or PGP ultimately comes down to the very conscious question of how you want to deal with trust—a question without a clear, obvious answer. If I appear to be painting CAs and S/MIME in a positive light and the Web of Trust and PGP in a negative one in this post, it is more because I am trying to focus on the positions less commonly taken to balance perspective on the internet. In my next post, I'll round out the discussion on email security by explaining why email security has seen poor uptake and answering the question as to which email security protocol is most popular. The answer may surprise you!

[1] Strictly speaking, you can bypass the sender's SMTP server. In practice, this is considered a hole in the SMTP system that email providers are trying to plug.
[2] I've had 13 different connections to the internet in the time I've had my main email address, not counting all the public wifis I have used. Whereas an attacker would find it extraordinarily difficult to intercept all of my SSH sessions for a MITM attack, intercepting all of my email sessions is clearly far easier if the attacker were my email provider.
[3] Before you read too much into this personal choice of S/MIME over PGP, it's entirely motivated by a simple concern: S/MIME is built into Thunderbird; PGP is not. As someone who does a lot of Thunderbird development work that could easily break the Enigmail extension locally, needing to use an extension would be disruptive to workflow.
[4] This is not to say that I don't heavily research many of my other posts, but I did go so far for this one as to actually start going through a lot of published journals in an attempt to find information.
[5] It's questionable how well the usability of a trust model UI can be measured in a lab setting, since the observer effect is particularly strong for all metrics of trust.
[6] The web of trust makes a nice graph, and graphs invite lots of interesting mathematical metrics. I've always been partial to eigenvectors of the graph, myself.
[7] Mozilla's policy for addition to NSS is basically the standard policy adopted by all open-source Linux or BSD distributions, seeing as OpenSSL never attempted to produce a root database.
[8] It looks to me that it's the browsers who are more in charge in this forum than the CAs.
[9] To my knowledge, this is the first—and so far only—attempt to actively MITM an SSL connection.

Categories: Mozilla-nl planet

Benjamin Kerensa: Dogfooding: Flame on Nightly

Mozilla planet - Tue, 05/08/2014 - 22:20

Just about two weeks ago, I got a Flame and have decided to use it as my primary phone and put away my Nexus 5. I’m running Firefox OS Nightly on it and so far have not run into any bugs so critical that I have needed to go back to Android.

I have however found some bugs and have some thoughts on things that need improvement to make the Firefox OS experience even better.


Keyboard

One thing that has really irked me while using Firefox OS 2.1 is that the keyboard always shows capital letters even if you don’t have caps lock on and are typing lower-case letters. This is not what I have grown used to while using Android for years, and iOS before that. When typing my password into web apps, it is confusing not to see a visual cue on the keyboard letting me know what casing I am inputting.

For whatever reason, this was an intended feature, and I honestly think this UX decision needs to be rethought because it just doesn’t feel natural.


Call Volume

On Firefox OS 2.1, at least in my experience, the max volume is totally inadequate: even indoors, it sounds like the person on the other end of a call is whispering or talking quietly, and making or receiving a call outdoors or in a noisy environment is impossible. I filed a bug on this, so hopefully it will be sorted.


Mail App

I like that the mail app is fast and nimble, but the lack of threaded e-mail conversations just leaves me wanting more. It would also be nice to have a smart inbox like the Gmail app on Android has, since that is also my mail provider.


Social Media Apps

The Facebook and Twitter apps both work well, and luckily the Facebook web app has avoided many of the UI and feature changes Facebook has landed in its Android and iOS apps, so the experience is still good. One thing I do miss is notifications: I got them on Android, whereas the Facebook (apparently Facebook is working on adding support) and Twitter apps on Firefox OS currently do not check for new messages and push notifications to your phone.

Additionally, it would be really nice to have a standalone Google+ packaged app in the Marketplace, since the current one I can add to the home screen seems a bit hit or miss and has a weird grey bar above the app which prevents it from using the full screen.


Overall Impression

I’m still using my Flame, so none of the issues I have pointed out make the phone unusable, and I know that I am running a nightly, unreleased version of Firefox OS, so stability issues are expected on Nightly. That being said, I’m really impressed by how polished the UI is compared to previous versions. Each time I get an OTA update, I see more and more polish and improvements coming to certified apps.

I am really happy that the Flame has dual SIM, since I am traveling to Europe in a few weeks and can buy a SIM for a few euros if my T-Mobile international roaming for some reason fails me.

I also know others who are not involved in the Mozilla community who are using the Flame as their daily driver in North America, so it is very encouraging to see people buying the Flame and dogfooding it on their own simply because they want a platform that puts the Open Web first while also being Open Source.

Categories: Mozilla-nl planet

Ben Kero: The ‘Try’ repository and its evolution

Mozilla planet - Tue, 05/08/2014 - 20:38

As the primary maintainers of the back end of the Try repository (in addition to the rest of the Mercurial infrastructure), we are responsible for its care and feeding, for making sure it is available, and for keeping it a safe place to put your code before integration into the trees.

Recently (the past few years, actually) we've found that Mercurial has problems scaling to Try's level of activity. Here are some statistics, for example:

  • 24550 Mercurial heads (this is reset every few months)
  • Head count is correlated with the degraded performance
  • 4.3 GB in size, 203509 files without a working copy

One of the methods we’re attempting is to modify try so that each push is not a head, but is instead a bundle that can be applied cleanly to any mozilla-central tree.

By default when issuing a ‘hg clone’ from and to a local disk, it will create a hardlinked clone:

$ hg clone --time --debug --noupdate mozilla-central/ mozilla-central2/
linked 164701 files
listing keys for "bookmarks"
time: real 3.030 secs (user 1.080+0.000 sys 1.470+0.000)
$ du --apparent-size -hsxc mozilla-central*
1.3G    mozilla-central
49M     mozilla-central2
1.3G    total

These lightweight clones are the perfect environment to apply try heads to, because they will all be based off existing revs on mozilla-central anyway. To do that we can apply a try head bundle on top:

$ cd mozilla-central2/
$ hg unbundle $HOME/fffe1fc3a4eea40b47b45480b5c683fea737b00f.bundle
adding changesets
adding manifests
adding file changes
added 14 changesets with 55 changes to 52 files (+1 heads)
(run 'hg heads .' to see heads, 'hg merge' to merge)
$ hg heads | grep fffe1fc
changeset:   213681:fffe1fc3a4ee

This is great. Imagine if, instead of ‘mozilla-central2’, this were named fffe1*, and could be stored in portable bundle format and used to create repositories whenever they were needed. There is one problem though:

$ du -hsx --apparent-size mozilla-central*
1.3G    mozilla-central
536M    mozilla-central2

Our lightweight 49MB copy has turned into 536MB. This would be fine for just a few repositories, but we have tens of thousands. That means we’ll need to keep them in bundle format and turn them into repositories on demand. Thankfully this operation only takes about three seconds.
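
Putting the pieces together, materializing a repository on demand would look roughly like this (a sketch; the directory names and bundle path are illustrative):

hg clone --noupdate mozilla-central/ try-fffe1fc3a4ee/      # cheap hardlinked clone
hg -R try-fffe1fc3a4ee/ unbundle fffe1fc3a4eea40b47b45480b5c683fea737b00f.bundle
hg -R try-fffe1fc3a4ee/ update -r fffe1fc3a4ee              # check out the try head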

I’ve written a little bash script to go through the backlog of 24,553 try heads and generate bundles for each of them. Here is the script and some stats for the bundles:

$ ls *.bundle | wc -l
24553
$ du -hsx
39G     .
$ cat makebundles.sh
#!/bin/bash
/usr/bin/parallel --gnu --jobs 16 \
    "test ! -f {}.bundle && \
     /usr/bin/hg -R /repo/hg/mozilla/try bundle --rev {} {}.bundle" ::: \
    $(/usr/bin/hg -R /repo/hg/mozilla/try heads --template "{node} ")

If this tooling works well I’d like to start using this as the future method of submitting requests to try. Additionally if developers wanted, I could create a Mercurial extension to automate the bundling process and create a bundle submission engine for try.

Categories: Mozilla-nl planet

Nick Hurley: HTTP/2 draft 14 in Firefox

Mozilla planet - Tue, 05/08/2014 - 19:43

As many of you reading this probably know, over the last two and a half years (or so) the HTTP Working Group has been hard at work on HTTP/2. During that time there have been 14 iterations of the draft spec (paired with 9 iterations of the draft spec for HPACK, the header compression scheme for HTTP/2), many in-person meetings (both at larger IETF meetings and in smaller HTTP-only interim meetings), and thousands of mailing list messages. On the first of this month, we took our next big step towards the standardization of this protocol - working group last call (WGLC).

There is lots of good information on the web about HTTP/2, so I won’t go into major details here. Suffice it to say, HTTP/2 is going to be a great advance in the speed and responsiveness of the web, as well as (hopefully!) increasing the amount of TLS on the web.

Throughout all these changes, I have kept Firefox up to date with the latest draft versions; not always an easy task, with some of the changes that have gone into the spec! Through all that time, the most recent version of HTTP/2 has been available in Nightly, disabled by default, behind the network.http.spdy.enabled.http2draft preference.

Today, however, Firefox took its next big step in HTTP/2 development. Not only have I landed draft 14 (the most recent version of the spec), but I also landed a patch to enable HTTP/2 by default. We felt comfortable enabling this by default given the WGLC status of the spec - there should be no major, incompatible changes to the wire format of the protocol unless some new information about, for example, major security flaws in the protocol comes to light. This means that all of you running Nightly will now be able to speak HTTP/2 with any sites that support it. To find out if you're speaking HTTP/2 to a website, just open up the developer tools console (Cmd-Alt-K or Ctrl-Alt-K) and make a network request. If the status line says “HTTP/2.0”, then congratulations, you’ve found a site that runs HTTP/2! Unfortunately, there aren’t a whole lot of these right now, mostly small test sites for developers of the protocol. That number should start to increase as the spec continues to move through the IETF process.
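
If you have a command-line curl built with HTTP/2 support (an assumption; example.com is a placeholder), you can also check a server without the browser; the first line of the response headers reports the protocol version that was negotiated:

curl --http2 -sI https://example.com | head -n 1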

The current plan is to allow draft 14 to ride the trains through to release, enabled by default. If, during the course of your normal web surfing, you see any bugs where the connection is HTTP/2 (as reported by the developer tools, see above) and you can fix the bug by disabling HTTP/2, then please file a bug under Core :: Networking:HTTP.

Categories: Mozilla-nl planet

Just Browsing: Going Functional: Three Tiny Useful Functions – Some, Every, Pluck

Mozilla planet - Tue, 05/08/2014 - 15:54
Some, Every

Every now and then in your everyday programmer’s life you come across an array and you need to determine something, usually whether all the elements in the array pass a certain test.

You know, something like:

var allPass = true;
for (var i = 0; i < arr.length; ++i) {
  if (!passesTest(arr[i])) {
    allPass = false;
    break;
  }
}

Or the mirror image: figure out whether any element in the set passes a test.

Now, the for loop above (or a similar construct for the “any” problem) is not too bad. But hey, it’s still six lines of code. And the purpose is not too clear at first glance. So let’s not reinvent the wheel and just use every:

var allPass = arr.every(function(item) {
  return passesTest(item);
});

or better yet:

var allPass = arr.every(passesTest);

Much nicer, isn’t it? Similarly for some, which returns true if any element matches: var anyPass = arr.some(passesTest);

Those two functions have been available in all major browsers for quite a while now. If you’re unlucky enough to have to deal with IE8 or worse, you can use the Lo-Dash or Underscore libraries.

Pluck

Pluck is useful when you have an array of objects and you’re only interested in one of their properties. It constructs an array comprising the values of said property. Say you want to know the average height of an NBA player.

var nbaPlayers = [
  {age: 37, name: "Manu Ginóbili", height: 1.98},
  {age: 36, name: "Dirk Nowitzki", height: 2.13},
  //...
];
var averageHeight = avg(_.pluck(nbaPlayers, 'height'));

Well, so much for low variance

Another way to look at it is that pluck is merely shorthand for map: _.pluck(nbaPlayers, 'height') is the same as _.map(nbaPlayers, function(p) { return p.height; }). It is, but it’s still very handy. pluck is not a standard JS function, but you can find it in any good functional JS library (be it Lo-Dash, Underscore or some other).

Categories: Mozilla-nl planet
