Mozilla Nederland
The Dutch Mozilla community

Daniel Stenberg: Changing networks on Mac with Firefox

Mozilla planet - Thu, 30/10/2014 - 22:46

Not too long ago I blogged about my work to better deal with changing networks while Firefox is running. That job basically had two parts:

A) generic code to handle receiving such a network-changed event and then

B) a platform-specific part, initially for Windows, that detected such a network change and sent the event

Today I’ve landed yet another fix for part B called bug 1079385, which detects network changes for Firefox on Mac OS X.

I’ve never programmed anything on the Mac before, so this was sort of my christening in this environment. I mean, I’ve written countless POSIX-compliant programs, including curl and friends, that certainly build and run on Mac OS just fine, but I had never before used the Mac-specific APIs to do things.

I got a mac mini just two weeks ago to work on this. Getting it up and prepared and my first Firefox built from source took, all in all, less than three hours. Learning the details of the Mac API world was much more trouble, and I can’t say that I’m mastering it now either, but I did at least figure out how to detect when IP addresses on the interfaces change, and a changed address is a pretty good signal that the network changed somehow.
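
For the curious: the usual entry point for this kind of thing on OS X is the SystemConfiguration framework's dynamic store, which can call you back when network state keys change. Here is a minimal sketch of the idea (my own illustration, not the actual patch from bug 1079385):

#include <CoreFoundation/CoreFoundation.h>
#include <SystemConfiguration/SystemConfiguration.h>
#include <stdio.h>

/* Invoked on the run loop whenever one of the watched keys changes. */
static void NetworkChanged(SCDynamicStoreRef store, CFArrayRef changedKeys,
                           void *info) {
  printf("network change detected\n");
}

int main(void) {
  SCDynamicStoreRef store = SCDynamicStoreCreate(
      NULL, CFSTR("network-change-watcher"), NetworkChanged, NULL);

  /* Watch the global IPv4 state; per-interface keys can be watched with
     patterns instead. */
  CFStringRef key = SCDynamicStoreKeyCreateNetworkGlobalEntity(
      NULL, kSCDynamicStoreDomainState, kSCEntNetIPv4);
  CFArrayRef keys =
      CFArrayCreate(NULL, (const void **)&key, 1, &kCFTypeArrayCallBacks);
  SCDynamicStoreSetNotificationKeys(store, keys, NULL);

  CFRunLoopSourceRef source = SCDynamicStoreCreateRunLoopSource(NULL, store, 0);
  CFRunLoopAddSource(CFRunLoopGetCurrent(), source, kCFRunLoopDefaultMode);
  CFRunLoopRun(); /* never returns in this toy example */
  return 0;
}

Compile with cc watcher.c -framework CoreFoundation -framework SystemConfiguration and change an interface's address to see the callback fire.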


Mozilla patches crash problem in Firefox 33 - Security.nl

News collected via Google - Thu, 30/10/2014 - 22:28

Mozilla patches crash problem in Firefox 33
Security.nl
Mozilla has released a new version of Firefox that should fix crash problems for some installations. A combination of certain hardware and drivers caused the browser to crash on startup. Firefox 33.0.2 follows Firefox ...


Nathan Froyd: porting rr to x86-64

Mozilla planet - Thu, 30/10/2014 - 21:47

(TL;DR: rr from git can record and replay 64-bit programs.  Try it for yourself!)

Over the last several months, I’ve been devoting an ever-increasing amount of my time to making rr able to trace x86-64 programs.  I’ve learned a lot along the way and thought I’d lay out all the major pieces of work that needed to be done to make this happen.

Before explaining the major pieces, it will be helpful to define some terms: the host architecture is the architecture that the rr binary itself is compiled for.  The target architecture is the architecture of the binary that rr is tracing.  These are often equivalent, but not necessarily so: you could be tracing a 64-bit binary with a 64-bit rr (host == target), but then the program starts to run a 32-bit subprocess, which rr also begins to trace (host != target).  And you have to handle both cases in a single rr session, with a single rr binary.  (64-bit rr doesn’t handle the host != target case quite yet, but all the infrastructure is there.)

None of the pieces described below are new ideas: the major programs you use for development (compiler, linker, debugger, etc.) have all done some variation of what I describe below.  However, it’s not every day that one takes a program written without any awareness of host/target distinctions and endows it with the necessary awareness.

Quite often, a program written exclusively for 32-bit hosts has issues when trying to compile for 64-bit hosts, and rr was no exception in this regard.  Making the code 64-bit clean by fixing all the places that triggered compiler warnings on x86-64, but not on i386, was probably the easiest part of the whole porting effort.  Format strings were a big part of this: writing %llx when you wanted to print a uint64_t, for instance, which assumes that uint64_t is implemented as unsigned long long (not necessarily true on 64-bit hosts).  There were several places where long was used instead of uint32_t.  And there were even places that triggered signed/unsigned comparison warnings on 64-bit platforms only.  (Exercise for the reader: construct code where this happens before looking at the solution.)
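
To make the format-string point concrete, here is a small illustration (mine, not code from rr): on most 64-bit Linux targets uint64_t is unsigned long rather than unsigned long long, so a bare %llx is only accidentally correct, while the PRIx64 macro from <inttypes.h> is portable.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint64_t value = UINT64_C(0x1122334455667788);

  /* Works everywhere, but only because of the explicit cast. */
  printf("%llx\n", (unsigned long long)value);

  /* Portable as written: PRIx64 expands to the right length modifier. */
  printf("%" PRIx64 "\n", value);
  return 0;
}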

Once all the host issues are dealt with, removing all the places where rr assumed semantics or conventions of the x86 architecture was the next step.  In short, all of the code assumed host == target: we were compiled on x86, so that must be the architecture of the program we’re debugging.  How many places actually assumed this, though?  Consider what the very simplified pseudo-code of the rr main recording loop looks like:

while (true) {
  wait for the tracee to make a syscall
  grab the registers at the point of the syscall
  extract the syscall number from the registers    (1)
  switch (syscall_number) {
    case SYS_read:                                 (2)
      extract pointer to the data read from the registers   (3)
      record contents of data
      break;
    case SYS_clock_gettime:
      extract pointers for argument structures from the registers
      record contents of those argument structures (4)
      break;
    case SYS_mmap:                                 (5)
      ...
    case SYS_mmap2:                                (6)
      ...
    case SYS_clone:                                (7)
      ...
    ...
    default:
      complain about an unhandled syscall
  }
  let the tracee resume
}

Every line marked with a number at the end indicates a different instance where host and target differences come into play and/or the code might have assumed x86 semantics.  (And the numbering above is not exhaustive!)  Taking them in order:

  1. You can obtain the registers of your target with a single ptrace call, but the layout of those registers depends on your target.  ptrace returns the registers as a struct user_regs, which differs between targets; the syscall number location obviously differs between different layouts of struct user_regs.
  2. The constant SYS_read refers to the syscall number for read on the host.  If you want to identify the syscall number for the target, you’ll need to do something different.
  3. This instance is a continuation of #1: syscall arguments are passed in different registers for each target, and the locations of those registers differ in size and location between different layouts of struct user_regs.
  4. SYS_clock_gettime takes a pointer to a struct timespec.  How much data should we read from that pointer for recording purposes?  We can’t just use sizeof(struct timespec), since that’s the size for the host, not the target.  (A sketch of the fix follows this list.)
  5. Like SYS_read, SYS_mmap refers to the syscall number for mmap on the host, so we need to do something similar to SYS_read here.  But just because two different architectures have a SYS_mmap, it doesn’t mean that the calling conventions for those syscalls at the kernel level are identical.  (This distinction applies to several other syscalls as well.)  SYS_mmap on x86 takes a single pointer argument, pointing to a structure that contains the syscall’s arguments.  The x86-64 version takes its arguments in registers.  We have to extract arguments appropriately for each calling convention.
  6. SYS_mmap2 only exists on x86; x86-64 has no such syscall.  So we have to handle host-only syscalls or target-only syscalls in addition to things like SYS_read.
  7. SYS_clone has four (!) different argument orderings at the kernel level, depending on the architecture, and x86 and x86-64 of course use different argument orderings.  You must take these target differences into account when extracting arguments.  SYS_clone implementations also differ in how they treat the tls parameter, and those differences have to be handled as well.
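
As a sketch of how point 4 can be addressed (hypothetical names, not rr's actual definitions): define the target's timespec with fixed-width fields, so that its size is a property of the target rather than of the host.

#include <stdint.h>

// On 32-bit x86, time_t and long are both 32 bits wide, so the target's
// struct timespec occupies 8 bytes regardless of what the host uses.
struct X86Timespec {
  int32_t tv_sec;
  int32_t tv_nsec;
};
static_assert(sizeof(X86Timespec) == 8,
              "must match the target's struct timespec layout");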

So, depending on the architecture of our target, we want to use different constants, different structures, and do different things depending on calling conventions or other semantic differences.

The approach rr uses is that the Registers of every rr Task (rr’s name for an operating system thread) has an architecture, along with a few other things like recorded events.  Every structure for which the host/target distinction matters has an arch() accessor.  Additionally, we define some per-architecture classes.  Each class contains definitions for important kernel types and structures, along with enumerations for syscalls and various constants.

Then we try to let C++ templates do most of the heavy lifting.  In code, it looks something like this:

enum SupportedArch {
  x86,
  x86_64,
};

class X86Arch {
  /* many typedefs, structures, enums, and constants defined... */
};

class X64Arch {
  /* many typedefs, structures, enums, and constants defined... */
};

#define RR_ARCH_FUNCTION(f, arch, args...) \
  switch (arch) {                          \
    default:                               \
      assert(0 && "Unknown architecture"); \
    case x86:                              \
      return f<X86Arch>(args);             \
    case x86_64:                           \
      return f<X64Arch>(args);             \
  }

class Registers {
public:
  SupportedArch arch() const { ... }

  intptr_t syscallno() const {
    switch (arch()) {
      case x86:
        return u.x86.eax;
      case x86_64:
        return u.x64.rax;
    }
  }

  // And so on for argN accessors and so forth...

private:
  union RegisterUnion {
    X86Arch::user_regs x86;
    X64Arch::user_regs x64;
  } u;
};

template <typename Arch>
static void process_syscall_arch(Task* t, int syscall_number) {
  switch (syscall_number) {
    case Arch::read:
      remote_ptr buf = t->regs().arg2();
      // do stuff with buf
      break;
    case Arch::clock_gettime:
      // We ensure Arch::timespec is defined with the appropriate types so it
      // is exactly the size |struct timespec| would be on the target arch.
      remote_ptr tp = t->regs().arg2();
      // do stuff with tp
      break;
    case Arch::mmap:
      switch (Arch::mmap_argument_semantics) {
        case Arch::MmapRegisterArguments:
          // x86-64
          break;
        case Arch::MmapStructArguments:
          // x86
          break;
      }
      break;
    case Arch::mmap2:
      // Arch::mmap2 is always defined, but is a negative number on
      // architectures where SYS_mmap2 isn't defined.
      // do stuff
      break;
    case Arch::clone:
      switch (Arch::clone_argument_ordering) {
        case Arch::FlagsStackParentTLSChild:
          // x86
          break;
        case Arch::FlagsStackParentChildTLS:
          // x86-64
          break;
      }
      break;
    ...
  }
}

void process_syscall(Task* t) {
  int syscall_number = t->regs().syscallno();
  RR_ARCH_FUNCTION(process_syscall_arch, t->arch(), t, syscall_number);
}

The definitions of X86Arch and X64Arch also contain static_asserts to try and ensure that we’ve defined structures correctly for at least the host architecture.  And even now the definitions of the structures aren’t completely bulletproof; I don’t think the X86Arch definitions of some structures are robust on a 64-bit host because of differences in structure field alignment between 32-bit and 64-bit, for instance.  So that’s still something to fix in rr.

Templates handle the bulk of target-specific code in rr.  There are a couple of places where we need to care about how the target implements mmap and other syscalls which aren’t amenable to templates (or, at least, we didn’t use them; it’s probably possible to (ab)use templates for these purposes), and so we have code like:

Task* t = ...
if (has_mmap2_syscall(t->arch())) {
  // do something specifically for mmap2
} else {
  // do something with mmap
}

Finally, various bits of rr’s main binary and its testsuite are written in assembly, so of course those needed to be carefully ported over.

That’s all the major source-code related work that needed to be done. I’ll leave the target-specific runtime work required for a future post.

x86-64 support for rr hasn’t been formally released, but the x86-64 support in the github repository is functional: x86-64 rr passes all the tests in rr’s test suite and is able to record and replay Firefox mochitests.  I will note that it’s not nearly as fast as the x86 version; progress is being made in improving performance, but we’re not quite there yet.

If you’re interested in trying 64-bit rr out, you’ll find the build and installation instructions helpful, with one small modification: you need to add the command-line option -Dforce64bit=ON to any cmake invocations.  Therefore, to build with Makefiles, one needs to do:

git clone https://github.com/mozilla/rr.git
mkdir obj64
cd obj64
cmake -Dforce64bit=ON ../rr
make -j4
make check

Once you’ve done that, the usage instructions will likely be helpful.  Please try it out and report bugs if you find any!


Mozilla and Matchstick to release Chromecast competitor running Firefox OS - Tweakers

News collected via Google - Thu, 30/10/2014 - 18:42

Mozilla and Matchstick to release Chromecast competitor running Firefox OS
Tweakers
For the development of the media player of the same name, which is conceptually comparable to Google's Chromecast, Matchstick has collaborated with Mozilla. The latter developed the Firefox OS operating system, which serves as the basis ...


Mozilla scores $470,000 and 24,000 orders from Matchstick Kickstarter - Geek

News collected via Google - Thu, 30/10/2014 - 18:29


Mozilla scores $470,000 and 24,000 orders from Matchstick Kickstarter
Geek
If you had your sights set on a streaming stick that relies on open hardware and software, or if you just like trying new things, the folks at Mozilla have an interesting proposition for you. It's called Matchstick, and the Kickstarter that accompanied ...


Mozilla goes for world record with Firefox 3 - NU.nl

News collected via Google - Thu, 30/10/2014 - 17:45


Mozilla goes for world record with Firefox 3
NU.nl
AMSTERDAM - Browser maker Mozilla has come up with a way to get into the Guinness Book of Records: the release of Firefox 3 is meant to produce a record number of downloads. This week the campaign began calling on users to ... Firefox 3 ...


Joel Maher: A case of the weekends?

Mozilla planet - Thu, 30/10/2014 - 17:29

Case of the Mondays

What was famous 15 years ago as a case of the Mondays has manifested itself in Talos.  In fact, I wonder why I get so many regression alerts on Monday as compared to other days.  More to the point, we have less noise in our Talos data on weekends.

Take for example the test case tresize on linux32 (in fact we see this on other platforms as well: linux32/linux64/osx10.8/windowsXP):

[Graph: 30 days of linux tresize]

Many other tests exhibit this.  What is different about weekends?  Are there just fewer data points?

I do know our volume of tests goes down on weekends, mostly as a side effect of fewer patches being landed on our trees.

Here are some ideas I have to debug this more:

  • Run massive retrigger scripts for Talos on weekends to validate whether the number of samples is or is not the problem
  • Reduce the volume of Talos on weekdays to validate whether the overall system load in the datacenter is or is not the problem
  • Compare the load of the machines (all branches and wait times) to the noise we have in certain tests/platforms
  • Look at platforms like Windows 7, Windows 8, and OS X 10.6 and ask why they have more noise on weekends or are more stable.  Finding the delta between platforms would help provide answers.

If you have ideas on how to uncover this mystery, please speak up.  I would be happy to have this gone and make any automated alerts more useful!



Peter Bengtsson: Shout-out to eventlog

Mozilla planet - Thu, 30/10/2014 - 17:05

If you do things with the Django ORM and want an audit trail of all changes, you have two options:

  1. Insert some cleverness into a pre_save signal that writes down all changes in some way.

  2. Use eventlog and manually log things in your views.

(you have other options too but I'm trying to make a point here)

eventlog is almost embarrassingly simple. It's basically just a model with three fields:

  • User
  • An action string
  • A JSON dump field
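
(Plus a timestamp that is set automatically.)  As a rough sketch, the model could look something like this; the field names are inferred from the template further down, not copied from eventlog's source:

from django.conf import settings
from django.db import models

class Log(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    timestamp = models.DateTimeField(auto_now_add=True)
    action = models.CharField(max_length=200)
    extra = models.TextField()  # the JSON-serialized details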

You use it like this:

from eventlog.models import log

def someview(request):
    if request.method == 'POST':
        form = SomeModelForm(request.POST)
        if form.is_valid():
            new_thing = form.save()
            log(request.user, 'mymodel.create', {
                'id': new_thing.id,
                'name': new_thing.name,
                # You can put anything JSON
                # compatible in here
            })
            return redirect('someotherview')
    else:
        form = SomeModelForm()
    return render(request, 'view.html', {'form': form})

That's all it does. You then have to do something with it. Suppose you have an admin page that only privileged users can see. You can make a simple table/dashboard with these like this:

from eventlog.models import Log  # Log the model, not log the function

def all_events(request):
    all = Log.objects.all()
    return render(request, 'all_events.html', {'all': all})

And add something like this to all_events.html:

<table>
  <tr>
    <th>Who</th><th>When</th><th>What</th><th>Details</th>
  </tr>
  {% for event in all %}
  <tr>
    <td>{{ event.user.username }}</td>
    <td>{{ event.timestamp | date:"D d M Y" }}</td>
    <td>{{ event.action }}</td>
    <td>{{ event.extra }}</td>
  </tr>
  {% endfor %}
</table>

What I like about it is that it's very deliberate. By putting it into views at very specific points you're making it an audit log of actions, not of data changes.

Projects with overly complex model save signals tend to dig themselves into holes that make things slow and complicated. And it's not unrealistic that you'll then record events that aren't particularly important to review. For example, a cron job that increments a little value or something. It's more interesting to see what humans have done.

I just wanted to thank the Eldarion guys for eventlog. It's beautifully simple and works perfectly for me.


Adam Lofting: Learning about Learning Analytics @ #Mozfest

Mozilla planet - Thu, 30/10/2014 - 15:53

If I find a moment, I’ll write about many of the fun and inspiring things I saw at Mozfest this weekend, but this post is about a single session I had the pleasure of hosting alongside Andrew, Doug and Simon: Learning Analytics for Good in the Age of Big Data.

We had an hour, no idea if anyone else would be interested, or what angle people would come to the session from. And given that, I think it worked out pretty well.


We had about 20 participants, and broke into four groups to talk about Learning Analytics from roughly 3 starting points (though all the discussions overlapped):

  1. Practical solutions to measuring learning as it happens online
  2. The ethical complications of tracking (even when you want to optimise for something positive – e.g. Learning)
  3. The research opportunities for publishing and connecting learning data

But, did anyone learn anything in our Learning Analytics session?

Well, I know for sure the answer is yes… as I personally learned things. But did anyone else?

I spoke to people later in the day who told me they learned things. Is that good enough?

As I watched the group during the session I saw conversations that bounced back and forth in a way that rarely happens without people learning something. But how does anyone else who wasn’t there know if our session had an impact?

How much did people learn?

This is essentially the challenge of Learning Analytics. And I did give this some thought before the session…


As a meta-exercise, everyone who attended the session had a question to answer at the start and end. We also gave them a place to write their email address and to link their ‘learning data’ to them in an identifiable way. It was a little bit silly, but it was something to think about.

This isn’t good science, but it tells a story. And I hope it was a useful cue for the people joining the session.

Response rate:
  • We had about 20 participants
  • 10 returned the survey (i.e. opted in to ‘tracking’), by answering question 1
  • 5 of those answered question 2
  • 5 gave their email address (not exactly the same 5 who answered both questions)

Here is our Learning Analytics data from our session:

[Screenshot: the survey responses from the session]

Is that demonstrable impact?

Even though this wasn’t a serious exercise, I think we can confidently argue that some people did learn, in much the same way certain newspapers can make a headline out of two data points…

What, and how much they learned, and if it will be useful later in their life is another matter.

Even with the deliberate choice of a question that made it almost impossible not to show improvement from the start to the end of the session, one respondent claims to be less sure what the session was about after attending (but let’s not dwell on that!).

Post-it notes and scribbles

If you were at the session and want to jog your memory about what we talked about, I kind-of documented the various things we captured on paper.

[Gallery: photos of the post-it notes and scribbles from the session]

Into 2015

I’m looking forward to exploring Learning Analytics in the context of Webmaker much more in 2015.

And to think that this was just one hour in a weekend full of the kinds of conversations that repeat in your mind all the way until next Mozfest. It’s exhausting in the best possible way.


Tim Taubert: Why including a backup pin in your Public-Key-Pinning header is a good idea

Mozilla planet - Thu, 30/10/2014 - 14:00

In my last post “Deploying TLS the hard way” I explained how TLS and its extensions (as well as a few HTTP extensions) work and what to watch out for when enabling TLS for your server. One of the HTTP extensions mentioned is HTTP Public-Key-Pinning (HPKP). As a short reminder, the header looks like this:

Public-Key-Pins:
    pin-sha256="GRAH5Ex+kB4cCQi5gMU82urf+6kEgbVtzfCSkw55AGk=";
    pin-sha256="lERGk61FITjzyKHcJ89xpc6aDwtRkOPAU0jdnUqzW2s=";
    max-age=15768000;
    includeSubDomains

You can see that it specifies two pin-sha256 values, that is, the pins of two public keys. One is the public key of your currently valid certificate and the other is a backup key in case you have to revoke your certificate.

I received a few questions as to why I suggest including a backup pin and what the requirements for a backup key would be. I will try to answer those with a more detailed overview of how public key pinning and TLS certificates work.

How are RSA keys represented?

Let us go back to the beginning and start by taking a closer look at RSA keys:

$ openssl genrsa 4096

The above command generates a 4096-bit RSA key and prints it to the console. Although it says -----BEGIN RSA PRIVATE KEY----- it returns not only the private key but an ASN.1 structure that also contains the public key; we thus actually generated an RSA key pair.

A common misconception when learning about keys and certificates is that the RSA key itself for a given certificate expires. RSA keys however never expire - after all they are just three numbers. Only the certificate containing the public key can expire and only the certificate can be revoked. Keys “expire” or are “revoked” as soon as there are no more valid certificates using the public key, and you threw away the keys and stopped using them altogether.

What does the TLS certificate contain?

When you submit a Certificate Signing Request (CSR) containing your public key to a Certificate Authority, it will issue a valid certificate. That will again contain the public key of the RSA key pair we generated above and an expiration date. Both the public key and the expiration date will be signed by the CA so that modifying either of the two would render the certificate invalid immediately.

For simplicity I left out a few other fields that X.509 certificates contain to properly authenticate TLS connections, for example your server’s hostname and other details.

How does public key pinning work?

The whole purpose of public key pinning is to detect when the public key of a certificate for a specific host has changed. That may happen when an attacker compromises a CA such that they are able to issue valid certificates for any domain. A foreign CA might also just be the attacker; think of state-owned CAs that you do not want to be able to {M,W}ITM your site. An attacker who intercepts a connection from a visitor to your server with a forged certificate can only be stopped by detecting that the public key has changed.

Once the server has sent its TLS certificate in the handshake, the browser will look up any stored pins for the given hostname and check whether any of those stored pins match any of the SPKI fingerprints (the output of applying SHA-256 to the public key information) in the certificate chain. The connection must be terminated immediately if pin validation fails.

If the browser does not find any stored pins for the current hostname then it will directly continue with the usual certificate checks. This might happen if the site does not support public key pinning and does not send any HPKP headers at all, or if this is the first time visiting and the server has not seen the HPKP header yet in a previous visit.

Pin validation should happen as soon as possible and thus before any basic certificate checks are performed. An expired or revoked certificate will be happily accepted at the pin validation stage early in the handshake when any of the SPKI fingerprints of its chain match a stored pin. Only a little later the browser will see that the certificate already expired or was revoked and will reject it.

Pin validation also works for self-signed certificates, but they will of course raise the same warnings as usual as soon as the browser determined they were not signed by a trusted third-party.

What if your certificate was revoked?

If your server was compromised and an attacker obtained your private key you have to revoke your certificate as the attacker can obviously fully intercept any TLS connection to your server and record every conversation. If your HPKP header contained only a single pin-sha256 token you are out of luck until the max-age directive given in the header lets those pins expire in your visitors’ browsers.

Pin validation requires checking the SPKI fingerprints of all certificates in the chain. When for example StartSSL signed your certificate you have another intermediate Class 1 or 2 certificate and their root certificate in the chain. The browser trusts only the root certificate but the intermediate ones are signed by the root certificate. The intermediate certificate in turn signs the certificate deployed on your server and that is called a chain of trust.

To prevent getting stuck after your only pinned key was compromised, you could for example provide the SPKI fingerprint of StartSSL’s Class 1 intermediate certificate. An attacker would now have to somehow get a certificate issued by StartSSL’s Class 1 tier to successfully impersonate you. You are however again out of luck should you decide to upgrade to Class 2 in a month because you decided to start paying for a certificate.

Pinning StartSSL’s root certificate would let you switch Classes any time and the attacker would still have to get a certificate issued by StartSSL for your domain. This is a valid approach as long as you are trusting your CA (really?) and as long as the CA itself is not compromised. In case of a compromise however the attacker would be able to get a valid certificate for your domain that passes pin validation. After the attack was discovered StartSSL would quickly revoke all currently issued certificates, generate a new key pair for their root certificate and issue new certificates. And again we would be out of luck because suddenly pin validation fails and no browser will connect to our site.

Include the pin of a backup key

The safest way to pin your certificate’s public key and be prepared to revoke your certificate when necessary is to include the pin of a second public key: your backup key. This backup RSA key should in no way be related to your first key, just generate a new one.

It is a good idea to keep this backup key pair (especially the private key) in a safe place until you need it. Uploading it to the server is dangerous: when your server is compromised you lose both keys at once and have no backup key left.

Generate a pin for the backup key exactly as you did for the current key and include both pin-sha256 values as shown above in the HPKP header. In case the current key is compromised make sure all vulnerabilities are patched and then remove the revoked pin. Generate a CSR for the backup key, let your CA issue a new certificate, and revoke the old one. Upload the new certificate to your server and you are done.
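
For reference, one common way to generate such a key and compute its pin-sha256 value with the OpenSSL command line (file name assumed) is to hash the DER-encoded SubjectPublicKeyInfo of the public key:

$ openssl genrsa -out backup.key 4096
$ openssl rsa -in backup.key -pubout -outform der | openssl dgst -sha256 -binary | base64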

Finally, do not forget to generate a new backup key and include that pin in your HPKP header again. Once a browser successfully establishes a TLS connection the next time, it will see your updated HPKP header and replace any stored pins with the new ones.


How Google and Mozilla are aiming to make web apps shine offline - TechRepublic

News collected via Google - Thu, 30/10/2014 - 13:30

How Google and Mozilla are aiming to make web apps shine offline
TechRepublic
For Mozilla, improving these offline capabilities is important for Firefox OS, its open-source mobile operating system designed to run apps built from web technologies such as HTML, CSS and JavaScript (JS). The first phones to run the OS launched in ...


Gregory Szorc: Soft Launch of MozReview

Mozilla planet - Thu, 30/10/2014 - 12:15

Yesterday we performed a soft launch of MozReview, Mozilla's new code review tool!

What does that mean? How do I use it? What are the features? How do I get in touch or contribute? These are all great questions. The answers to those and more can all be found in the MozReview documentation. If they aren't, it's a bug in the documentation. File a bug or submit a patch. Instructions to do that are in the documentation.


Mozilla launches WiFi sniffer on Android - Webwereld

News collected via Google - Thu, 30/10/2014 - 11:16

Mozilla launches WiFi sniffer on Android
Webwereld
Mozilla, too, is collecting data from millions of WiFi routers, a practice that has been controversial since Google 'accidentally' took it a bit too far and actually intercepted internet traffic. With the new Android app Stumbler, users can ... the Mozilla ...


Kev Needham: things that interest me this week – 29 oct 2014

Mozilla planet - Thu, 30/10/2014 - 03:13

Quick Update: A couple of people mentioned there’s no Mozilla items in here. They’re right, and it’s primarily because the original audience of this type of thing was Mozilla. I’ll make sure I add them where relevant, moving forward.

Every week I put together a bunch of news items I think are interesting to the people I work with, and that’s usually limited to a couple wiki pages a handful of people read. I figured I may as well put it in a couple other places, like here, and see if people are interested. Topics focus on the web, the technologies that power it, and the platforms that make use of it. I work for Mozilla, but these are my own opinions and takes on things.

I try to have three sections:

  • Something to Think About – Something I’m seeing a company doing that I think is important, why I think it’s important, and sometimes what I think should be done about it. Some weeks these won’t be around, because they tend to not show their faces much.
  • Worth a Read – Things I think are worth the time to read if you’re interested in the space as a whole. Limited to three items max, but usually two. If you don’t like what’s in here, tell me why.
  • Notes – Bits and bobs people may or may not be interested in, but that I think are significant, bear watching, or are of general interest.

I’ll throw these out every Wednesday, and standard disclaimers apply – this is what’s in my brain, and isn’t representative of what’s in anyone else’s brain, especially the folks I work with at Mozilla. I’ll also throw a mailing list together if there’s interest, and feedback is always welcome (your comment may get stuck in a spam-catcher, don’t worry, I’ll dig it out).

– k

Something to Think About

Lifehacker posted an article this morning around all the things you can do from within Chrome’s address bar. Firefox can do a number of the same things, but it’s interesting to see the continual improvements the Chrome team has made around search (and service) integration, and also the productivity hacks (like searching your Google drive without actually going there) that people come up with to make a feature more useful than its intended design.

Why I think people should care: Chrome’s modifications to the address bar aren’t ground-breaking, nor are they changes that came about overnight. They are a series of iterative changes to a core function that work well with Google’s external services, and focus on increasing utility which, not coincidentally, increases the value and stickiness of the Google experience as a whole. Continued improvements to existing features (and watching how people are riffing on those features) is a good thing, and is something to consider as part of our general product upkeep, particularly around the opportunity to do more with services (both ours, and others) that promote the open web as a platform.

Worth a Read
  • Benedict Evans updated his popular “Mobile Is Eating the World” presentation, and posits that mobile effectively “is” everything technology today. I think it needs a “Now” at the end, because what he’s describing has happened before, and will happen again. Mobile is a little different currently, mainly because of the gigantic leaps in hardware for fewer dollars that continue to be made as well as carrier subsidies fueling 2-year upgrade cycles. Mobile itself is also not just phones, it’s things other than desktops and laptops that have a network connection. Everything connected is everything. He’s also put together a post on Tablets, PCs and Office that goes a little bit into technology cycles and how things like tablets are evolving to fill more than just media consumption needs, but the important piece he pushes in both places is the concept of network connected screens being the window to your stuff, and the platform under the screen being a commodity (e.g. processing power is improving on every platform to the point the hardware platform is mattering less) that is really simply the interface that better fits the task at hand.
  • Ars Technica has an overview of some of the more interesting changes in Lollipop which focus on unbundling apps and APIs to mitigate fragmentation risk, an enhanced setup process focusing on user experience, and the shift in the Nexus brand from a market-share builder to a premium offering.
  • Google’s Sundar Pichai was promoted last week in a move that solidifies Google’s movement towards a unified, backend-anchored, multi-screen experience. Pichai is a long time Google product person, and has been fronting the Android and Chrome OS (and a couple other related services) teams, and now takes on Google’s most important web properties as well, including Gmail, Search, AdSense, and the infrastructure that runs it. This gives those business units inside Google better alignment around company goals, and shows the confidence Google has in Pichai. Expect further alignment in Google’s unified experience movement through products like Lollipop, Chrome OS, Inbox and moving more Google Account data (and related experiences like notifications and Web Intents) into the cloud, where it doesn’t rely on a specific client and can be shared/used on any connected screen.

Notes

Mozilla Release Management Team: Firefox 34 beta3 to beta4

Mozilla planet - Wed, 29/10/2014 - 22:27

  • 38 changesets
  • 64 files changed
  • 869 insertions
  • 625 deletions

Extensions touched (occurrences): js (16), cpp (16), jsm (9), h (9), java (4), xml (2), jsx (2), html (2), mn (1), mm (1), list (1), css (1)

Modules touched (occurrences): browser (19), gfx (10), content (8), mobile (6), services (5), layout (4), widget (3), netwerk (3), xpfe (2), toolkit (2), modules (1), accessible (1)

List of changesets:

  • Nicolas Silva: Bug 1083071 - Backout the additional blacklist entries. r=jmuizelaar, a=sledru - 31acf5dc33fc
  • Jeff Muizelaar: Bug 1083071 - Disable D3D11 and D3D9 layers on broken drivers. r=bjacob, a=sledru - 618a12c410bb
  • Ryan VanderMeulen: Backed out changeset 6c46c21a04f9 (Bug 1074378) - 3e2c92836231
  • Cosmin Malutan: Bug 1072244 - Correctly throw the exceptions in TPS framework. r=hskupin a=testonly DONTBUILD - 48e3c2f927d5
  • Mark Banner: Bug 1081959 - "Something went wrong" isn't displayed when the call fails in the connection phase. r=dmose, a=lmandel - 8cf65ccdce3d
  • Jared Wein: Bug 1062335 - Loop panel size increases after switching themes. r=mixedpuppy, a=lmandel - 033942f8f817
  • Wes Johnston: Bug 1055883 - Don't reshow header when hitting the bottom of short pages. r=kats, a=lmandel - 823ecd23138b
  • Patrick McManus: Bug 1073825 - http2session::cleanupstream failure. r=hurley, a=lmandel - eed6613c5568
  • Paul Adenot: Bug 1078354 - Part 1: Make sure we are not waking up an OfflineGraphDriver. r=jesup, a=lmandel - 9d0a16097623
  • Paul Adenot: Bug 1078354 - Part 2: Don't try to measure a PeriodicWave size when an OscillatorNode is using a basic waveform. r=erahm, a=lmandel - b185e7a13e18
  • Gavin Sharp: Bug 1086958 - Back out change to default browser prompting for Beta 34. r=Gijs, a=lmandel - d080a93fd9e1
  • Yury Delendik: Bug 1072164 - Fixes pdf.js for CMYK jpegs. r=bdahl, a=lmandel - d1de09f2d1b0
  • Neil Rashbrook: Bug 1070768 - Move XPFE's autocomplete.css to communicator so it doesn't conflict with toolkit's new global autocomplete.css. r=Ratty, a=lmandel - 78b9d7be1770
  • Markus Stange: Bug 1078262 - Only use the fixed epsilon for the translation components. r=roc, a=lmandel - 2c49dc84f1a0
  • Benjamin Chen: Bug 1079616 - Dispatch PushBlobRunnable in RequestData function, and remove CreateAndDispatchBlobEventRunnable. r=roc, a=lmandel - d9664db594e9
  • Brad Lassey: Bug 1084035 - Add the ability to mirror tabs from desktop to a second screen, don't block browser sources when specified in constraints from chrome code. r=jesup, a=lmandel - 47065beeef20
  • Gijs Kruitbosch: Bug 1074520 - Use CSS instead of hacks to make the forget button lay out correctly. r=jaws, a=lmandel - 46916559304f
  • Markus Stange: Bug 1085475 - Don't attempt to use vibrancy in 32-bit mode. r=smichaud, a=lmandel - 184b704568ff
  • Mark Finkle: Bug 1088952 - Disable "Enable wi-fi" toggle on beta due to missing permission. r=rnewman, a=lmandel - 9fd76ad57dbe
  • Yonggang Luo: Bug 1066459 - Clamp the new top row index to the valid range before assigning it to mTopRowIndex when scrolling. r=kip a=lmandel - 4fd0f4651a61
  • Mats Palmgren: Bug 1085050 - Remove a DEBUG assertion. r=kip a=lmandel - 1cd947f5b6d8
  • Jason Orendorff: Bug 1042567 - Reflect JSPropertyOp properties more consistently as data properties. r=efaust, a=lmandel - 043c91e3aaeb
  • Margaret Leibovic: Bug 1075232 - Record which suggestion of the search screen was tapped in telemetry. r=mfinkle, a=lmandel - a627934a0123
  • Benoit Jacob: Bug 1088858 - Backport ANGLE fixes to make WebGL work on Windows in Firefox 34. r=jmuizelaar, a=lmandel - 85e56f19a5a1
  • Patrick McManus: Bug 1088910 - Default http/2 off on gecko 34 after EARLY_BETA. r=hurley, a=lmandel - 74298f48759a
  • Benoit Jacob: Bug 1083071 - Avoid touching D3D11 at all, even to test if it works, if D3D11 layers are blacklisted. r=Bas, r=jmuizelaar, a=sledru - 6268e33e8351
  • Randall Barker: Bug 1080701 - TabMirror needs to be updated to work with the chromecast server. r=wesj, r=mfinkle, a=lmandel - 0811a9056ec4
  • Xidorn Quan: Bug 1088467 - Avoid adding space for bullet with list-style: none. r=surkov, a=lmandel - 2e54d90546ce
  • Michal Novotny: Bug 1083922 - Doom entry when parsing security info fails. r=mcmanus, a=lmandel - 34988fa0f0d8
  • Ed Lee: Bug 1088729 - Only allow http(s) directory links. r=adw, a=sledru - 410afcc51b13
  • Mark Banner: Bug 1047410 - Desktop client should display Call Failed if an incoming call - d2ef2bdc90bb
  • Mark Banner: Bug 1088346 - Handle "answered-elsewhere" on incoming calls for desktop on Loop. r=nperriault a=lmandel - 67d9122b8c98
  • Mark Banner: Bug 1088636 - Desktop ToS url should use hello.firefox.com not call.mozilla.com. r=nperriault a=lmandel - 45d717da277d
  • Adam Roach [:abr]: Bug 1033579 - Add channel to POST calls for Loop to allow different servers based on the channel. r=dmose a=lmandel - d43a7b8995a6
  • Ethan Hugg: Bug 1084496 - Update whitelist for screensharing r=jesup a=lmandel - 080cfa7f5d79
  • Ryan VanderMeulen: Backed out changeset 043c91e3aaeb (Bug 1042567) for debug jsreftest failures. - 15bafc2978d8
  • Jim Chen: Bug 1066982 - Try to not launch processes on pre-JB devices because of Android bug. r=snorp, a=lmandel - 5a4dfee44717
  • Randell Jesup: Bug 1080755 - Push video frames into MediaStreamGraph instead of waiting for pulls. r=padenot, a=lmandel - 22cfde2bf1ce


David Boswell: Please complete and share the contributor survey

Mozilla planet - Wed, 29/10/2014 - 20:53

We are conducting a research project to learn about the values and motivations of Mozilla’s contributors (both volunteers and staff) and to understand how we can improve their experiences.

Part of this effort is a survey for contributors that has just been launched at:

http://www.surveygizmo.com/s3/1852460/Mozilla-Community-Survey

Please take a few minutes to fill this out and then share this link with the communities you work with. The more people who respond, the more complete our understanding of how we can improve the experience for all contributors.

We plan to have results from this survey and the data analysis project available by the time of the Portland work week in December.



K Lars Lohn: Judge the Project, Not the Contributors

Mozilla planet - Wed, 29/10/2014 - 18:16
I recently read a blog posting titled “The 8 Essential Traits of a Great Open Source Contributor”.  I am disturbed by this posting.  While clearly not the intended effect, I feel the posting just told a huge swath of people that they are neither qualified nor welcome to contribute to Open Source. The intent of the posting was to say that there is a wide range of skills needed in Open Source. Even if a potential contributor feels they lack an essential technical skill, here's an enumeration of other skills that are helpful.
“Over the years, I’ve talked to many people who have wanted to contribute to open source projects, but think that they don’t have what it takes to make a contribution. If you’re in that situation, I hope this post helps you get out of that mindset and start contributing to the projects that matter to you.”

See? The author has completely good intentions. My fear is that the posting has the opposite effect. It raises a bar as if it were an ad for a paid technical position. He uses superlatives that say to me, “we are looking for the top people as contributors, not common people”.

Unfortunately, my reading of this blog posting is not that a wide range of skills is needed; it communicates that if you contribute, you'd better be great at doing so. In fact, if you do not have all these skills, you cannot be considered great. So where is the incentive to participate? It makes Open Source sound as if it were an invitation to be judged as either great or inadequate.

Ok, I know this interpretation is through my own jaundiced eyes. So to see if my interpretation was just a reflection of my own bad day, I shared the blog posting with a couple of colleagues.  Both colleagues are women who judge their own skills unnecessarily harshly but, in my judgement, are really quite good. I chose these two specifically because I knew both suffer from “imposter syndrome”, a largely unshakable feeling of inadequacy that is quite common among technical people.   Both reacted badly to the posting, one saying that it sounded like a job posting for a position she would have no hope of ever landing.

I want to turn this around. Let's not judge the contributors, let's judge the projects instead. In fact, we can take these eight traits and boil them down to one:
Essential trait of a great open source project:
Leaders & processes that can advance the project while marshalling imperfect contributors gracefully.
That's a really tall order. By that standard, my own Open Source projects are not great. However, I feel much more comfortable saying that the project is not great, rather than sorting the contributors.

If I were paying people to work on my project, I'd have no qualms about judging their performance anywhere along a continuum from “great” to “inadequate”. Contributors are NOT employees subject to performance review.  In my projects, if someone contributes, I consider both the contribution and the contributor to be “great”. The contribution may not make it into the project, but it was given to me for free, so it is naturally great by that fact alone.

Contribution: Voluntary Gift
Perhaps if the original posting had said, "these are the eight gifts we need" rather than saying that the gifts are traits of people we consider "great", I would not have been so uncomfortable.
A great Open Source project is one that produces a successful product and is inclusive. An Open Source project that produces a successful product, but is not inclusive, is merely successful.

Tim Taubert: Talk: Keeping secrets with JavaScript - An Introduction to the WebCrypto API

Mozilla planet - Wed, 29/10/2014 - 17:00

With the web slowly maturing as a platform the demand for cryptography in the browser has risen, especially in a post-Snowden era. Many of us have heard about the upcoming Web Cryptography API but at the time of writing there seem to be no good introductions available. We will take a look at the proposed W3C spec and its current state of implementation.

Video Slides Code

https://github.com/ttaubert/secret-notes
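
As a small taste of the API the talk covers, here is what hashing a string with SHA-256 looks like; everything under crypto.subtle returns a Promise, as specified by the W3C draft:

// Compute the SHA-256 digest of a UTF-8 string with the WebCrypto API.
var data = new TextEncoder().encode('keep this secret');
crypto.subtle.digest('SHA-256', data).then(function (digest) {
  // digest is an ArrayBuffer holding the 32-byte hash
  console.log(new Uint8Array(digest));
});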


Soledad Penades: Native smooth scrolling with JS

Mozilla planet - Wed, 29/10/2014 - 14:50

There’s a new way of invoking the scroll functions in JavaScript where you can specify how you want the scroll to behave: smoothly, immediately, or auto (whatever the user agent wants, I guess).

window.scrollBy({ top: 100, behavior: 'smooth' });

(note it’s behavior, not behaviour, argggh).

I read this post yesterday saying that it would be available (via this tweet from @FirefoxNightly) and immediately wanted to try it out!

I made sure I had an updated copy of Firefox Nightly—you’ll need a version from the 28th of October or later. Then I enabled the feature by going to about:config and changing layout.css.scroll-behavior.enabled to true. No restart required!

My test looks like this:

[Demo: native smooth scrolling]

(source code)

You can also use it in CSS code:

#myelement {
  scroll-behavior: smooth;
}

but my example doesn’t. Feel like building one yourself? :)

The reason why I’m so excited about this is that I’ve had to implement this behaviour many, many times with plug-ins and whatnot that tend to interfere with the rendering pipeline, and it’s amazing that this is going to be native to the browser, as it should be smooth and posh. And also because other native platforms have it too and it makes the web look “not cool”. Well, not anymore!

The other cool aspect is that it degrades gracefully—if the option is not recognised by the engine you will just get… a normal abrupt behaviour, but it will still scroll.
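
If you want to handle the fallback explicitly, you can feature-detect support via the corresponding CSS property and jump yourself (a small sketch of mine, not from the demo):

if ('scrollBehavior' in document.documentElement.style) {
  // The engine understands the options dictionary.
  window.scrollBy({ top: 100, behavior: 'smooth' });
} else {
  // Old-style call: same scroll, just without the animation.
  window.scrollBy(0, 100);
}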

I’m guessing that you can still use your not-so-performant plug-ins if you really want your own scroll algorithm (maybe you want it to bounce in a particular way, etc). Just use instant instead of smooth, and you should be good to go!

SCROLL SCROLL SCROLL SCROLL!


