
Daniel Pocock: A FOSScamp by the beach

Mozilla planet - Fri, 30/06/2017 - 10:47

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

Categories: Mozilla-nl planet

Mozilla Open Innovation Team: A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities

Mozilla planet - Fri, 30/06/2017 - 02:32
Photo credit: cyberdee via Visual Hunt / CC BY-NC-SA

Another year, another press story letting us know Open Source has a diversity problem. But this isn’t news — women, people of color, parents, non-technical contributors, cis/transgender and other marginalized people and allies have been sharing stories of challenge and overcoming for years. It can’t be enough to count who makes it through the gauntlet of tasks and exclusive cultural norms that lead to a first pull request; it’s not enough to celebrate increased diversity on stage at technical conferences — when the audience remains homogeneous, and abuse goes unchallenged.

Open source is missing out on diverse perspectives and experiences that can drive change for a better world because we’re stuck in our ways — continually leaning on long-held assumptions about why we lose people. At Mozilla, we believe that to truly influence positive change in Diversity & Inclusion in our communities, and more broadly in open source, we need to learn, empathize — and innovate. We’re committed to building on the good work of our peers to further grow through action — building bridges and collaborating with other communities also investing in D&I.

This year, leading with our organizational strategy for D&I, we are investing in our communities, informed by three months of research. Qualitative research was conducted across the globe, with over 85 interviews as part of either identity groups or focus groups, including interviews in the first language of participants; for areas of low bandwidth (or for those who preferred not to speak on video) we interviewed over Telegram.

Qualitative data was analyzed from various sources including Mozilla Reps portal, Mozillian Sentiment Survey, a series of applications to Global Leadership events, regional meetups, a regional community survey, and various smaller data sources.

For five weeks, beginning July 3rd, this blog series will share key findings — challenges, and experiments we’re investing in for the remainder of the year and into the next. As part of this, we intend to build bridges between our work and other open source communities’ research and work. At the end of this series we’ll post a link to schedule a presentation of this work to your community for input and future collaboration.


A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Emma Irwin: A Time for Action — Innovating for Diversity & Inclusion in Open Source Communities

Mozilla planet - Fri, 30/06/2017 - 02:28

Cross-posted to our Open Innovation Blog

Another year, another press story letting us know Open Source has a diversity problem. But this isn’t news — women, people of color, parents, non-technical contributors, cis/transgender and other marginalized people and allies have been sharing stories of challenge and overcoming for years. It can’t be enough to count who makes it through the gauntlet of tasks and exclusive cultural norms that lead to a first pull request; it’s not enough to celebrate increased diversity on stage at technical conferences — when the audience remains homogeneous, and abuse goes unchallenged.

Open source is missing out on diverse perspectives and experiences that can drive change for a better world because we’re stuck in our ways — continually leaning on long-held assumptions about why we lose people. At Mozilla, we believe that to truly influence positive change in Diversity & Inclusion in our communities, and more broadly in open source, we need to learn, empathize — and innovate. We’re committed to building on the good work of our peers to further grow through action — building bridges and collaborating with other communities also investing in D&I.

This year, leading with our organizational strategy for D&I, we are investing in our communities, informed by three months of research. Qualitative research was conducted across the globe, with over 85 interviews as part of either identity groups or focus groups, including interviews in the first language of participants; for areas of low bandwidth (or for those who preferred not to speak on video) we interviewed over Telegram.


Qualitative data was analyzed from various sources including Mozilla Reps portal, Mozillian Sentiment Survey, a series of applications to Global Leadership events, regional meetups, a regional community survey, and various smaller data sources.

For five weeks, beginning July 3rd, this blog series will share key findings — challenges, and experiments we’re investing in for the remainder of the year and into the next. As part of this, we intend to build bridges between our work and other open source communities’ research and work. At the end of this series we’ll post a link to schedule a presentation of this work to your community for input and future collaboration.

Cross-posted to our Open Innovation Blog

Feature image photo credit: cyberdee via Visual Hunt / CC BY-NC-SA



Robert O'Callahan: Patch On Linux Kernel Stable Branches Breaks rr

Mozilla planet - Fri, 30/06/2017 - 01:35

A change in 4.12rc5 breaks rr. We're trying to get it fixed before 4.12 is released, and I think that will be OK. Unfortunately that change has already been backported to 3.18.57, 4.4.72, 4.9.32 and 4.11.5 :-( (all released on June 14, and I guess arriving in distros a bit later). Obviously we'll try to get the 4.12 fix also backported to those branches, but that will take a little while.

The symptoms are that long, complex replays fail with "overshot target ticks=... by N", where N is generally a pretty large number (> 1000). If you look in the trace file, the value N will usually be similar to the difference between the target ticks and the previous ticks value for that task --- i.e. we tried to stop after N ticks but we actually stopped after about 2*N ticks. Unfortunately, rr tests don't seem to be affected.

I'm not sure if there's a reasonable workaround we can use in rr, or if there is one, whether it's worth effort to deploy. That may depend on how the conversation with upstream goes.


Tim Taubert: Verified Binary Multiplication for GHASH

Mozilla planet - Thu, 29/06/2017 - 19:45

Previously I introduced some very basic Cryptol and SAWScript, and explained how to reason about the correctness of constant-time integer multiplication written in C/C++.

In this post I will touch on using formal verification as part of the code review process, in particular show how, by using the Software Analysis Workbench, we saved ourselves hours of debugging when rewriting the GHASH implementation for NSS.

What’s GHASH again?

GHASH is part of the Galois/Counter Mode, a mode of operation for block ciphers. AES-GCM for example uses AES as the block cipher for encryption, and appends a tag generated by the GHASH function, thereby ensuring integrity and authenticity.

The core of GHASH is multiplication in GF(2^128), a characteristic-two finite field with coefficients in GF(2); they’re either zero or one. Polynomials in GF(2^m) can be represented as m-bit numbers, with each bit corresponding to a term’s coefficient. In GF(2^3) for example, x^2 + 1 may be represented as the binary number 0b101 = 5.

Additions and subtractions in finite fields are “carry-less” because the coefficients must be in GF(p), for any GF(p^m). As x * y is equivalent to adding x to itself y times, we can call multiplication in finite fields “carry-less” too. In GF(2) addition is simply XOR, so we can say that multiplication in GF(2^m) is equal to binary multiplication without carries.

Note that the term carry-less only makes sense when talking about GF(2^m) fields that are easily represented as binary numbers. Otherwise one would rather talk about multiplication in finite fields without comparing it to standard integer multiplication.
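To make the GF(2^3) example concrete, carry-less multiplication can be written as a plain shift-and-XOR loop (a standalone illustration with a hypothetical helper name, not code from NSS): (x^2 + 1) * (x + 1) = x^3 + x^2 + x + 1, i.e. 0b101 times 0b011 yields 0b1111.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Carry-less ("polynomial") multiplication: XOR a shifted copy of x into
 * the result for every set bit of y. No carries ever propagate. */
static uint64_t clmul(uint32_t x, uint32_t y)
{
    uint64_t r = 0;
    for (int i = 0; i < 32; i++) {
        if ((y >> i) & 1) {
            r ^= (uint64_t)x << i;
        }
    }
    return r;
}

int main(void)
{
    /* (x^2 + 1) * (x + 1) = x^3 + x^2 + x + 1 in GF(2)[x] */
    assert(clmul(0x5, 0x3) == 0xF);

    /* Where integer multiplication would carry, the results diverge:
     * 3 * 3 = 9, but clmul(3, 3) = 0b011 ^ 0b110 = 0b101 = 5. */
    assert(clmul(0x3, 0x3) == 0x5);

    printf("carry-less multiplication ok\n");
    return 0;
}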

Franziskus’ post nicely describes why and how we updated our AES-GCM code in NSS. In case a user’s CPU is not equipped with the Carry-less Multiplication (CLMUL) instruction set, we need to provide a fallback and implement carry-less, constant-time binary multiplication ourselves, using standard integer multiplication with carry.

bmul() for 32-bit machines

The basic implementation of our binary multiplication algorithm is taken straight from Thomas Pornin’s excellent constant-time crypto post. To support 32-bit machines the best we can do is multiply two uint32_t numbers and store the result in a uint64_t.

For the full GHASH, Karatsuba decomposition is used: multiplication of two 128-bit integers is broken down into nine calls to bmul32(x, y, ...). Let’s take a look at the actual implementation:

/* Binary multiplication x * y = r_high << 32 | r_low. */
void bmul32(uint32_t x, uint32_t y, uint32_t *r_high, uint32_t *r_low)
{
    uint32_t x0, x1, x2, x3;
    uint32_t y0, y1, y2, y3;
    uint32_t m1 = (uint32_t)0x11111111;
    uint32_t m2 = (uint32_t)0x22222222;
    uint32_t m4 = (uint32_t)0x44444444;
    uint32_t m8 = (uint32_t)0x88888888;
    uint64_t z0, z1, z2, z3;
    uint64_t z;

    /* Apply bitmasks. */
    x0 = x & m1;
    x1 = x & m2;
    x2 = x & m4;
    x3 = x & m8;
    y0 = y & m1;
    y1 = y & m2;
    y2 = y & m4;
    y3 = y & m8;

    /* Integer multiplication (16 times). */
    z0 = ((uint64_t)x0 * y0) ^ ((uint64_t)x1 * y3) ^
         ((uint64_t)x2 * y2) ^ ((uint64_t)x3 * y1);
    z1 = ((uint64_t)x0 * y1) ^ ((uint64_t)x1 * y0) ^
         ((uint64_t)x2 * y3) ^ ((uint64_t)x3 * y2);
    z2 = ((uint64_t)x0 * y2) ^ ((uint64_t)x1 * y1) ^
         ((uint64_t)x2 * y0) ^ ((uint64_t)x3 * y3);
    z3 = ((uint64_t)x0 * y3) ^ ((uint64_t)x1 * y2) ^
         ((uint64_t)x2 * y1) ^ ((uint64_t)x3 * y0);

    /* Merge results. */
    z0 &= ((uint64_t)m1 << 32) | m1;
    z1 &= ((uint64_t)m2 << 32) | m2;
    z2 &= ((uint64_t)m4 << 32) | m4;
    z3 &= ((uint64_t)m8 << 32) | m8;
    z = z0 | z1 | z2 | z3;

    *r_high = (uint32_t)(z >> 32);
    *r_low = (uint32_t)z;
}

Thomas’ explanation is not too hard to follow. The main idea behind the algorithm is the set of bitmasks m1 = 0b00010001..., m2 = 0b00100010..., m4 = 0b01000100..., and m8 = 0b10001000.... They respectively have the first, second, third, and fourth bit of every nibble set. This leaves “holes” of three bits between each “data bit”, so that with those applied at most a quarter of the 32 bits are equal to one.

In standard integer multiplication, multiplying two operands of eight data bits each will at most add eight ones together in a single column, thus we need sufficiently sized holes per digit that can hold the value 8 = 0b1000. Three-bit holes are big enough to prevent carries from “spilling” over; they could even handle up to 15 = 0b1111 data bits in each of the two integer operands.
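As a cheap cross-check of that argument, the masked multiply can be compared against an obviously correct shift-and-XOR reference on random inputs (a standalone sketch that duplicates the bmul32() shown above; clmul_ref is a hypothetical helper name, and this complements rather than replaces a formal proof):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* bmul32() as shown above (Thomas Pornin's constant-time binary multiply). */
static void bmul32(uint32_t x, uint32_t y, uint32_t *r_high, uint32_t *r_low)
{
    uint32_t m1 = 0x11111111, m2 = 0x22222222, m4 = 0x44444444, m8 = 0x88888888;
    uint32_t x0 = x & m1, x1 = x & m2, x2 = x & m4, x3 = x & m8;
    uint32_t y0 = y & m1, y1 = y & m2, y2 = y & m4, y3 = y & m8;
    uint64_t z0 = ((uint64_t)x0 * y0) ^ ((uint64_t)x1 * y3) ^
                  ((uint64_t)x2 * y2) ^ ((uint64_t)x3 * y1);
    uint64_t z1 = ((uint64_t)x0 * y1) ^ ((uint64_t)x1 * y0) ^
                  ((uint64_t)x2 * y3) ^ ((uint64_t)x3 * y2);
    uint64_t z2 = ((uint64_t)x0 * y2) ^ ((uint64_t)x1 * y1) ^
                  ((uint64_t)x2 * y0) ^ ((uint64_t)x3 * y3);
    uint64_t z3 = ((uint64_t)x0 * y3) ^ ((uint64_t)x1 * y2) ^
                  ((uint64_t)x2 * y1) ^ ((uint64_t)x3 * y0);
    z0 &= ((uint64_t)m1 << 32) | m1;
    z1 &= ((uint64_t)m2 << 32) | m2;
    z2 &= ((uint64_t)m4 << 32) | m4;
    z3 &= ((uint64_t)m8 << 32) | m8;
    uint64_t z = z0 | z1 | z2 | z3;
    *r_high = (uint32_t)(z >> 32);
    *r_low = (uint32_t)z;
}

/* Slow but obviously correct reference: schoolbook shift-and-XOR multiply. */
static uint64_t clmul_ref(uint32_t x, uint32_t y)
{
    uint64_t r = 0;
    for (int i = 0; i < 32; i++)
        if ((y >> i) & 1)
            r ^= (uint64_t)x << i;
    return r;
}

int main(void)
{
    /* A simple xorshift PRNG for test inputs (illustrative, not exhaustive). */
    uint32_t s = 0xdeadbeef;
    for (int i = 0; i < 100000; i++) {
        s ^= s << 13; s ^= s >> 17; s ^= s << 5;
        uint32_t x = s;
        s ^= s << 13; s ^= s >> 17; s ^= s << 5;
        uint32_t y = s;
        uint32_t hi, lo;
        bmul32(x, y, &hi, &lo);
        assert((((uint64_t)hi << 32) | lo) == clmul_ref(x, y));
    }
    printf("bmul32 matches reference\n");
    return 0;
}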

Review, tests, and verification

The first version of the patch came with a bunch of new tests, the vectors taken from the GCM specification. We previously had no such low-level coverage, all we had were a number of high-level AES-GCM tests.

When reviewing, after looking at the patch itself and applying it locally to see whether it builds and tests succeed, the next step I wanted to try was to write a Cryptol specification to prove the correctness of bmul32(). Thanks to the built-in pmult function that took only a few minutes.

m <- llvm_load_module "bmul.bc";

let {{
  bmul32 : [32] -> [32] -> ([32], [32])
  bmul32 a b = (take`{32} prod, drop`{32} prod)
      where prod = pad (pmult a b)
            pad x = zero # x
}};

The SAWScript needed to properly parse the LLVM bitcode and formulate the equivalence proof is straightforward; it’s basically the same as shown in the previous post.

llvm_verify m "bmul32" [] do {
  x <- llvm_var "x" (llvm_int 32);
  y <- llvm_var "y" (llvm_int 32);
  llvm_ptr "r_high" (llvm_int 32);
  r_high <- llvm_var "*r_high" (llvm_int 32);
  llvm_ptr "r_low" (llvm_int 32);
  r_low <- llvm_var "*r_low" (llvm_int 32);

  let res = {{ bmul32 x y }};
  llvm_ensure_eq "*r_high" {{ res.0 }};
  llvm_ensure_eq "*r_low" {{ res.1 }};
  llvm_verify_tactic abc;
};

Compile to bitcode and run SAW. After just a few seconds it will tell us it succeeded in proving equivalence of both implementations.

$ saw bmul.saw
Loading module Cryptol
Loading file "bmul.saw"
Successfully verified @bmul32

bmul() for 64-bit machines

bmul32() is called nine times, each time performing 16 multiplications. That’s 144 multiplications in total for one GHASH evaluation. If we had a bmul64() for 128-bit multiplication with uint128_t we’d need to call it only thrice.

The naive approach taken in the first patch revision was to just double the bitsize of the arguments and variables, and also extend the bitmasks. If you paid close attention to the previous section you might notice a problem here already. If not, it will become clear in a few moments.

typedef unsigned __int128 uint128_t;

/* Binary multiplication x * y = r_high << 64 | r_low. */
void bmul64(uint64_t x, uint64_t y, uint64_t *r_high, uint64_t *r_low)
{
    uint64_t x0, x1, x2, x3;
    uint64_t y0, y1, y2, y3;
    uint64_t m1 = (uint64_t)0x1111111111111111;
    uint64_t m2 = (uint64_t)0x2222222222222222;
    uint64_t m4 = (uint64_t)0x4444444444444444;
    uint64_t m8 = (uint64_t)0x8888888888888888;
    uint128_t z0, z1, z2, z3;
    uint128_t z;

    /* Apply bitmasks. */
    x0 = x & m1;
    x1 = x & m2;
    x2 = x & m4;
    x3 = x & m8;
    y0 = y & m1;
    y1 = y & m2;
    y2 = y & m4;
    y3 = y & m8;

    /* Integer multiplication (16 times). */
    z0 = ((uint128_t)x0 * y0) ^ ((uint128_t)x1 * y3) ^
         ((uint128_t)x2 * y2) ^ ((uint128_t)x3 * y1);
    z1 = ((uint128_t)x0 * y1) ^ ((uint128_t)x1 * y0) ^
         ((uint128_t)x2 * y3) ^ ((uint128_t)x3 * y2);
    z2 = ((uint128_t)x0 * y2) ^ ((uint128_t)x1 * y1) ^
         ((uint128_t)x2 * y0) ^ ((uint128_t)x3 * y3);
    z3 = ((uint128_t)x0 * y3) ^ ((uint128_t)x1 * y2) ^
         ((uint128_t)x2 * y1) ^ ((uint128_t)x3 * y0);

    /* Merge results. */
    z0 &= ((uint128_t)m1 << 64) | m1;
    z1 &= ((uint128_t)m2 << 64) | m2;
    z2 &= ((uint128_t)m4 << 64) | m4;
    z3 &= ((uint128_t)m8 << 64) | m8;
    z = z0 | z1 | z2 | z3;

    *r_high = (uint64_t)(z >> 64);
    *r_low = (uint64_t)z;
}

Tests and another equivalence proof

The above version of bmul64() passed the GHASH test vectors with flying colors. That tricked reviewers into thinking it looked just fine, even if they just learned about the basic algorithm idea. Fallible humans. Let’s update the proofs and see what happens.

bmul : {n,m} (fin n, n >= 1, m == n*2 - 1) => [n] -> [n] -> ([n], [n])
bmul a b = (take`{n} prod, drop`{n} prod)
    where prod = pad (pmult a b : [m])
          pad x = zero # x

Instead of hardcoding bmul for 32-bit integers we use polymorphic types m and n to denote the size in bits. m is mostly a helper to make it a tad more readable. We can now reason about carry-less n-bit binary multiplication.

Duplicating the SAWScript spec and running :s/32/64 is easy, but it’s certainly nicer to add a function that takes n as a parameter and returns a spec for n-bit arguments.

let SpecBinaryMul n = do {
  x <- llvm_var "x" (llvm_int n);
  y <- llvm_var "y" (llvm_int n);
  llvm_ptr "r_high" (llvm_int n);
  r_high <- llvm_var "*r_high" (llvm_int n);
  llvm_ptr "r_low" (llvm_int n);
  r_low <- llvm_var "*r_low" (llvm_int n);

  let res = {{ bmul x y }};
  llvm_ensure_eq "*r_high" {{ res.0 }};
  llvm_ensure_eq "*r_low" {{ res.1 }};
  llvm_verify_tactic abc;
};

llvm_verify m "bmul32" [] (SpecBinaryMul 32);
llvm_verify m "bmul64" [] (SpecBinaryMul 64);

We use two instances of the bmul spec to prove correctness of bmul32() and bmul64() sequentially. The second verification will take a lot longer before yielding results.

$ saw bmul.saw
Loading module Cryptol
Loading file "bmul.saw"
Successfully verified @bmul32
When verifying @bmul64:
Proof of Term *(Term Ident "r_high") failed.
Counterexample:
  %x: 15554860936645695441
  %y: 17798150062858027007
  lss__alloc0: 262144
  lss__alloc1: 8
Term *(Term Ident "r_high")
Encountered: 5413984507840984561
Expected:    5413984507840984531
saw: user error ("llvm_verify" (bmul.saw:31:1): Proof failed.)

Proof failed. As you probably expected by now, the bmul64() implementation is erroneous and SAW gives us a specific counterexample to investigate further. It took us a while to understand the failure but it seems very obvious in hindsight.

Fixing the bmul64() bitmasks

As already shown above, bitmasks leaving three-bit holes between data bits can avoid carry-spilling for operands with up to 15 data bits each. Using every fourth bit of a 64-bit argument however yields 16 data bits per operand, and carries can thus overwrite data bits. We need bitmasks with four-bit holes.

/* Binary multiplication x * y = r_high << 64 | r_low. */
void bmul64(uint64_t x, uint64_t y, uint64_t *r_high, uint64_t *r_low)
{
    uint128_t x1, x2, x3, x4, x5;
    uint128_t y1, y2, y3, y4, y5;
    uint128_t r, z;

    /* Define bitmasks with 4-bit holes. */
    uint128_t m1 = (uint128_t)0x2108421084210842 << 64 | 0x1084210842108421;
    uint128_t m2 = (uint128_t)0x4210842108421084 << 64 | 0x2108421084210842;
    uint128_t m3 = (uint128_t)0x8421084210842108 << 64 | 0x4210842108421084;
    uint128_t m4 = (uint128_t)0x0842108421084210 << 64 | 0x8421084210842108;
    uint128_t m5 = (uint128_t)0x1084210842108421 << 64 | 0x0842108421084210;

    /* Apply bitmasks. */
    x1 = x & m1; y1 = y & m1;
    x2 = x & m2; y2 = y & m2;
    x3 = x & m3; y3 = y & m3;
    x4 = x & m4; y4 = y & m4;
    x5 = x & m5; y5 = y & m5;

    /* Integer multiplication (25 times) and merge results. */
    z = (x1 * y1) ^ (x2 * y5) ^ (x3 * y4) ^ (x4 * y3) ^ (x5 * y2);
    r = z & m1;
    z = (x1 * y2) ^ (x2 * y1) ^ (x3 * y5) ^ (x4 * y4) ^ (x5 * y3);
    r |= z & m2;
    z = (x1 * y3) ^ (x2 * y2) ^ (x3 * y1) ^ (x4 * y5) ^ (x5 * y4);
    r |= z & m3;
    z = (x1 * y4) ^ (x2 * y3) ^ (x3 * y2) ^ (x4 * y1) ^ (x5 * y5);
    r |= z & m4;
    z = (x1 * y5) ^ (x2 * y4) ^ (x3 * y3) ^ (x4 * y2) ^ (x5 * y1);
    r |= z & m5;

    *r_high = (uint64_t)(r >> 64);
    *r_low = (uint64_t)r;
}

m1, …, m5 are the new bitmasks. m1 equals 0b0010000100001..., the others are each shifted by one. As the number of data bits per argument is now 64/5 <= n < 64/4 we need 5*5 = 25 multiplications. With three calls to bmul64() that’s 75 in total.
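The key properties of the new masks can be checked mechanically (a standalone sketch relying on GCC/Clang's unsigned __int128 extension; popcount64 is a hypothetical helper): the five masks must be pairwise disjoint and together cover all 128 result bits, and a 64-bit operand masked with any of them keeps at most 13 data bits, so a result column accumulates at most 13 partial products and carries stay inside the four-bit holes.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef unsigned __int128 uint128_t;

/* The five bitmasks from bmul64(): one data bit per five result bits. */
static const uint128_t m1 = (uint128_t)0x2108421084210842 << 64 | 0x1084210842108421;
static const uint128_t m2 = (uint128_t)0x4210842108421084 << 64 | 0x2108421084210842;
static const uint128_t m3 = (uint128_t)0x8421084210842108 << 64 | 0x4210842108421084;
static const uint128_t m4 = (uint128_t)0x0842108421084210 << 64 | 0x8421084210842108;
static const uint128_t m5 = (uint128_t)0x1084210842108421 << 64 | 0x0842108421084210;

static int popcount64(uint64_t v)
{
    int n = 0;
    while (v) {
        v &= v - 1; /* clear lowest set bit */
        n++;
    }
    return n;
}

int main(void)
{
    const uint128_t masks[5] = { m1, m2, m3, m4, m5 };
    uint128_t all = 0;

    for (int i = 0; i < 5; i++) {
        /* Pairwise disjoint... */
        for (int j = i + 1; j < 5; j++) {
            assert((masks[i] & masks[j]) == 0);
        }
        all |= masks[i];

        /* ...and a masked 64-bit operand keeps at most 13 data bits, so a
         * column sums to at most 13 < 16, which fits in a four-bit hole
         * and cannot spill into the next data bit. */
        assert(popcount64((uint64_t)masks[i]) <= 13);
    }

    /* ...and together they cover all 128 result bits. */
    assert(all == ~(uint128_t)0);

    printf("bmul64 masks ok\n");
    return 0;
}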

Run SAW again and, after about an hour, it will tell us it successfully verified @bmul64.

$ saw bmul.saw
Loading module Cryptol
Loading file "bmul.saw"
Successfully verified @bmul32
Successfully verified @bmul64

You might want to take a look at Thomas Pornin’s version of bmul64(). It basically is the faulty version that SAW failed to verify; he however works around the overflow by calling it twice, passing the arguments bit-reversed the second time. He invokes bmul64() six times, which results in a total of 96 multiplications.

Some final thoughts

One of the takeaways is that even an implementation passing all test vectors given by a spec doesn’t need to be correct. That is not too surprising, spec authors can’t possibly predict edge cases from implementation approaches they haven’t thought about.

Using formal verification as part of the review process was definitely a wise decision. We likely saved hours of debugging intermittently failing connections, or random interoperability problems reported by early testers. I’m confident this wouldn’t have made it much further down the release line.

We of course added an extra test that covers that specific flaw, but the next step definitely should be proper CI integration. The Cryptol code has already been written and there is no reason not to run it on every push. Verifying the full GHASH implementation would be ideal. The Cryptol code is almost trivial:

ghash : [128] -> [128] -> [128] -> ([64], [64])
ghash h x buf = (take`{64} res, drop`{64} res)
    where prod = pmod (pmult (reverse h) xor) <| x^^128 + x^^7 + x^^2 + x + 1 |>
          xor = (reverse x) ^ (reverse buf)
          res = reverse prod

Proving the multiplication of two 128-bit numbers for a 256-bit product will unfortunately take a very very long time, or maybe not finish at all. Even if it finished after a few days that’s not something you want to automatically run on every push. Running it manually every time the code is touched might be an option though.


Mozilla Open Design Blog: Join us on our research trip to learn more about Conscious Choosers!

Mozilla planet - Thu, 29/06/2017 - 19:10

Atlanta! Kansas City! Austin! Are you interested in helping us make the Internet more healthy, open and accessible?

The Mozilla Audience Insights team, with the help of our good friend and researcher Roberta Tassi, and the remote support of information designers Giorgio Uboldi and Matteo Azzi, will be visiting your cities to understand more about Conscious Choosers – a group of people whose commitment to clear values and beliefs drives their behavior and choices in terms of the brands, companies and organizations they support and use. We would like to learn more about their experiences and the role that the internet and technology play in their lives. This is an important group for Mozilla to understand because we believe people who think this way can also help us with our mission to keep the Internet open, accessible and healthy.


We would love to meet folks in person in:

Atlanta: July 11 – July 14, 2017

Kansas City: July 15 – July 19, 2017

Austin: July 20 – July 25, 2017


A few options to participate:

# Are you interested in meeting in person and contributing to the conversation with other people in the community?

Join us for a group discussion on the role of the Internet in our daily life, as individuals and societies, and why it’s important to keep it healthy.

# Are you a group of volunteers, a community or a non-profit organization that promotes some sort of activism and are targeting the Conscious Chooser audience as well?

We would like to come visit you and know more about what you do at the local level.

# Do you know your city and community well and want to learn more about human-centered design?

Join us as a local guide and participate in the whole research process.  You’ll get to experience a real field activity and synthesis with an expert team of designers and researchers.

If you answered yes to any of the above, or if you know anyone who fits the description, please reach us at


One more possibility:

# Are you someone who has strong personal values and beliefs and you expect the companies you support to have the same standards? Have you rejected a brand because you do not believe in their company values and business practices? Do you carefully research a product and company before you purchase?

Join us for individual interviews and help us better understand the values that drive your decisions and the motivations behind them.


We believe a healthy Internet is diverse and inclusive, so we would like to connect with a diverse group of participants. If you’re interested, start by filling out this form, so we can learn a little bit about you. We will handle your information as described in our Privacy Policy and will delete your information if you are not selected as a participant. We will contact you by July 22 if you have been selected.

For those who are not located in these US cities, you can participate in the discussion online on GitHub.

If we encounter an interesting nugget during our discussion, we will be sure to post it to try and gather more thoughts from you all!

If you have any questions, please feel free to reach out to us at:

The post Join us on our research trip to learn more about Conscious Choosers! appeared first on Mozilla Open Design.


Hacks.Mozilla.Org: Introducing HumbleNet: a cross-platform networking library that works in the browser

Mozilla planet - Thu, 29/06/2017 - 18:50

HumbleNet started out as a project at Humble Bundle in 2015 to support an initiative to port peer-to-peer multiplayer games, at first to asm.js and now to WebAssembly. In 2016, Mozilla’s web games program identified the need to enable UDP (User Datagram Protocol) networking support for web games, and asked if they could work with Humble Bundle to release the project as open source. Humble Bundle graciously agreed, and Mozilla worked with them to polish and document HumbleNet. Today we are releasing the 1.0 version of this library to the world!

Why another networking library?

When the idea of HumbleNet first emerged we knew we could use WebSockets to enable multiplayer gaming on the web. This approach would require us to either replace the entire protocol with WebSockets (the approach taken by the asm.js port of Quake 3), or to tunnel UDP traffic through a WebSocket connection to talk to a UDP-based server at a central location.

In order to work, both approaches require a middleman to handle all network traffic between all clients. WebSockets is good for games that require a reliable ordered communication channel, but real-time games require a lower latency solution. And most real-time games care more about receiving the most recent data than getting ALL of the data in order. WebRTC’s UDP-based data channel fills this need perfectly. HumbleNet provides an easy-to-use API wrapper around WebRTC that enables real-time UDP connections between clients using the WebRTC data channel.

What exactly is HumbleNet?

HumbleNet is a simple C API that wraps WebRTC and WebSockets and hides away all the platform differences between browser and non-browser platforms. The current version of the library exposes a simple peer-to-peer API that allows for basic peer discovery and the ability to easily send data (via WebRTC) to other peers. In this manner, you can build a game that runs on Linux, macOS, and Windows, while using any web browser — and they can all communicate in real-time via WebRTC.  This means no central server (except for peer discovery) is needed to handle network traffic for the game. The peers can talk directly to each other.

HumbleNet itself uses a single WebSocket connection to manage peer discovery. This connection only handles requests such as “let me authenticate with you”, “what is the peer ID for a server named bobs-game-server?”, and “connect me to peer #2345”. After the peer connection is established, the games communicate directly over WebRTC.

HumbleNet demos

We have integrated HumbleNet into asm.js ports of Quake 2 and Quake 3, and we provide a simple Unity3D demo as well.

Here is a simple video of me playing Quake 3 against myself. One game running in Firefox 54 (general release), the other in Firefox Developer Edition.

Getting started

You can find pre-built redistributables at These include binaries for Linux, macOS, Windows, a C# wrapper, Unity3D plugin, and emscripten (for targeting asm.js or WebAssembly).

Starting your peer server

Read the documentation about the peer server on the website. In general, for local development, simply starting the peer server is good enough. By default it will run in non-SSL mode on port 8080.

Using the HumbleNet API

Initializing the library

To initialize HumbleNet just call humblenet_init() and then later humblenet_p2p_init(). The second call will initiate the connection to the peer server with the specified credentials.

humblenet_init();

// this initializes the P2P portion of the library, connecting to the given peer
// server with the game token/secret (used by the peer server to validate the client).
// the 4th parameter is for future use to authenticate the user with the peer server
humblenet_p2p_init("ws://localhost:8080/ws", "game token", "game secret", NULL);

Getting your local peer id

Before you can send any data to other peers, you need to know what your own peer ID is. This can be done by periodically polling the humblenet_p2p_get_my_peer_id() function.

// initialization loop (getting a peer)
static PeerId myPeer = 0;
while (myPeer == 0) {
    // allow the polling to run
    humblenet_p2p_wait(50);
    // fetch a peer
    myPeer = humblenet_p2p_get_my_peer_id();
}

Sending data

To send data, we call humblenet_p2p_sendto. The 3rd parameter is the send mode type. Currently HumbleNet implements two modes: SEND_RELIABLE and SEND_RELIABLE_BUFFERED. The buffered version will attempt to do local buffering of several small messages and send one larger message to the other peer; they will be broken apart on the other end transparently.

void send_message(PeerId peer, MessageType type, const char* text, int size)
{
    if (size > 255) {
        return;
    }

    uint8_t buff[MAX_MESSAGE_SIZE];
    buff[0] = (uint8_t)type;
    buff[1] = (uint8_t)size;
    if (size > 0) {
        memcpy(buff + 2, text, size);
    }

    humblenet_p2p_sendto(buff, size + 2, peer, SEND_RELIABLE, CHANNEL);
}

Initial connections to peers

When initially connecting to a peer for the first time you will have to send an initial message several times while the connection is established. The basic approach here is to send a hello message once a second, and wait for an acknowledge response before assuming the peer is connected. Thus, minimally, any application will need 3 message types: HELLO, ACK, and some kind of DATA message type.

if (newPeer.status == PeerStatus::CONNECTING) {
    time_t now = time(NULL);
    if (now > newPeer.lastHello) {
        // try once a second
        send_message(newPeer.id, MessageType::HELLO, "", 0);
        newPeer.lastHello = now;
    }
}

Retrieving data

To actually retrieve data that has been sent to your peer you need to use humblenet_p2p_peek and humblenet_p2p_recvfrom. If you assume that all messages are smaller than a maximum size, then a simple loop like this can be used to process any pending messages. Note: messages larger than your buffer size will be truncated. Using humblenet_p2p_peek you can see the size of the next message for the specified channel.

uint8_t buff[MAX_MESSAGE_SIZE];
bool done = false;

while (!done) {
    PeerId remotePeer = 0;
    int ret = humblenet_p2p_recvfrom(buff, sizeof(buff), &remotePeer, CHANNEL);
    if (ret < 0) {
        if (remotePeer != 0) {
            // disconnected client
        } else {
            // error
            done = true;
        }
    } else if (ret > 0) {
        // we received data, process it
        process_message(remotePeer, buff, sizeof(buff), ret);
    } else {
        // 0 return value means no more data to read
        done = true;
    }
}

Shutting down the library

To disconnect from the peer server, other clients, and shut down the library, simply call humblenet_shutdown.

humblenet_shutdown();

Finding other peers

HumbleNet currently provides a simple “DNS” like method of locating other peers.  To use this you simply register a name with a client, and then create a virtual peer on the other clients. Take the client-server style approach of Quake3 for example – and have your server register its name as “awesome42.”


Then, on your other peers, create a virtual peer for awesome42.

PeerID serverPeer = humblenet_p2p_virtual_peer_for_alias("awesome42");

Now the client can send data to serverPeer and HumbleNet will take care of translating the virtual peer to the actual peer once it resolves the name.

We have two systems on the roadmap that will improve peer discovery. One is an event system that lets you request that a peer be resolved and notifies you once it is. The second is a proper lobby system that lets you create, search, and join lobbies as a more generic means of finding open games without needing to know any name up front.

Development Roadmap

We have a roadmap of what we plan on adding now that the project is released. Keep an eye on the HumbleNet site for the latest development.

Future work items include:

  1. Event API
    • Allows a simple SDL2-style polling event system so that game code can easily check for various events from the peer server in a cleaner way, such as connects, disconnects, etc.
  2. Lobby API
    • Uses the Event API to build a means of creating lobbies on the peer server in order to locate game sessions (instead of having to register aliases).
  3. WebSocket API
    • Adds in support to easily connect to any websocket server with a clean, simple API.
How can I contribute?

If you want to help out and contribute to the project, HumbleNet is being developed on GitHub: use the issue tracker and pull requests to contribute code, and be sure to read the guide on how to create a pull request.

Categorieën: Mozilla-nl planet

Anthony Hughes: Some-Hands

Mozilla planet - do, 29/06/2017 - 05:06

[Opinions expressed herein are my own and have not been endorsed by Mozilla]

This week Mozillians from around the world are taking part in the biannual tradition of descending on a city for a week-long all-hands meeting, in this case San Francisco. Unfortunately, for the first time in 9 years, I’m not participating.

My work(fromhome)station for the Mozilla SF All-hands, June 2017

So what, what are you really missing?

On the surface it’s basically a 4-day meeting marathon, but to me the meetings are the least valuable part of the week; these events are about relationships. All-hands is an opportunity to meet friends new and old, to share stories of survival, and to laugh in the face of our failures. It is a time to talk candidly with leadership, to have real-time discussions with people I normally speak with over IRC or e-mail, and to talk with people I never get a chance to otherwise. Sharing stories in person is the best thing about these events.

This sounds awesome, why would you not go?

I want to begin by crediting Mozilla for standing up to the cultural regression that is occurring. They came out against Trump’s travel ban and have taken extra steps to ensure legal support during our travels and security at the All-hands. I commend them for taking these steps but it is still not enough to persuade me to risk crossing the border.

First and foremost I know some Mozillians have been barred from going simply because of their background. I chose not to go out in protest of Trump’s draconian policies and in solidarity with those of my peers who haven’t been given the same choice.

Secondly, I am concerned that the attitudes in the White House have given license to bigotry. As someone who identifies as a member of the LGBTQ community, I am worried about a regressive, draconian executive order being signed that targets this community. If that were to happen I would fully expect protests in San Francisco, and they would become a distraction from everything I’d come to accomplish, as I wouldn’t think twice about joining in, possibly resulting in my deportation.

Finally, I want to avoid US border guards. I know myself: I respect authority as long as they respect my dignity and treat me as a human being. I will not hesitate to fight back and make my voice heard if I feel mistreated, the likely outcome of which is detainment and/or being escorted out. Statistically the chance of this happening is low, but it’s not zero, so I have chosen to avoid the situation altogether. I’m just not willing to put myself or Mozilla in this position.

My experience at the US border has never been a positive one. Whether I was traveling for business and had the proper clearance, or if I was just heading down to the US for vacation, US Border guards have made me feel progressively less welcome in their country (clearly I’m not alone). Under Trump this has reached a point where I can’t be bothered anymore. Life is short and there’s many more welcoming countries I’ve yet to explore.

What about the next all-hands?

I always look forward to these gatherings. It’s often a rollercoaster of emotions and I always leave more tired but more re-energized than when I arrived. I often rediscover my passion to fight for the open web. It pains me to be missing out but I know it’d pain me more if I’d gone knowing some of my peers were being deprived of this opportunity due to an untenable political situation. I hope this is a mere blip and that I can one day join my friends in the US but for now it is terre sans homme, for business and for pleasure.


Categorieën: Mozilla-nl planet

Gervase Markham: My Addons

Mozilla planet - do, 29/06/2017 - 01:59

Firefox Nightly (will be 56) already no longer supports addons which are not multiprocess-compatible. And Firefox 57 will not support “Legacy” addons – those which use XUL, XPCOM or the Addons SDK. I just started using Nightly instead of Aurora as my main browser, at Mark Mayo’s request :-), and this is what I found (after doing “Update Addons”):

  • Addons installed: 37
  • Non-multiprocess-compatible addons (may also be marked Legacy): 21 (57%)
  • Legacy addons: 5 (14%)
  • Addons which will work in 57, if nothing changes: 11 (29%)

Useful addons which no longer work as of now are: 1-Click YouTube Video Downloader, Advertising Cookie Opt-Out, AutoAuth, Expiry Canary (OK, I wrote that one, that’s my fault), Google Translator, Live HTTP Headers, Mass Password Reset, RESTClient, and User Agent Switcher.

Useful addons which will also no longer work in 57 (if nothing changes) include: Adblock Plus, HTTPS Everywhere, JSONView, and Send to Kodi.

I’m sure Adblock Plus is being updated, because it would be sheer madness to go ahead with 57 if it were not. As for the rest – who knows? There doesn’t seem to be a way of finding out other than researching each one individually.

In the Firefox Town Hall (I think it was), a question was asked about addons and whether we felt we were in a good place in terms of people not having a bad experience with their addons stopping working. The answer came back that we were. I fully admit I may not be a typical user, but it seems this will not be my experience… :-(

Categorieën: Mozilla-nl planet

Daniel Stenberg: Denied entry

Mozilla planet - wo, 28/06/2017 - 23:47

 – Sorry, you’re not allowed entry to the US on your ESTA.

The lady who delivered this message to me early this Monday morning worked behind the check-in counter at Arlanda airport. I was there trying to check in for my two-leg trip to San Francisco, to the Mozilla “all hands” meeting of the summer of 2017 – my chance for a while ahead to meet up with colleagues from all around the world.

This short message prevented me from embarking on one journey, but instead took me on another.

Returning home

I was in a bit of shock over this treatment, really. I mean, I wasn’t treated particularly badly or anything, but just the fact that they downright refused to take me on, for unspecified reasons, wasn’t easy to swallow. I sat down for a few moments trying to gather my thoughts on what to do next. I then sent a few tweets expressing my deep disappointment at what happened, emailed my manager and some others at Mozilla to say what had happened and that I couldn’t come to the meeting, and then finally walked out the door again and traveled back home.

This tweet sums up what I felt at the time:

Going back home. To cry. To curse. To write code from home instead. Fricking miserable morning. No #sfallhands for me.

— Daniel Stenberg (@bagder) 26 June 2017

Then the flood

That Monday passed with some casual conversations with people about what I had experienced, and then…

Someone posted about me to Hacker News. That post quickly rose to the top position, and it began: my Twitter feed suddenly went crazy with people following me and retweeting my rejection tweets from the day before. Several well-followed people retweeted me, which brought even more new followers and replies.

By the end of the Tuesday, I had about 2,000 new followers, and Twitter notifications were flying by at high speed.

I was contacted by writers and reporters. The German Linux Magazine was first out to post about me, and then others did the same. I talked to Kate Conger at Gizmodo, who wrote Mozilla Employee Denied Entry to the United States. The Register wrote about me. I was for a moment considered for a TV interview, but I think they realized that we had too few facts to actually know why I was denied, so maybe it wasn’t really that newsworthy.

These articles of course boosted my Twitter traffic even more.

In the flood of responses, the vast majority were positive and supportive of me. Lots of people highlighted the role of curl and acknowledged that my work on that project has benefited quite a lot of internet-related software in the world. A whole bunch of the responses offered to help me in various ways. The most highlighted is probably this one from Microsoft’s Chief Legal Officer Brad Smith:

I’d be happy to have one of our U.S. immigration legal folks talk with you to see if there’s anything we can do to help. Let me know.

— Brad Smith (@BradSmi) 27 June 2017

I also received a bunch of emails. Some of them from people who offered help – and I must say I’m deeply humbled and grateful by the amount of friends I apparently have and the reach this got.

Some of the emails also echoed the spirit of some of the twitter replies I got: quite a few Americans feel guilty, ashamed or otherwise apologize for what happened to me. However, I personally do not at all think of this setback as something that my American friends are behind. And I have many.

Mozilla legal

Tuesday evening I had a phone call with Mozilla’s legal chief about my situation, and helped clarify exactly what I had done, what I had been told, and what had happened. There’s a team working now to help me sort out what happened and why, and what I and we can do about it so that I don’t have to experience this again the next time I want to travel to the US. People are involved on both the US and the Swedish sides of things.

Personally I don’t have any plans to travel to the US in the near future so there’s no immediate rush. I had already given up attending this Mozilla all-hands.


Mark Nottingham sent an email to the QUIC working group’s mailing list; here follow two selected sections from it:

You may have seen reports that someone who participates in this work was recently refused entry to the US*, for unspecified reasons.

We won’t hold any further interim meetings in the US, until there’s a change in this situation. This means that we’ll either need to find suitable hosts in Canada or Mexico, or our meeting rotation will need to change to be exclusively Europe and Asia.

I trust I don’t actually need to point out that I am that “someone” and again I’m impressed and humbled by the support and actions in my community.

Now what?

I’m now (end of Wednesday, 60 hours since the check-in counter) at 3,000 more Twitter followers than I started this Monday morning with. This has turned out to be a totally crazy week and it has severely impacted my productivity. I need to get back to writing code; I’m falling behind!

I hope we’ll get some answers soon as to why I was denied and what I can do to fix this for the future. When I get that, I will share all the info I can with you all.

So, back to work!

Thanks again

Before I forget: thank you all. Again. With all my heart. The amount of love I’ve received these last two days is amazing.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Building the Web of Things

Mozilla planet - wo, 28/06/2017 - 20:36

Mozilla is working to create a Web of Things framework of software and services that can bridge the communication gap between connected devices. By providing these devices with web URLs and a standardized data model and API, we are moving towards a more decentralized Internet of Things that is safe, open and interoperable.

The Internet and the World Wide Web are built on open standards which are decentralized by design, with anyone free to implement those standards and connect to the network without the need for a central point of control. This has resulted in the explosive growth of hundreds of millions of personal computers and billions of smartphones which can all talk to each other over a single global network.

As technology advances from personal computers and smartphones to a world where everything around us is connected to the Internet, new types of devices in our homes, cities, cars, clothes and even our bodies are going online every day.

The Internet of Things

The “Internet of Things” (IoT) is a term to describe how physical objects are being connected to the Internet so that they can be discovered, monitored, controlled or interacted with. Like any advancement in technology, these innovations bring with them enormous new opportunities, but also new risks.

At Mozilla our mission is “to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.”

This mission has never been more important than today, a time when everything around us is being designed to connect to the Internet. As new types of devices come online, they bring with them significant new challenges around security, privacy and interoperability.

Many of the new devices connecting to the Internet are insecure, do not receive software updates to fix vulnerabilities, and raise new privacy questions around the collection, storage, and use of large quantities of extremely personal data.

Additionally, most IoT devices today use proprietary vertical technology stacks which are built around a central point of control and which don’t always talk to each other. When they do talk to each other, it requires per-vendor integrations to connect those systems together. There are efforts to create standards, but the landscape is extremely complex and there is not yet a single dominant model or market leader.

A chart of leading proprietary IoT stacks

The Web of Things

Using the Internet of Things today is a lot like sharing information on the Internet before the World Wide Web existed. There were competing hypertext systems and proprietary GUIs, but the Internet lacked a unifying application layer protocol for sharing and linking information.

The “Web of Things” (WoT) is an effort to take the lessons learned from the World Wide Web and apply them to IoT. It’s about creating a decentralized Internet of Things by giving Things URLs on the web to make them linkable and discoverable, and defining a standard data model and APIs to make them interoperable.

A table showing Web of Things standards

The Web of Things is not just another vertical IoT technology stack to compete with existing platforms. It is intended as a unifying horizontal application layer to bridge together multiple underlying IoT protocols.

Rather than start from scratch, the Web of Things is built on existing, proven web standards like REST, HTTP, JSON, WebSockets and TLS (Transport Layer Security). The Web of Things will also require new web standards. In particular, we think there is a need for a Web Thing Description format to describe things, a REST-style Web Thing API to interact with them, and possibly a new generation of HTTP better optimised for IoT use cases and for resource-constrained devices.

The Web of Things is not just a Mozilla initiative; there is already a well-established Web of Things community and related standardization efforts at the IETF, W3C, OCF and OGC. Mozilla plans to participate in this community to help define new web standards and promote best practices around privacy, security and interoperability.

From this existing work three key integration patterns have emerged for connecting things to the web, defined by the point at which a Web of Things API is exposed to the Internet.

Diagram comparing Direct, Gateway, and Cloud Integration Patterns

Direct Integration Pattern

The simplest pattern is the direct integration pattern where a device exposes a Web of Things API directly to the Internet. This is useful for relatively high powered devices which can support TCP/IP and HTTP and can be directly connected to the Internet (e.g. a WiFi camera). This pattern can be tricky for devices on a home network which may need to use NAT or TCP tunneling in order to traverse a firewall. It also more directly exposes the device to security threats from the Internet.

Gateway Integration Pattern

The gateway integration pattern is useful for resource-constrained devices which can’t run an HTTP server themselves and so use a gateway to bridge them to the web. This pattern is particularly useful for devices which have limited power or which use PAN network technologies like Bluetooth or ZigBee that don’t directly connect to the Internet (e.g. a battery powered door sensor). A gateway can also be used to bridge all kinds of existing IoT devices to the web.

Cloud Integration Pattern

In the cloud integration pattern the Web of Things API is exposed by a cloud server which acts as a remote gateway, and the device uses some other protocol to communicate with that server on the back end. This pattern is particularly useful for a large number of devices spread over a wide geographic area which need to be centrally co-ordinated (e.g. air pollution sensors).

Project Things by Mozilla

In the Emerging Technologies team at Mozilla we’re working on an experimental framework of software and services to help developers connect “things” to the web in a safe, secure and interoperable way.

Things Framework diagram

Project Things will initially focus on developing three components:

  • Things Gateway — An open source implementation of a Web of Things gateway which helps bridge existing IoT devices to the web
  • Things Cloud — A collection of Mozilla-hosted cloud services to help manage a large number of IoT devices over a wide geographic area
  • Things Framework — Reusable software components to help create IoT devices which directly connect to the Web of Things
Things Gateway

Today we’re announcing the availability of a prototype of the first component of this system, the Things Gateway. We’ve made available a software image you can use to build your own Web of Things gateway using a Raspberry Pi.

Things Gateway diagram

So far this early prototype has the following features:

  • Easily discover the gateway on your local network
  • Choose a web address which connects your home to the Internet via a secure TLS tunnel requiring zero configuration on your home network
  • Create a username and password to authorize access to your gateway
  • Discover and connect commercially available ZigBee and Z-Wave smart plugs to the gateway
  • Turn those smart plugs on and off from a web app hosted on the gateway itself

We’re releasing this prototype very early on in its development so that hackers and makers can get their hands on the source code to build their own Web of Things gateway and contribute to the project from an early stage.

This initial prototype is implemented in JavaScript with a NodeJS web server, but we are exploring an adapter add-on system to allow developers to build their own Web of Things adapters using other programming languages like Rust in the future.

Web Thing API

Our goal in building this IoT framework is to lead by example in creating a Web of Things implementation which embodies Mozilla’s values and helps drive IoT standards around security, privacy and interoperability. The intention is not just to create a Mozilla IoT platform but an open source implementation of a Web of Things API which anyone is free to implement themselves using the programming language and operating system of their choice.

To this end, we have started working on a draft Web Thing API specification to eventually propose for standardization. This includes a simple but extensible Web Thing Description format with a default JSON encoding, and a REST + WebSockets Web Thing API. We hope this pragmatic approach will appeal to web developers and help turn them into WoT developers who can help realize our vision of a decentralized Internet of Things.
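As a purely illustrative sketch of what a JSON-encoded thing description for a simple lamp might look like (the draft spec was still evolving at the time, so the exact field names here are assumptions, not the final format):

```json
{
  "name": "On/Off Lamp",
  "type": "onOffSwitch",
  "description": "A web-connected lamp",
  "properties": {
    "on": {
      "type": "boolean",
      "description": "Whether the lamp is switched on",
      "href": "/things/lamp/properties/on"
    }
  }
}
```

The REST API would then expose each property at its href, e.g. a GET on /things/lamp/properties/on returning the current boolean state.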

We encourage developers to experiment with using this draft API in real life use cases and provide feedback on how well it works so that we can improve it.

Web Thing API spec - Member Submission

Get Involved

There are many ways you can contribute to this effort, some of which are:

  • Build a Web Thing — build your own IoT device which uses the Web Thing API
  • Create an adapter — Create an adapter to bridge an existing IoT protocol or device to the web
  • Hack on Project Things — Help us develop Mozilla’s Web of Things implementation

You can find out more at and all of our source code is on GitHub. You can find us in #iot on or on our public mailing list.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Analysis of the Alexa Top 1M sites

Mozilla planet - wo, 28/06/2017 - 18:47

Prior to the release of the Mozilla Observatory a year ago, I ran a scan of the Alexa Top 1M websites. Despite these technologies having been available for years, their usage rates were frustratingly low. A lack of tooling, combined with poor and scattered documentation, had led to little awareness of countermeasures such as Content Security Policy (CSP), HTTP Strict Transport Security (HSTS), and Subresource Integrity (SRI).
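For reference, the first two of these countermeasures are delivered as HTTP response headers, for example (illustrative values):

```http
Content-Security-Policy: default-src 'self'
Strict-Transport-Security: max-age=15768000; includeSubDomains
```

SRI, by contrast, is expressed in markup, as an integrity attribute carrying a cryptographic hash on script and link elements.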

A few months after the Observatory’s release — and 1.5M Observatory scans later — I reassessed the Top 1M websites. The situation appeared as if it was beginning to improve, with the use of HSTS and CSP up by approximately 50%. But were those improvements simply low-hanging fruit, or has the situation continued to improve over the following months?

| Technology | April 2016 | October 2016 | June 2017 | % Change |
|---|---|---|---|---|
| Content Security Policy (CSP) | .005%[1] / .012%[2] | .008%[1] / .021%[2] | .018%[1] / .043%[2] | +125% |
| Cookies (Secure/HttpOnly)[3] | 3.76% | 4.88% | 6.50% | +33% |
| Cross-origin Resource Sharing (CORS)[4] | 93.78% | 96.21% | 96.55% | +.4% |
| HTTPS | 29.64% | 33.57% | 45.80% | +36% |
| HTTP → HTTPS Redirection | 5.06%[5] / 8.91%[6] | 7.94%[5] / 13.29%[6] | 14.38%[5] / 22.88%[6] | +57% |
| Public Key Pinning (HPKP) | 0.43% | 0.50% | 0.71% | +42% |
| — HPKP Preloaded[7] | 0.41% | 0.47% | 0.43% | -9% |
| Strict Transport Security (HSTS)[8] | 1.75% | 2.59% | 4.37% | +69% |
| — HSTS Preloaded[7] | .158% | .231% | .337% | +46% |
| Subresource Integrity (SRI) | 0.015%[9] | 0.052%[10] | 0.113%[10] | +117% |
| X-Content-Type-Options (XCTO) | 6.19% | 7.22% | 9.41% | +30% |
| X-Frame-Options (XFO)[11] | 6.83% | 8.78% | 10.98% | +25% |
| X-XSS-Protection (XXSSP)[12] | 5.03% | 6.33% | 8.12% | +28% |

The pace of improvement across the web appears to be continuing at an astounding rate. Although a 36% increase in the number of sites that support HTTPS might seem small, the absolute numbers are quite large — it represents over 119,000 websites.

Not only that, but 93,000 of those websites have chosen to be HTTPS by default, with 18,000 of them forbidding any HTTP access at all through the use of HTTP Strict Transport Security.

The sharp jump in the rate of Content Security Policy (CSP) usage is similarly surprising. It can be difficult to implement for a new website, and often requires extensive rearchitecting to retrofit to an existing site, as most of the Alexa Top 1M sites are. Between steadily improving documentation, advances in CSP3 such as ‘strict-dynamic’, and CSP policy generators such as the Mozilla Laboratory, it appears that we might be turning a corner on CSP usage around the web.

Observatory Grading

Despite this progress, the vast majority of large websites around the web continue to not use Content Security Policy and Subresource Integrity. As these technologies — when properly used — can nearly eliminate huge classes of attacks against sites and their users, they are given a significant amount of weight in Observatory scans.

As a result of their low usage rates amongst established websites, they typically receive failing grades from the Observatory. Nevertheless, I continue to see improvements across the board:

| Grade | April 2016 | October 2016 | June 2017 | % Change |
|---|---|---|---|---|
| A+ | .003% | .008% | .013% | +62% |
| A | .006% | .012% | .029% | +142% |
| B | .202% | .347% | .622% | +79% |
| C | .321% | .727% | 1.38% | +90% |
| D | 1.87% | 2.82% | 4.51% | +60% |
| F | 97.60% | 96.09% | 93.45% | -2.8% |

As 969,924 scans were successfully completed in the last survey, a decrease in failing grades by 2.8% implies that over 27,000 of the largest sites in the world have improved from a failing grade in the last eight months alone.

In fact, my research indicates that over 50,000 websites around the web have directly used the Mozilla Observatory to improve their grades, indicated by scanning their website, making an improvement, and then scanning their website again. Of these 50,000 websites, over 2,500 have improved all the way from a failing grade to an A or A+ grade.

When I first built the Observatory a year ago at Mozilla, I never imagined that it would see such widespread use. 3.8M scans across 1.55M unique domains later, it seems to have made a significant difference across the internet. I feel incredibly lucky to work at a company like Mozilla that has given me the unique opportunity to work on a tool designed solely to make the internet a better place.

Please share the Mozilla Observatory and the Web Security Guidelines so that the web can continue to see improvements over the years to come!



  1. Allows 'unsafe-inline' in neither script-src nor style-src
  2. Allows 'unsafe-inline' in style-src only
  3. Amongst sites that set cookies
  4. Disallows foreign origins from reading the domain’s contents within user’s context
  5. Redirects from HTTP to HTTPS on the same domain, which allows HSTS to be set
  6. Redirects from HTTP to HTTPS, regardless of the final domain
  7. As listed in the Chromium preload list
  8. max-age set to at least six months
  9. Percentage is of sites that load scripts from a foreign origin
  10. Percentage is of sites that load scripts
  11. CSP frame-ancestors directive is allowed in lieu of an XFO header
  12. Strong CSP policy forbidding 'unsafe-inline' is allowed in lieu of an XXSSP header

The post Analysis of the Alexa Top 1M sites appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Important Platform Update

Mozilla planet - wo, 28/06/2017 - 15:47

Hello, SUMO Mozillians!

We have an important update regarding our site to share with you, so grab something cold/hot to drink (depending on your climate), sit down, and give us your attention for the next few minutes.

As you know, we have been hard at work for quite some time now migrating the site over to a new platform. You were a part of the process from day one (since we knew we needed to find a replacement for Kitsune) and we would like to once more thank you for your participation throughout that challenging and demanding period. Many of you have given us feedback or lent a hand with testing, checking, cleaning up, and generally supporting our small team before, during, and after the migration.

Over time and due to technical difficulties beyond our team’s direct control, we decided to ‘roll back’ to Kitsune to better support the upcoming releases of Firefox and related Mozilla products.

The date of ‘rolling forward’ to Lithium was to be decided based on the outcome of leadership negotiations of contract terms and the solving of technical issues (such as redirects, content display, and localization flows, for example) by teams from both sides working together.

In the meantime, we have been using Kitsune to serve content to users and provide forum support.

We would like to inform you that a decision has been made on Mozilla’s side to keep using Kitsune for the foreseeable future. Our team will investigate alternative options to improve and update Mozilla’s support for our users and ways to empower your contributions in that area.

What are the reasons behind this decision?

  1. Technical challenges in shaping Lithium’s platform to meet all of Mozilla’s user support needs.
  2. The contributor community’s feedback and requirements for contributing comfortably.
  3. The upcoming major releases for Firefox (and related products) requiring a smooth and uninterrupted user experience while accessing support resources.

What are the immediate implications of this decision?

  1. Mozilla will not be proceeding with a full ‘roll forward’ of SUMO to Lithium at this time. All open Lithium-related Bugzilla requests will be re-evaluated and may be closed as part of our next sprint (after the San Francisco All Hands).
  2. SUMO is going to remain on Kitsune for both support forum and knowledge base needs for now. Social support will continue on Respond.
  3. The SUMO team is going to kick off a reevaluation process for Kitsune’s technical status and requirements with the help of Mozilla’s IT team. This will include evaluating options of using Kitsune in combination with other tools/platforms to provide support for our users and contribution opportunities for Mozillians.

If you have questions about this update or want to discuss it, please use our community forums.

We are, as always, relying on your time and effort in successfully supporting millions of Mozilla’s software users and fans around the world. Thank you for your ongoing participation in making the open web better!

Sincerely yours,

The SUMO team

P.S. Watch the video from the first day of the SFO All Hands if you want to see us discuss the above (and not only).


Categorieën: Mozilla-nl planet

Chris Lord: Goodbye Mozilla

Mozilla planet - wo, 28/06/2017 - 13:16

Today is effectively my last day at Mozilla, before I start at Impossible on Monday. I’ve been here for 6 years and a bit and it’s been quite an experience. I think it’s worth reflecting on, so here we go. Fair warning: if you have no interest in me or Mozilla, this is going to make pretty boring reading.

I started on June 6th 2011, several months before the (then new, since moved) London office opened. Although my skills lay (lie?) in user interface implementation, I was hired mainly for my graphics and systems knowledge. Mozilla had around 500 employees then, I think, and it was an interesting time. I’d been working on the code-base for several years prior at Intel, on a headless backend that we used to build a Clutter-based browser for Moblin netbooks. I wasn’t completely unfamiliar with the code-base, but it still took a long time to get to grips with. We’re talking several million lines of code with several years of legacy, in a language I still consider myself to be pretty novice at (C++).

I started on the mobile platform team, and I would consider this to be my most enjoyable time at the company. The mobile platform team was a multi-discipline team that did general low-level platform work for the mobile (Android and Meego) browser. When we started, the browser was based on XUL and was multi-process. Mobile was often the breeding ground for new technologies that would later go on to desktop. It wasn’t long before we started developing a new browser based on a native Android UI, removing XUL and relegating Gecko to page rendering. At the time this felt like a disappointing move. The reason the XUL-based browser wasn’t quite satisfactory was mainly due to performance issues, and as a platform guy, I wanted to see those issues fixed rather than worked around. In retrospect, this was absolutely the right decision and led to what I’d still consider to be one of Android’s best browsers.

Despite performance issues being one of the major driving forces for making this move, we did a lot of platform work at the time too. As well as being multi-process, the XUL browser had a compositor system for rendering the page, but this wasn’t easily portable. We ended up rewriting this, first almost entirely in Java (which was interesting), then with the rendering part of the compositor in native code. The input handling remained in Java for several years (pretty much until FirefoxOS, where we rewrote that part in native code, then later, switched Android over).

Most of my work during this period was based around improving performance (both perceived and real) and fluidity of the browser. Benoit Girard had written an excellent tiled rendering framework that I polished and got working with mobile. On top of that, I worked on progressive rendering and low precision rendering, which combined are probably the largest body of original work I’ve contributed to the Mozilla code-base. Neither of them are really active in the code-base at the moment, which shows how good a job I didn’t do maintaining them, I suppose.

Although most of my work was graphics-focused on the platform team, I also got to do some layout work. I worked on some over-invalidation issues before Matt Woodrow’s DLBI work landed (which nullified that, but I think that work existed in at least one release). I also worked a lot on keeping fixed position elements fixed to the correct positions during scrolling and zooming, another piece of work I was quite proud of (and probably my second-biggest contribution). There was also the opportunity for some UI work, when it intersected with platform. I implemented Firefox for Android’s dynamic toolbar, and made sure it interacted well with fixed position elements (some of this work has unfortunately been undone with the move from the partially Java-based input manager to the native one). During this period, I was also regularly attending and presenting at FOSDEM.

I would consider my time on the mobile platform team a pretty happy and productive time. Unfortunately for me, those of us with graphics specialities on the mobile platform team were taken off that team and put on the graphics team. I think this was the start of a steady decline in my engagement with the company. At the time this move was made, Mozilla was apparently trying to consolidate teams around products, and this was the exact opposite happening. The move was never really explained to me and I know I wasn’t the only one that wasn’t happy about it. The graphics team was very different to the mobile platform team and I didn’t feel I fitted in as well. It felt more boisterous and less democratic than the mobile platform team, and as someone that generally shies away from arguments and just wants to get work done, it was hard not to feel slightly sidelined. I was also quite disappointed that people didn’t seem particularly familiar with the graphics work I had already been doing and that I was tasked, at least initially, with working on some very different (and very boring) desktop Linux work, rather than my speciality of mobile.

I think my time on the graphics team was pretty unproductive, with the exception of the work I did on b2g, improving tiled rendering and getting graphics memory-mapped tiles working. This was particularly hard as the interface was basically undocumented, and its implementation details could vary wildly depending on the graphics driver. Though I made a huge contribution to this work, you won’t see me credited in the tree unfortunately. I’m still a little bit sore about that. It wasn’t long after this that I requested to move to the FirefoxOS systems front-end team. I’d been doing some work there already and I’d long wanted to go back to doing UI. It felt like I either needed a dramatic change or I needed to leave. I’m glad I didn’t leave at this point.

Working on FirefoxOS was a blast. We had lots of new, very talented people, a clear and worthwhile mission, and a new code-base to work with. I worked mainly on the home-screen, first with performance improvements, then with added features (app-grouping being the major one), then with a hugely controversial and probably mismanaged (on my part, not my manager – who was excellent) rewrite. The rewrite was good and fixed many of the performance problems of what it was replacing, but unfortunately also removed features, at least initially. Turns out people really liked the app-grouping feature.

I really enjoyed my time working on FirefoxOS, and getting a nice clean break from platform work, but it was always bitter-sweet. Everyone working on the project was very enthusiastic to see it through and do a good job, but it never felt like upper management’s focus was in the correct place. We spent far too much time kowtowing to the desires of phone carriers and trying to copy Android and not nearly enough time on basic features and polish. Up until around v2.0 and maybe even 2.2, the experience of using FirefoxOS was very rough. Unfortunately, as soon as it started to show some promise and as soon as we had freedom from carriers to actually do what we set out to do in the first place, the project was cancelled, in favour of the whole Connected Devices IoT debacle.

If there was anything that killed morale for me more than my unfortunate time on the graphics team, and more than having FirefoxOS prematurely cancelled, it would have to be the Connected Devices experience. I appreciate it as an opportunity to work on random semi-interesting things for a year or so, and to get some entrepreneurship training, but the mismanagement of that whole situation was pretty epic. To take a group of hundreds of UI-focused engineers and tell them that, with very little help, they should organise themselves into small teams and create IoT products still strikes me as an idea so crazy that it definitely won’t work. Certainly not the way we did it anyway. The idea, I think, was that we’d be running several internal start-ups and we’d hopefully get some marketable products out of it. What business a not-for-profit company, based primarily on doing open-source, web-based engineering, has making physical, commercial products is questionable, but it failed long before that could be considered.

The process involved coming up with an idea, presenting it and getting approval to run with it. You would then repeat this approval process at various stages during development. It was, however, very hard to get approval for enough resources (both time and people) to finesse an idea long enough to make it obviously a good or bad idea. That aside, I found it very demoralising to not have the opportunity to write code that people could use. I did manage it a few times, in spite of what was happening, but none of this work I would consider myself particularly proud of. Lots of very talented people left during this period, and then at the end of it, everyone else was laid off. Not a good time.

Luckily for me and the team I was on, we were moved under the umbrella of Emerging Technologies before the lay-offs happened, and this also allowed us to refocus away from trying to make an under-featured and pointless shopping-list assistant and back onto the underlying speech-recognition technology. This brings us almost to present day now.

The DeepSpeech speech recognition project is an extremely worthwhile project, with a clear mission, great promise and interesting underlying technology. So why would I leave? Well, I’ve practically ended up on this team by a series of accidents and random happenstance. It’s been very interesting so far, I’ve learnt a lot and I think I’ve made a reasonable contribution to the code-base. I also rewrote python_speech_features in C for a pretty large performance boost, which I’m pretty pleased with. But at the end of the day, it doesn’t feel like this team will miss me. I too often spend my time finding work to do, and to be honest, I’m just not interested enough in the subject matter to make that work long-term. Most of my time on this project has been spent pushing to open it up and make it more transparent to people outside of the company. I’ve added model exporting, better default behaviour, a client library, a native client, Python bindings (+ example client) and most recently, Node.js bindings (+ example client). We’re starting to get noticed and starting to get external contributions, but I worry that we still aren’t transparent enough and still aren’t truly treating this as the open-source project it is and should be. I hope the team can push further towards this direction without me. I think it’ll be one to watch.

Next week, I start working at a new job doing a new thing. It’s odd to say goodbye to Mozilla after 6 years. It’s not easy, but many of my peers and colleagues have already made the jump, so it feels like the right time. One of the big reasons I’m moving, and moving to Impossible specifically, is that I want to get back to doing impressive work again. This is the largest regret I have about my time at Mozilla. I used to blog regularly when I worked at OpenedHand and Intel, because I was excited about the work we were doing and I thought it was impressive. This wasn’t just youthful exuberance (he says, realising how ridiculous that sounds at 32), I still consider much of the work we did to be impressive, even now. I want to be doing things like that again, and it feels like Impossible is a great opportunity to make that happen. Wish me luck!


Daniel Pocock: How did the world ever work without Facebook?

Mozilla planet - di, 27/06/2017 - 21:29

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, that they would die of hunger and that the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common: but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchair. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their home.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grass roots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 people bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember, and that may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10 and 20 friendships are in this category and that you should spend all your time with these people rather than letting your time be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social and Diaspora are two of the better-known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.


Tarek Ziadé: Advanced Molotov example

Mozilla planet - vr, 23/06/2017 - 00:00

Last week, I blogged about how to drive Firefox from a Molotov script using Arsenic.

It is pretty straightforward if you are doing some isolated interactions with Firefox and if each worker in Molotov lives its own life.

However, if you need to have several "users" (==workers in Molotov) running in a coordinated way on the same web page, it gets a little bit tricky.

Each worker is its own coroutine and triggers the execution of one scenario by calling the coroutine that was decorated with @scenario.

Let's consider this simple use case: we want to run five workers in parallel that all visit the same etherpad lite page with their own Firefox instance through Arsenic.

One of them is adding some content in the pad and all the others are waiting on the page to check that it is updated with that content.

So we want four workers to wait on a condition (= pad written) before they check that they can see the new content.

Moreover, since Molotov can call a scenario many times in a row, we need to make sure that everything was done in the previous round before changing the pad content again. That is, four workers did check the content of the pad.

To do all that synchronization, Python's asyncio offers primitives that are similar to the one you would use with threads. asyncio.Event can be used for instance to have readers waiting for the writer and vice-versa.
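For readers unfamiliar with asyncio.Event, here is a minimal standalone sketch (independent of Molotov and Arsenic; all names are illustrative) of one writer unblocking several readers:

```python
import asyncio

async def writer(event):
    # Simulate doing some work, then signal all waiting readers.
    await asyncio.sleep(0)
    event.set()

async def reader(event, results):
    # Block until the writer has called event.set().
    await event.wait()
    results.append("read")

async def main():
    event = asyncio.Event()
    results = []
    await asyncio.gather(writer(event),
                         *(reader(event, results) for _ in range(4)))
    return results

print(asyncio.run(main()))  # ['read', 'read', 'read', 'read']
```

Once set() is called, every current and future wait() returns immediately, which is the behaviour the syncing class in this post builds on.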

In the example below, a class wraps two Events and exposes simple methods to do the syncing by making sure readers and writer are waiting for each other:

    class Notifier(object):
        def __init__(self, readers=5):
            self._current = 1
            self._until = readers
            self._readers = asyncio.Event()
            self._writer = asyncio.Event()

        def _is_set(self):
            return self._current == self._until

        async def wait_for_writer(self):
            await self._writer.wait()

        async def one_read(self):
            if self._is_set():
                return
            self._current += 1
            if self._current == self._until:
                self._readers.set()

        def written(self):
            self._writer.set()

        async def wait_for_readers(self):
            await self._readers.wait()

Using this class, the writer can call written() once it has filled the pad and the readers can wait for that event by calling wait_for_writer() which blocks until the write event is set.

one_read() is then called for each read. This second event is used by the next writer to make sure it can change the pad content after every reader did read it.
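Outside of Molotov, the Notifier can be exercised directly with plain asyncio. Below is a self-contained sketch (the class body is repeated so the example runs on its own; the pad is stood in for by a plain list, and the coroutine names are illustrative):

```python
import asyncio

class Notifier(object):
    """Synchronises one writer with several readers via two asyncio.Events."""
    def __init__(self, readers=5):
        self._current = 1
        self._until = readers
        self._readers = asyncio.Event()
        self._writer = asyncio.Event()

    def _is_set(self):
        return self._current == self._until

    async def wait_for_writer(self):
        await self._writer.wait()

    async def one_read(self):
        if self._is_set():
            return
        self._current += 1
        if self._current == self._until:
            self._readers.set()

    def written(self):
        self._writer.set()

    async def wait_for_readers(self):
        await self._readers.wait()

pad = []  # stands in for the etherpad content

async def write_pad(notifier):
    pad.append("hello")   # pretend we edited the pad
    notifier.written()    # unblock the readers

async def read_pad(notifier, seen):
    await notifier.wait_for_writer()
    seen.append(pad[-1])  # every reader sees the write
    await notifier.one_read()

async def main():
    notifier = Notifier(readers=5)  # 1 writer + 4 readers
    seen = []
    await asyncio.gather(write_pad(notifier),
                         *(read_pad(notifier, seen) for _ in range(4)))
    await notifier.wait_for_readers()  # returns: all reads are done
    return seen

print(asyncio.run(main()))  # ['hello', 'hello', 'hello', 'hello']
```

The same two-event handshake is what the Molotov scenario relies on; only the plumbing (how workers find the shared Notifier) differs.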

So how do we use this class in a Molotov test? There are several options and the simplest one is to create one Notifier instance per run and set it in a variable:

    @molotov.scenario(1)
    async def example(session):
        get_var = molotov.get_var
        notifier = get_var('notifier' + str(session.step), factory=Notifier)
        wid = session.worker_id

        if wid != 4:
            # I am NOT worker 4! I read the pad.
            # Wait for worker #4 to edit the pad.
            await notifier.wait_for_writer()
            # <.. pad reading here...>
            # Notify that we've read it.
            await notifier.one_read()
        else:
            # I am worker 4! I write in the pad.
            if session.step > 1:
                # Wait for the previous readers to have finished
                # before we start a new round.
                previous_notifier = get_var('notifier' + str(session.step - 1))
                await previous_notifier.wait_for_readers()
            # <... writes in the pad...>
            # Inform the readers that the write task was done.
            notifier.written()

A lot is going on in this scenario. Let's look at each part in detail. First of all, the notifier is created as a var via get_var(), which instantiates it with the Notifier factory the first time it is requested. Its name contains the session step.

The step value is incremented by Molotov every time a worker is running a scenario, and we can use that value to create one distinct Notifier instance per run. It starts at 1.

Next, the session.worker_id value gives each distinct worker a unique id. If you run molotov with 5 workers, you will get values from 0 to 4.

We are making the last worker (worker id== 4) the one that will be in charge of writing in the pad.

For the other workers (=readers), they just use wait_for_writer() to sit and wait for worker 4 to write the pad. worker 4 notifies them with a call to written().

The last part of the script allows Molotov to run the script several times in a row using the same workers. When the writer starts its work, if the step value is greater than one, it means that we have already run the test at least once.

The writer, in that case, gets back the Notifier from the previous run and verifies that all the readers did their job before changing the pad.

All of this syncing work sounds complicated, but once you understand the pattern, it lets you run advanced scenarios in Molotov where several concurrent "users" need to collaborate.

You can find the full script at


Firefox UX: Let‘s tackle the same challenge again, and again.

Mozilla planet - do, 22/06/2017 - 21:12
Actually, let’s not!

The products we build get more design attention as our Firefox UX team has grown from about 15 to 45 people. Designers can now continue to focus on their product after the initial design is finished, instead of having to move to the next project. This is great as it helps us improve our products step by step. But it also means increasing effort to keep this growing team in sync and able to answer all the questions posed to us in a timely manner.

Scaling communication from small to big teams leads to massive effort for a few.

Especially for engineers and new designers, it is often difficult to get timely answers to simple questions. Those answers are often in the original spec, which too often is hard to locate. Or worse, they may be in the mind of the designer, who may have left, or who receives too many questions to respond to in time.

In a survey we ran in early 2017, developers reported feeling that they

  • spend too much time identifying the right specs to build from,
  • spend too much time waiting for feedback from designers, and
  • spend too much time mapping new designs to existing UI elements.

In the same survey, designers reported feeling that they

  • spend too much time identifying current UI to re-use in their designs, and
  • spend too much time re-building current UI to use in their designs.

All those repetitive tasks people feel they spend too much time on ultimately keep us from tackling newer and bigger challenges. ‒ So, actually, let‘s not spend our time on those.

Let’s help people spend time on what they love to do.

Shifting some communication to a central tool can reduce load on people and lower the barrier for entry.

Let’s build tools that help developers know what a given UI should look like, without them needing to wait for feedback from designers. And let’s use that system for designers to identify UI we already built, and to learn how they can re-use it.

We call this the Photon Design System,
and its first beta version is ready to be used:

We are happy to receive feedback and contributions on the current content of the system, as well as on what content to add next.

Photon Design System

Based on what we learned from people, we are building our design system to help people:

  • find what they are looking for easily,
  • understand the context of that quickly, and
  • more deeply understand Firefox Design.

Currently the Photon Design System covers fundamental design elements like icons, colors, typography and copy-writing as well as our design principles and guidelines on how to design for scale. Defining those already helped designers better align across products and features, and developers have a definitive source to fall back on when a design does not specify a color, icon or other detail.


With all the design fundamentals in place we are starting to combine them into defined components that can easily be reused to create consistent Firefox UI across all platforms, from mobile to desktop, and from web-based to native. This will add value for people working on Firefox products, as well as help people working on extensions for Firefox.

If you are working on Firefox UI

We would love to learn from you what principles, patterns & components your team’s work touches, and what you feel is worth documenting for others to learn from, and use in their UI.

Share your principle/pattern/component with us!

And if you haven’t yet, ask yourself where you could use what’s already documented in the Photon Design System and help us find more and more synergies across our products to utilize.

If you are working on a Firefox extension

We would love to learn about where you would have wanted design support when building your extension, and when you had to spend more time on design than you intended to.

Share with us!

Let‘s tackle the same challenge again, and again. was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mozilla Open Design Blog: MDN’s new design is in Beta

Mozilla planet - do, 22/06/2017 - 19:46

Change is coming to MDN. In a recent post, we talked about updates to the MDN brand, and this time we want to focus on the upcoming design changes for MDN. MDN started as a repository for all Mozilla documentation, but today MDN’s mission is to provide developers with the information they need to build things on the open Web. We want to more clearly represent that mission in the naming and branding of MDN.

New MDN logo

MDN’s switch to new branding reflects an update of Mozilla’s overall brand identity, and we are taking this opportunity to update MDN’s visual design to match Mozilla’s design language and clean new look. For MDN that means bold typography that highlights the structure of the page, more contrast, and a reduction to the essentials. Color in particular is more sparingly used, so that the code highlighting stands out.

Here’s what you can expect from the first phase:

screenshot of new MDN design

New MDN design

The core idea behind MDN’s brand identity change is that MDN is a resource for web developers. We realize that MDN is a critical resource for many web developers and we want to make sure that this update is an upgrade for all users. Instead of one big update, we will make incremental changes to the design in several phases. For the initial launch, we will focus on applying the design language to the header, footer and typography. The second phase will see changes to landing pages such as the web platform, learning area, and MDN start page. The last part of the redesign will cover the article pages themselves, and prepare us for any functional changes we’ve got coming in the future.

Today, we are launching the first phase of the redesign to our beta users. Over the next few weeks we’ll collect feedback, and fix potential issues before releasing it to all MDN users in July. Become a beta tester on MDN and be among the first to see these updates, track the progress, and provide us with feedback to make the whole thing even better for the official launch.

The post MDN’s new design is in Beta appeared first on Mozilla Open Design.
