mozilla

Mozilla Nederland
The Dutch Mozilla community

Daniel Pocock: Free and open WebRTC for the Fedora Community

Mozilla planet - Mon, 18/05/2015 - 19:48

In January 2014, we launched the rtc.debian.org service for the Debian community. An equivalent service has been in testing for the Fedora community at FedRTC.org.

Some key points about the Fedora service:

  • The web front-end is just HTML, CSS and JavaScript. PHP is only used for account creation; the actual WebRTC experience requires no server-side web framework, just a SIP proxy.
  • The web code is all available in a GitHub repository so people can extend it.
  • Anybody who can authenticate against the FedOAuth OpenID is able to get a fedrtc.org test account immediately.
  • The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.
Testing it with WebRTC

Create an RTC password and then log in. Other users can call you. It is federated, so people can also call from rtc.debian.org or from freephonebox.net.

Testing it with other SIP softphones

You can use the RTC password to connect to the SIP proxy from many softphones, including Jitsi or Lumicall on Android.

Copy it

The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.

Discuss it

The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.

WebRTC opportunities expanding

Just this week, the first batch of Firefox OS televisions is hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.


Mozilla Firefox vs Google Chrome – 2015 Edition - Gun Shy Assassin

News collected via Google - Mon, 18/05/2015 - 18:45

Mozilla Firefox vs Google Chrome – 2015 Edition
Gun Shy Assassin
The browser battle is on and the biggest players are Firefox and Chrome. These two rivals however are not equals. Firefox was released in 2002 while Chrome was launched in 2008. It seems that Mozilla has a lot more experience when it comes to surfing ...
Google Working On A RAM Consumption Fix For Chrome Browser - WCCFtech
Back to the Basics: Surfing the Internet - AboutMyArea
Google Chrome Free Download Browser Blocks All Extensions Not Listed in ... - Ordoh

all 18 news articles »

Mozilla Reps Community: New council members – Spring 2015

Mozilla planet - Mon, 18/05/2015 - 18:27

We are happy to announce that three new members of the Council have been elected.

Welcome Michael, Shahid and Christos! They bring with them skills they have picked up as Reps mentors and as community leaders, both inside Mozilla and in other fields. A HUGE thank you to the outgoing council members – Arturo, Emma and Raj. We hope you will continue to use your talents and experience in a leadership role in Reps and Mozilla.

The new members will be onboarded gradually over the following 3 weeks.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the ReMo wiki.

Congratulate new Council members on this Discourse topic!


Air Mozilla: Firefox OS Tricoder

Mozilla planet - Mon, 18/05/2015 - 16:11

Firefox OS Tricoder: reading device sensor data in JavaScript.


Mozilla-backed language is ready for use - digi.no

News collected via Google - Mon, 18/05/2015 - 14:12

Mozilla-backed language is ready for use
digi.no
Attempts to read or write more data than a memory block can hold, or to read or write data from or to a memory block that has already been freed, are typical examples of the errors that can occur when developers write ...

Google News

Tim Taubert: Implementing a PBKDF2-based Password Storage Scheme for Firefox OS

Mozilla planet - Mon, 18/05/2015 - 14:06

My esteemed colleague Frederik Braun recently took on the task of rewriting the module responsible for storing and checking passcodes that unlock Firefox OS phones. While we are still working on actually landing it in Gaia, I wanted to seize the chance to talk about this great use case of the WebCrypto API in the wild and highlight a few important points when using password-based key derivation (PBKDF2) to store passwords.

The Passcode Module

Let us take a closer look, not at the verbatim implementation, but at a slightly simplified version. The API offers only the two operations such a module needs to support: setting a new passcode and verifying that a given passcode matches the stored one.

let Passcode = {
  store(code) {
    // ...
  },

  verify(code) {
    // ...
  }
};

When setting up the phone for the first time - or when changing the passcode later - we call Passcode.store() to write a new code to disk. Passcode.verify() will help us determine whether we should unlock the phone. Both methods return a Promise as all operations exposed by the WebCrypto API are asynchronous.

Passcode.store("1234").then(() => { return Passcode.verify("1234"); }).then(valid => { console.log(valid); }); // Output: true Make the passcode look “random”

The module should absolutely not store passcodes in the clear. We will use PBKDF2 as a pseudorandom function (PRF) to retrieve a result that looks random. An attacker with read access to the part of the disk storing the user’s passcode should not be able to recover the original input, assuming limited computational resources.

The function deriveBits() is a PRF that takes a passcode and returns a Promise resolving to a random-looking sequence of bytes. To be a little more specific, it uses PBKDF2 to derive pseudorandom bits.

function deriveBits(code) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Salt should be at least 64 bits.
    let salt = crypto.getRandomValues(new Uint8Array(8));

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Choosing PBKDF2 parameters

As you can see above, PBKDF2 takes a whole bunch of parameters. Choosing good values is crucial for the security of our passcode module, so it is best to take a detailed look at every single one of them.

Select a cryptographic hash function

PBKDF2 is a big PRF that iterates a small PRF. The small PRF, iterated multiple times (more on why this is done later), is fixed to be an HMAC construction; you are however allowed to specify the cryptographic hash function used inside HMAC itself. To understand why you need to select a hash function it helps to take a look at HMAC’s definition, here with SHA-1 at its core:

HMAC-SHA-1(k, m) = SHA-1((k ⊕ opad) + SHA-1((k ⊕ ipad) + m))

The outer and inner padding opad and ipad are static values that can be ignored for our purpose; the important takeaway is that the given hash function will be called twice, combining the message m and the key k. Whereas HMAC is usually used for authentication, PBKDF2 makes use of its PRF properties, meaning its output is computationally indistinguishable from random.

deriveBits() as defined above uses SHA-1 as well, and although it is considered broken as a collision-resistant hash function it is still a safe building block in the HMAC-SHA-1 construction. HMAC only relies on a hash function’s PRF properties, and while finding SHA-1 collisions is considered feasible it is still believed to be a secure PRF.

That said, it would not hurt to switch to a secure cryptographic hash function like SHA-256. Chrome supports other hash functions for PBKDF2 today; Firefox unfortunately has to wait for an NSS fix before those can be unlocked for the WebCrypto API.

Pass a random salt

The salt is a random component that PBKDF2 feeds into the HMAC function along with the passcode. This prevents an attacker from simply computing the hashes of, for example, all 8-character combinations of alphanumerics (~5.4 petabytes of storage for SHA-1) and using a huge lookup table to quickly reverse a given password hash. Specify 8 random bytes as the salt and the poor attacker suddenly has to compute (and store!) 2^64 of those lookup tables and face 8 additional random characters in the input. Even without the salt, the effort to create even one lookup table would be hard to justify, because chances are high you cannot reuse it to attack another target; they might be using a different hash function or combine two or more of them.

The same goes for Rainbow Tables. A random salt included with the password would have to be incorporated when precomputing the hash chains, and the attacker is back to square one where she has to compute a Rainbow Table for every possible salt value. That certainly works ad hoc for a single salt value, but preparing and storing 2^64 of those tables is impossible.

The salt is public and will be stored in the clear along with the derived bits. We need the exact same salt to arrive at the exact same derived bits later again. We thus have to modify deriveBits() to accept the salt as an argument so that we can either generate a random one or read it from disk.

function deriveBits(code, salt) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Keep in mind though that Rainbow Tables are mainly a thing of the past, from a time when password hashes were smaller and shittier. Salts are the bare minimum a good password storage scheme needs, but they merely protect against a threat that is largely irrelevant today.

Specify a number of iterations

As computers became faster and Rainbow Table attacks infeasible due to the prevalent use of salts everywhere, people started attacking password hashes with dictionaries: simply taking the public salt value and passing it, combined with an educated guess, to the hash function until a match was found. Modern password schemes thus employ a “work factor” to make hashing millions of password guesses unbearably slow.

By specifying a sufficiently high number of iterations we can slow down PBKDF2’s inner computation so that an attacker will have to face a massive performance decrease and be able to only try a few thousand passwords per second instead of millions.

For single-user disk or file encryption it might be acceptable if computing the password hash takes a few seconds; for a lock screen, 300-500 ms might be the upper limit to avoid interfering with the user experience. Take a look at this great StackExchange post for more advice on what might be the right number of iterations for your application and environment.
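To pick a concrete number for your own hardware you can simply measure. Here is a rough calibration sketch; it is my own illustration rather than part of the Gaia module, and calibrateIterations() and the 300 ms default are made-up names and values. It probes with a fixed iteration count and then scales linearly, since PBKDF2's cost grows linearly with the number of iterations.

// Hypothetical helper: time one derivation and scale the iteration
// count so that a derivation costs roughly `targetMs` milliseconds.
function calibrateIterations(code, targetMs = 300) {
  const PROBE_ITERATIONS = 5000;
  let salt = crypto.getRandomValues(new Uint8Array(8));
  let bytes = new TextEncoder("utf-8").encode(code);

  // Import the passcode as a PBKDF2 base key, as in deriveBits().
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    let params = {name: "PBKDF2", hash: "SHA-1", salt,
                  iterations: PROBE_ITERATIONS};
    let start = performance.now();

    return crypto.subtle.deriveBits(params, key, 160).then(() => {
      let elapsed = performance.now() - start;
      // PBKDF2 cost is linear in the iteration count, so scale the
      // probe count to hit the target duration.
      return Math.round(PROBE_ITERATIONS * targetMs / elapsed);
    });
  });
}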

A much more secure version of a lock screen would allow not only four digits but any number of characters. An additional delay of a few seconds after a small number of wrong guesses might increase security even more, assuming the attacker cannot access the PRF output stored on disk.

Determine the number of bits to derive

PBKDF2 can output an almost arbitrary amount of pseudo-random data. A single execution yields the number of bits that is equal to the chosen hash function’s output size. If the desired number of bits exceeds the hash function’s output size PBKDF2 will be repeatedly executed until enough bits have been derived.

function getHashOutputLength(hash) {
  switch (hash) {
    case "SHA-1":   return 160;
    case "SHA-256": return 256;
    case "SHA-384": return 384;
    case "SHA-512": return 512;
  }

  throw new Error("Unsupported hash function");
}

Choose 160 bits for SHA-1, 256 bits for SHA-256, and so on. Slowing down the key derivation even further by requiring more than one round of PBKDF2 will not increase the security of the password storage.

Do not hard-code parameters

Hard-coding PBKDF2 parameters - the name of the hash function to use in the HMAC construction, and the number of HMAC iterations - is tempting at first. We however need to be flexible if, for example, it turns out that SHA-1 can no longer be considered a secure PRF, or the number of iterations must be increased to keep up with faster hardware.

To ensure future code can verify old passwords we store the parameters that were passed to PBKDF2 at the time, including the salt. When verifying the passcode we will read the hash function name, the number of iterations, and the salt from disk and pass those to deriveBits() along with the passcode itself. The number of bits to derive will be the hash function’s output size.

function deriveBits(code, salt, hash, iterations) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Output length in bits for the given hash function.
    let hlen = getHashOutputLength(hash);

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash, salt, iterations};

    // Derive |hlen| bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, hlen);
  });
}

Storing a new passcode

Now that we are done implementing deriveBits(), the heart of the Passcode module, completing the API is basically a walk in the park. For the sake of simplicity we will use localforage as the storage backend. It provides a simple, asynchronous, and Promise-based key-value store.

// <script src="localforage.min.js"/>

const HASH = "SHA-1";
const ITERATIONS = 4096;

Passcode.store = function (code) {
  // Generate a new random salt for every new passcode.
  let salt = crypto.getRandomValues(new Uint8Array(8));

  return deriveBits(code, salt, HASH, ITERATIONS).then(bits => {
    return Promise.all([
      localforage.setItem("digest", bits),
      localforage.setItem("salt", salt),
      localforage.setItem("hash", HASH),
      localforage.setItem("iterations", ITERATIONS)
    ]);
  });
};

We generate a new random salt for every new passcode. The derived bits are stored along with the salt, the hash function name, and the number of iterations. HASH and ITERATIONS are constants that provide default values for our PBKDF2 parameters and can be updated whenever desired. The Promise returned by Passcode.store() will resolve when all values have been successfully stored in the backend.

Verifying a given passcode

To verify a passcode all values and parameters stored by Passcode.store() will have to be read from disk and passed to deriveBits(). Comparing the derived bits with the value stored on disk tells whether the passcode is valid.

Passcode.verify = function (code) {
  let loadValues = Promise.all([
    localforage.getItem("digest"),
    localforage.getItem("salt"),
    localforage.getItem("hash"),
    localforage.getItem("iterations")
  ]);

  return loadValues.then(([digest, salt, hash, iterations]) => {
    return deriveBits(code, salt, hash, iterations).then(bits => {
      return compare(bits, digest);
    });
  });
};

Should compare() be a constant-time operation?

compare() does not have to be constant-time. Even if the attacker learns the first byte of the final digest stored on disk, she cannot easily produce inputs to guess the second byte - the opposite would imply knowing the pre-images of all those two-byte values. She cannot do better than submitting simple guesses that become harder the more bytes are known. For a successful attack all bytes have to be recovered, which in turn means a valid pre-image for the full final digest needs to be found.

If it makes you feel any better, you can of course implement compare() as a constant-time operation. This might be tricky though, given that all modern JavaScript engines optimize code heavily.
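The code above calls compare() without defining it. A minimal sketch, assuming both arguments are the ArrayBuffers produced by deriveBits() and stored via localforage (my own illustration, not the Gaia implementation), could look like this; it is written in the constant-time style for good measure, even though, as argued above, an early-exit loop would do:

function compare(a, b) {
  let x = new Uint8Array(a);
  let y = new Uint8Array(b);

  // Digests of different lengths can never match.
  if (x.length !== y.length) {
    return false;
  }

  // Accumulate differences instead of returning early so the running
  // time does not depend on how many leading bytes match.
  let diff = 0;
  for (let i = 0; i < x.length; i++) {
    diff |= x[i] ^ y[i];
  }

  return diff === 0;
}

Whether a JIT actually preserves that timing behavior is, of course, exactly the caveat mentioned above.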

What about bcrypt or scrypt?

Both bcrypt and scrypt are probably better alternatives to PBKDF2. Bcrypt automatically embeds the salt and cost factor into its output; most APIs are clever enough to parse and use those parameters when verifying a given password.

Scrypt implementations can usually securely generate a random salt, so that is one less thing for you to worry about. The most important aspect of scrypt though is that it allows consuming a lot of memory when computing the password hash, which makes cracking passwords using ASICs or FPGAs close to impossible.

Unfortunately, the Web Cryptography API supports neither of the two algorithms, and currently there are no proposals to add them. In the case of scrypt, it might also be somewhat controversial to allow a website to consume arbitrary amounts of memory.


Mozilla releases the mysterious Firefox 38.0.5 beta. What's new? - Komputer Świat

News collected via Google - Mon, 18/05/2015 - 11:45

Mozilla releases the mysterious Firefox 38.0.5 beta. What's new?
Komputer Świat
At the beginning of last week Mozilla released the stable Firefox 38.0, which you can read more about here. Personally, after that release I started watching for Firefox 39.0 beta, which should appear on Mozilla's servers within a day or ...

Google News

Gregory Szorc: Firefox Mercurial Repository with CVS History

Mozilla planet - Mon, 18/05/2015 - 10:40

When Firefox made the switch from CVS to Mercurial in March 2007, the CVS history wasn't imported into Mercurial. There were good reasons for this at the time. But it's a decision that continues to have side-effects. I am surprised how often I hear of engineers wanting to access blame and commit info from commits now more than 9 years old!

When individuals created a Git mirror of the Firefox repository a few years ago, they correctly decided that importing CVS history would be a good idea. They also correctly decided to combine the logically same but physically separate release and integration repositories into a unified Git repository. These are things we can't easily do to the canonical Mercurial repository because it would break SHA-1 hashes, breaking many systems, and it would require significant changes in process, among other reasons.

While Firefox developers do have access to a single Firefox repository with full CVS history (the Git mirror), they still aren't satisfied.

Running git blame (or hg blame for that matter) can be very expensive. For this reason, the blame interface is disabled on many web-based source viewers by default. On GitHub, some blame URLs for the Firefox repository time out and cause GitHub to display an error message. No matter how hard you try, you can't easily get blame results (running a local Git HTTP/HTML interface is still difficult compared to hg serve).

Another reason developers aren't satisfied with the Git mirror is that Git's querying tools pale in comparison to Mercurial's. I've said it before and I'll say it again: Mercurial's revision sets and templates are incredibly useful features that enable advanced repository querying and reporting. Git's offerings come nowhere close. (I really wish Git would steal these awesome features from Mercurial.)

Anyway, enough people were complaining about the lack of a Mercurial Firefox repository with full CVS history that I decided to create one. If you point your browsers or Mercurial clients to https://hg.mozilla.org/users/gszorc_mozilla.com/gecko-full, you'll be able to access it.

The process used for the conversion was the simplest possible: I used hg-git to convert the Git mirror back to Mercurial.

Unlike the Git mirror, I didn't include all heads in this new repository. Instead, there is only mozilla-central's head (the current development tip). If I were doing this properly, I'd include all heads, like gecko-aggregate.

I'm well aware there are oddities in the Git mirror and they now exist in this new repository as well. My goal for this conversion was to deliver something: it wasn't a goal to deliver the most correct result possible.

At this time, this repository should be considered an unstable science experiment. By no means should you rely on this repository. But if you find it useful, I'd appreciate hearing about it. If enough people ask, we could probably make this more official.


Gervase Markham: Eurovision Bingo

Mozilla planet - Mon, 18/05/2015 - 10:20

Some people say that all Eurovision songs are the same. That’s probably not quite true, but there is perhaps a hint of truth in the suggestion that some themes tend to recur from year to year. Hence, I thought, Eurovision Bingo.

I wrote some code to analyse a directory full of lyrics, normally those from the previous year of the competition, and work out the frequency of occurrence of each word. It will then generate Bingo cards, with sets of words of different levels of commonness. You can then use them to play Bingo while watching this year’s competition (which is on Saturday).
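The repo linked below has the real thing; as a rough sketch of the frequency-counting half, in Node.js, it might look something like this (the directory layout and the tokenization rule are my own assumptions for illustration):

const fs = require("fs");
const path = require("path");

// Count how often each word occurs across a directory of lyrics files
// (hypothetical sketch, not the actual Eurovision Bingo code).
function wordFrequencies(lyricsDir) {
  let counts = new Map();

  for (let file of fs.readdirSync(lyricsDir)) {
    let text = fs.readFileSync(path.join(lyricsDir, file), "utf8");

    // Naive tokenization: lowercase runs of letters and apostrophes.
    for (let word of text.toLowerCase().match(/[a-z']+/g) || []) {
      counts.set(word, (counts.get(word) || 0) + 1);
    }
  }

  // Most common words first; cards can then draw words from different
  // commonness bands.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}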

There’s a GitHub repo, or if you want to go straight to pre-generated cards for this year, they are here.

Here’s a sample card from the 2014 lyrics:

fell      cause     rising    gonna      rain
world     believe   dancing   hold       once
every     mean      LOVE      something  chance
hey       show      or        passed     say
because   light     hard      home       heart

Have fun :-)


Air Mozilla: OuiShare Labs Camp #3

Mozilla planet - Mon, 18/05/2015 - 10:00

OuiShare Labs Camp #3 is a participative conference dedicated to decentralization, IndieWeb, semantic web and open source community tools.


Mozilla Pushes Web Sites To Adopt Encryption - CIO Today

News collected via Google - Mon, 18/05/2015 - 06:41

Mozilla Pushes Web Sites To Adopt Encryption
CIO Today
The organization behind the Firefox Web browser wants to see Web site encryption become standard practice, and it has laid out a two-part plan to help that happen. Mozilla said it plans to set a date by which all new features for its browser will be ...
Best 5 Mozilla Firefox Add-ons to Improve your Browsing Experience - CultureMob

all 2 news articles »

This Week In Rust: This Week in Rust 81

Mozilla planet - Mon, 18/05/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

273 pull requests were merged in the last two weeks, and 4 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors
  • らいどっと
  • Aaron Gallagher
  • Alexander Polakov
  • Alex Burka
  • Andrei Oprea
  • Andrew Kensler
  • Andrew Straw
  • Ben Gesoff
  • Chris Hellmuth
  • Cole Reynolds
  • Colin Walters
  • David Reid
  • Don Petersen
  • Emilio Cobos Álvarez
  • Franziska Hinkelmann
  • Garming Sam
  • Hika Hibariya
  • Isaac Ge
  • Jan Andersson
  • Jan-Erik Rediger
  • Jannis Redmann
  • Jason Yeo
  • Jeremy Schlatter
  • Johann
  • Johann Hofmann
  • Lee Jeffery
  • leunggamciu
  • Marin Atanasov Nikolov
  • Mário Feroldi
  • Mathieu Rochette
  • Michael Park
  • Michael Wu
  • Michał Czardybon
  • Mike Sampson
  • Nick Platt
  • parir
  • Paul Banks
  • Paul Faria
  • Paul Quint
  • peferron
  • Pete Hunt
  • robertfoss
  • Rob Young
  • Russell Johnston
  • Shmuale Mark
  • Simon Kern
  • Sindre Johansen
  • sumito3478
  • Swaroop C H
  • Tincan
  • Wei-Ming Yang
  • Wilfred Hughes
  • Will Engler
  • Wojciech Ogrodowczyk
  • XuefengWu
  • Z1
Approved RFCs

New RFCs

Betawatch!

The current beta is 1.1.0-beta (cd7d89af9 2015-05-16) (built 2015-05-16).

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Yes, because laundry eating has evolved to be a specific design goal now; and the initial portions of the planned laundry eating API have been landed behind the #![feature(no_laundry)] gate. no_laundry should become stable in 6-8 weeks, though the more complicated portions, including DRY cleaning, Higher Kinded T-shirts, Opt-in Builtin Detergent, and Rinse Time Optimization will not be stabilized until much later."

"We hope this benefits the Laundry as a Service community immensely."

Manish explains Rust's roadmap for laundry-eating.

Thanks to filsmick for the tip.

And since there were so many quotables in the last two weeks, here's one from Evan Miller's evaluation of Rust:

"Rust is a systems language. I’m not sure what that term means, but it seems to imply some combination of native code compilation, not being Fortran, and making no mention of category theory in the documentation."

Thanks to ruudva for the tip. Submit your quotes for next week!


Mark Côté: Project Isolation

Mozilla planet - Mon, 18/05/2015 - 04:37

The other day I read about another new Mozilla project that decided to go with GitHub issues instead of our Bugzilla installation (BMO). The author’s arguments make a lot of sense: GitHub issues are much simpler and faster, and if you keep your code in GitHub, you get tighter integration. The author notes that a downside is the inability to file security or confidential bugs, for which Bugzilla has a fine-grained permission system, and that he’d just put those (rare) issues on BMO.

The one downside he doesn’t mention is interdependencies with other Mozilla projects, e.g. the Depends On/Blocks fields. This is where Bugzilla gets into project, product, and perhaps even program management by allowing people to easily track dependency chains, which is invaluable in planning. Many people actually file bugs solely as trackers for a particular feature or project, hanging all the work items and bugs off of it, and sometimes that work crosses product boundaries. There are also a number of tracking flags and fields that managers use to prioritize work and decide which releases to target.

If I had to rebut my own point, I would argue that the projects that use GitHub issues are relatively isolated, and so dependency tracking is not particularly important. Why clutter up and slow down the UI with lots of features that I don’t need for my project? In particular, most of the tracking features are currently used only by, and thus designed for, the Firefox products (aside: this is one reason the new modal UI hides most of these fields by default if they have never been set).

This seems hard to refute, and I certainly wouldn’t want to force an admittedly complex tool on anyone who had much simpler needs. But something still wasn’t sitting right with me, and it took a while to figure out what it was. As usual, it was that a different question was going unasked, leading to unspoken assumptions: why do we have so many isolated projects, and what are we giving up by having such loose (or even no) integration amongst all our work?

Working on projects in isolation is comforting because you don’t have to think about all the other things going on in your organization—in other words, you don’t have to communicate with very many people. A lack of communication, however, leads to several problems:

  • low visibility: what is everyone working on?
  • redundancy: how many times are we solving the same problem?
  • barriers to coordination: how can we become greater than the sum of our parts by delivering inter-related features and products?

By working in isolation, we can’t leverage each other’s strengths and accomplishments. We waste effort and lose great opportunities to deliver amazing things. We know that places like Twitter use monorepos to get some of these benefits, like a single build/test/deploy toolchain and coordination of breaking changes. This is what facilitates architectures like microservices and SOAs. Even if we don’t want to go down those paths, there is still a clear benefit to program management by at least integrating the tracking and planning of all of our various endeavours and directions. We need better organization-wide coordination.

We’re already taking some steps in this direction, like moving Firefox and Cloud Services to one division. But there are many other teams that could benefit from better integration, many teams that are duplicating effort and missing out on chances to work together. It’s a huge effort, but maybe we need to form a team to define a strategy and process—a Strategic Integration Team perhaps?


Mike Conley: The Joy of Coding (Ep. 14): More OS X Printing

Mozilla planet - Mon, 18/05/2015 - 01:09

In this episode, I kept working on the same bug as last week – proxying the print dialog from the content process on OS X. We actually finished the serialization bit, and started doing deserialization!

Hopefully, next episode we can polish off the deserialization and we’ll be done. Fingers crossed!

Note that this episode was about 2 hours and 10 minutes, but the standard-definition recording up on Air Mozilla only plays for about 13 minutes and 5 seconds. Not too sure what’s going on there – we’ve filed a bug with the people who’ve encoded it. Hopefully, we’ll have the full episode up for standard-definition soon.

In the meantime, if you’d like to watch the whole episode, you can go to the Air Mozilla page and watch it in HD, or you can go to the YouTube mirror.

Episode Agenda

References

Bug 1091112 – Print dialog doesn’t get focus automatically, if e10s is enabled – Notes


Panasonic and Mozilla join forces to launch the first TV with Firefox OS - NE10

News collected via Google - Sun, 17/05/2015 - 18:53

Panasonic and Mozilla join forces to launch the first TV with Firefox OS
NE10
Panasonic, in partnership with the Mozilla Foundation, has announced the first smart TV with Firefox OS. Panasonic's VIERA smart TV is already available in Europe and will go on sale in the rest of the world in the coming months. It is optimized with HTML5 to ...
Panasonic and Mozilla announce launch of smart TV with Firefox OS - Administradores
Mozilla announces first smart TV with Firefox OS - Tribuna Hoje
Panasonic and Mozilla launch first smart TV with Firefox OS - Mundo do Marketing
IT Forum 365
all 12 news articles » Google News

Panasonic, Mozilla Team Up To Roll Out World's First Firefox-based TV - Tech News Today

News collected via Google - Sun, 17/05/2015 - 15:57

Panasonic, Mozilla Team Up To Roll Out World's First Firefox-based TV
Tech News Today
Panasonic has partnered up with Mozilla to enter the world of futuristic Smart TV's, based on the Firefox OS. The Smart TV lineup was presented recently at CES, and the TV's will begin rolling out to consumers worldwide in a few months. Mozilla's ...
Panasonic Begins Selling The First Firefox OS-Powered Smart TVs, Initially In ... - TechCrunch
Panasonic Now Selling Firefox-Powered TVs - PC Magazine
Panasonic Outs World's First Firefox Smart TV - Tech Times
Ubergizmo (blog) - Focus Taiwan News Channel - The Verge
all 33 news articles »

Manish Goregaokar: The Problem With Single-threaded Shared Mutability

Mozilla planet - Sun, 17/05/2015 - 13:26

This is a post that I’ve been meaning to write for a while now; and the release of Rust 1.0 gives me the perfect impetus to go ahead and do it.

Whilst this post discusses a choice made in the design of Rust, and uses examples in Rust, the principles discussed here apply to other languages for the most part. I’ll also try to make the post easy to understand for those without a Rust background; please let me know if some code or terminology needs to be explained.

What I’m going to discuss here is the choice made in Rust to disallow having multiple mutable aliases to the same data (or a mutable alias when there are active immutable aliases), even from the same thread. In essence, it disallows one from doing things like:

let mut x = Vec::new();
{
    let ptr = &mut x; // Take a mutable reference to `x`
    ptr.push(1); // Allowed
    let y = x[0]; // Not allowed (will not compile): as long as `ptr` is active,
                  // x cannot be read from ...
    x.push(1); // .. or written to
}

// alternatively,

let mut x = Vec::new();
x.push(1); // Allowed
{
    let ptr = &x; // Create an immutable reference
    let y = ptr[0]; // Allowed, nobody can mutate
    let y = x[0]; // Similarly allowed
    x.push(1); // Not allowed (will not compile): as long as `ptr` is active,
               // `x` is frozen for mutation
}

This is essentially the “Read-Write lock” (RWLock) pattern, except it’s not being used in a threaded context, and the “locks” are done via static analysis (compile time “borrow checking”).

Newcomers to the language have the recurring question as to why this exists. Ownership semantics and immutable borrows can be grasped because there are concrete examples from languages like C++ of problems that these concepts prevent. It makes sense that having only one “owner” and then multiple “borrowers” who are statically guaranteed to not stick around longer than the owner will prevent things like use-after-free.

But what could possibly be wrong with having multiple handles for mutating an object? Why do we need an RWLock pattern?[1]

It causes memory unsafety

This issue is specific to Rust, and I promise that this will be the only Rust-specific answer.

Rust enums provide a form of algebraic data types. A Rust enum is allowed to “contain” data, for example you can have the enum

enum StringOrInt {
    Str(String),
    Int(i64)
}

which gives us a type that can either be a variant Str, with an associated string, or a variant Int[2], with an associated integer.

With such an enum, we could cause a segfault like so:

let mut x = Str("Hi!".to_string()); // Create an instance of the `Str` variant with associated string "Hi!"
let y = &mut x; // Create a mutable alias to x

if let Str(ref insides) = x { // If x is a `Str`, assign its inner data to the variable `insides`
    *y = Int(1); // Set `*y` to `Int(1)`, therefore setting `x` to `Int(1)` too
    println!("x says: {}", insides); // Uh oh!
}

Here, we invalidated the insides reference because setting x to Int(1) meant that there is no longer a string inside it. However, insides is still a reference to a String, and the generated assembly would try to dereference the memory location where the pointer to the allocated string was, and probably end up trying to dereference 1 or some nearby data instead, and cause a segfault.

Okay, so far so good. We know that for Rust-style enums to work safely in Rust, we need the RWLock pattern. But are there any other reasons we need the RWLock pattern? Not many languages have such enums, so this shouldn’t really be a problem for them.

Iterator invalidation

Ah, the example that is brought up almost every time the question above is asked. While I’ve been quite guilty of using this example often myself (and feel that it is a very appropriate example that can be quickly explained), I also find it to be a bit of a cop-out, for reasons which I will explain below. This is partly why I’m writing this post in the first place; a better idea of the answer to The Question should be available for those who want to dig deeper.

Iterator invalidation involves using tools like iterators whilst modifying the underlying dataset somehow.

For example,

let buf = vec![1, 2, 3, 4];

for i in &buf {
    buf.push(i);
}

Firstly, this will loop infinitely (if it compiled, which it doesn’t, because Rust prevents this). The equivalent C++ example would be this one, which I use at every opportunity.

What’s happening in both code snippets is that the iterator is really just a pointer to the vector and an index. It doesn’t contain a snapshot of the original vector; so pushing to the original vector will make the iterator iterate for longer. Pushing once per iteration will obviously make it iterate forever.

The infinite loop isn’t even the real problem here. The real problem is that after a while, we could get a segmentation fault. Internally, vectors have a certain amount of allocated space to work with. If the vector is grown past this space, a new, larger allocation may need to be done (freeing the old one), since vectors must use contiguous memory.

This means that when the vector overflows its capacity, it will reallocate, invalidating the reference stored in the iterator, and causing use-after-free.

Of course, there is a trivial solution in this case — store a reference to the Vec/vector object inside the iterator instead of just the pointer to the vector on the heap. This leads to some extra indirection or a larger stack size for the iterator (depending on how you implement it), but overall will prevent the memory unsafety.

This would still cause problems with more complex situations involving multidimensional vectors, however.

“It’s effectively threaded”

Aliasing with mutability in a sufficiently complex, single-threaded program is effectively the same thing as accessing data shared across multiple threads without a lock

(The above is my paraphrasing of someone else’s quote, but I can’t find the original or remember who made it.)

Let’s step back a bit and figure out why we need locks in multithreaded programs. Given the way caches and memory work, we’ll never need to worry about two processes writing to the same memory location simultaneously and coming up with a hybrid value, or about a read happening halfway through a write.

What we do need to worry about is the rug being pulled out underneath our feet. A bunch of related reads/writes would have been written with some invariants in mind, and arbitrary reads/writes possibly happening between them would invalidate those invariants. For example, a bit of code might first read the length of a vector, and then go ahead and iterate through it with a regular for loop bounded on the length. The invariant assumed here is the length of the vector. If pop() was called on the vector in some other thread, this invariant could be invalidated after the read to length but before the reads elsewhere, possibly causing a segfault or use-after-free in the last iteration.

However, we can have a situation similar to this (in spirit) in single threaded code. Consider the following:

let x = some_big_thing();
let len = x.some_vec.len();

for i in 0..len {
    x.do_something_complicated(x.some_vec[i]);
}

We have the same invariant here; but can we be sure that x.do_something_complicated() doesn’t modify x.some_vec for some reason? In a complicated codebase, where do_something_complicated() itself calls a lot of other functions which may also modify x, this can be hard to audit.

Of course, the above example is contrived and simplified; but it doesn’t seem unreasonable to assume that such bugs can happen in large codebases — where many methods being called have side effects which may not always be evident.
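For a flavour of how this bites in a garbage-collected language, here is a contrived JavaScript sketch (my own, not from the original post) where a loop's length invariant is silently broken by a callee:

let queue = [1, 2, 3, 4];

function processItem(item) {
  // Imagine this mutation buried three calls deep in a large codebase.
  if (item % 2 === 0) {
    queue.pop(); // mutates the shared array
  }
  return item * 10;
}

let len = queue.length; // invariant: `queue` has `len` elements

for (let i = 0; i < len; i++) {
  // Once enough items have been popped, queue[i] is undefined even
  // though i < len; no crash, just a silently wrong result.
  console.log(processItem(queue[i]));
}

Nothing here is memory-unsafe, which is the point: without a static guarantee, the failure mode in such languages is a logic bug rather than a segfault.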

Which means that in large codebases we have almost the same problem as threaded ones. It’s very hard to maintain invariants when one is not completely sure of what each line of code is doing. It’s possible to become sure of this by reading through the code (which takes a while), but further modifications may also have to do the same. It’s impractical to do this all the time and eventually bugs will start cropping up.

On the other hand, having a static guarantee that this can’t happen is great. And when the code is too convoluted for a static guarantee (or you just want to avoid the borrow checker), a single-threaded RWlock-esque type called RefCell is available in Rust. It’s a type providing interior mutability and behaves like a runtime version of the borrow checker. Similar wrappers can be written in other languages.

Edit: In the case of many primitives like simple integers, the problems with shared mutability turn out not to be a major issue. For these, we have a type called Cell which lets them be mutated and shared simultaneously. This works on all Copy types, i.e. types which only need to be copied on the stack to be copied (unlike types involving pointers or other indirection).

This sort of bug is a good source of reentrancy problems too.

Safe abstractions

In particular, the issue in the previous section makes it hard to write safe abstractions, especially with generic code. While this problem is clearer in the case of Rust (where abstractions are expected to be safe and preferably low-cost), this isn’t unique to any language.

Every method you expose has a contract that is expected to be followed. Many times, a contract is handled by type safety itself, or you may have some error-based model to throw out uncontractual data (for example, division by zero).

But, as an API (internal or exposed) gets more complicated, so does the contract. It’s not always possible to detect at runtime that the contract is being violated either; for example, many cases of iterator invalidation are hard to prevent in nontrivial code even with asserts.

It’s easy to create a method and add documentation “the first two arguments should not point to the same memory”. But if this method is used by other methods, the contract can change to much more complicated things that are harder to express or check. When generics get involved, it only gets worse; you sometimes have no way of forcing that there are no shared mutable aliases, or of expressing what isn’t allowed in the documentation. Nor will it be easy for an API consumer to enforce this.

This makes it harder and harder to write safe, generic abstractions. Such abstractions rely on invariants, and these invariants can often be broken by the problems in the previous section. It’s not always easy to enforce these invariants, and such abstractions will either be misused or not written in the first place, opting for a heavier option. Generally one sees that such abstractions or patterns are avoided altogether, even though they may provide a performance boost, because they are risky and hard to maintain. Even if the present version of the code is correct, someone may change something in the future breaking the invariants again.

My previous post outlines a situation where Rust was able to choose the lighter path in a situation where getting the same guarantees would be hard in C++.

Note that this is a wider problem than just mutable aliasing. Rust has this problem too, but not when it comes to mutable aliasing. Mutable aliasing is important to fix, however, because we can make a lot of assumptions about our program when there are no mutable aliases. Namely, by looking at a line of code we can know what happened with respect to the locals. If there is the possibility of mutable aliasing out there, there’s the possibility that other locals were modified too. A very simple example is:

fn look_ma_no_temp_var_l33t_interview_swap(x: &mut u64, y: &mut u64) {
    *x = *x + *y;
    *y = *x - *y;
    *x = *x - *y;
}

// or

fn look_ma_no_temp_var_rockstar_interview_swap(x: &mut u64, y: &mut u64) {
    *x = *x ^ *y;
    *y = *x ^ *y;
    *x = *x ^ *y;
}

In both cases, when the two references are the same[3], instead of swapping, the two variables get set to zero. A user (internal to your library, or an API consumer) would expect swap() not to change anything when fed equal references, but this is doing something totally different. This assumption could get used in a program; for example, instead of skipping the passes in an array sort where the slot is being compared with itself, one might just go ahead because swap() won’t change anything there anyway; but it does, and suddenly your sort function fills everything with zeroes. This could be solved by documenting the precondition and using asserts, but the documentation gets harder and harder as swap() is used in the guts of other methods.

Of course, the example above was contrived. It’s well known that those swap() implementations have that precondition, and shouldn’t be used in such cases. Also, in most swap algorithms it’s trivial to ignore cases when you’re comparing an element with itself, generally done by bounds checking.

But the example is a simplified sketch of the problem at hand.

In Rust, since this is statically checked, one doesn’t worry much about these problems, and robust APIs can be designed since knowing when something won’t be mutated can help simplify invariants.

Wrapping up

Aliasing that doesn’t fit the RWLock pattern is dangerous. If you’re using a language like Rust, you don’t need to worry. If you’re using a language like C++, it can cause memory unsafety, so be very careful. If you’re using a language like Java or Go, while it can’t cause memory unsafety, it will cause problems in complex bits of code.

This doesn’t mean that this problem should force you to switch to Rust, either. If you feel that you can avoid writing APIs where this happens, that is a valid way to go around it. This problem is much rarer in languages with a GC, so you might be able to avoid it altogether without much effort. It’s also okay to use runtime checks and asserts to maintain your invariants; performance isn’t everything.

But this is an issue in programming, and you should make sure to think of it when designing your code.

Discuss: HN, Reddit

  1. Hereafter referred to as “The Question”

  2. Note: Str and Int are variant names which I chose; they are not keywords. Additionally, I’m using “associated foo” loosely here; Rust does have a distinct concept of “associated data” but it’s not relevant to this post.

  3. Note that this isn’t possible in Rust due to the borrow checker.


Best 5 Mozilla Firefox Add-ons to Improve your Browsing Experience - CultureMob

News collected via Google - Sun, 17/05/2015 - 12:28

Best 5 Mozilla Firefox Add-ons to Improve your Browsing Experience
CultureMob
Mozilla Firefox is a very popular web browser and one thing that makes it such a great web browser is its extensibility that is almost infinite. There are literally hundreds of add-ons that can be downloaded for free and on each single day, new add-ons ...

and more » Google News
