Mozilla Nederland
The Dutch Mozilla community

Joel Maher: A-Team contribution opportunity – Dashboard Hacker

Mozilla planet - mo, 18/05/2015 - 21:43

I am excited to announce a new focused project for contribution – Dashboard Hacker.  Last week we previewed that today we would be announcing 2 contribution projects.  This is an unpaid program where we are looking for 1-2 contributors who will dedicate 5-10 hours/week for at least 8 weeks.  More time is welcome, but not required.

What is a dashboard hacker?

When a developer is ready to land code, they want to test it. Getting and understanding the results is made a lot easier by good dashboards and tools. For this project, we have a starting point with our performance data view: fix up a series of nice-to-have polish features and then ensure that it is easy to use in a normal developer workflow. Part of the developer workflow is the regular job view; if time permits, there are some fun experiments we would like to implement in the job view.  These bugs, features, and projects are all small and self-contained, which makes them great projects for someone looking to contribute.

What is required of you to participate?

  • A willingness to learn and ask questions
  • A general knowledge of programming (most of this will be in JavaScript, Django, and AngularJS, and some work will be in Python)
  • A promise to show up regularly and take ownership of the issues you are working on
  • Good at solving problems and thinking out of the box
  • Comfortable with (or willing to try) working with a variety of people

What we will guarantee from our end:

  • A dedicated mentor for the project whom you will work with regularly throughout the project
  • A single area of work to reduce the need to get up to speed over and over again.
    • This project will cover many tools, but the general problem space will be the same
  • The opportunity to work with many people (different bugs could have a specific mentor) while retaining a single mentor to guide you through the process
  • The ability to be part of the team: you will be welcome in meetings, and we will value your input on solving problems, brainstorming, and figuring out new problems to tackle.

How do you apply?

Get in touch with us either by replying to the post, commenting in the bug, or contacting us on IRC (I am :jmaher in #ateam on irc.mozilla.org; wlach on IRC will be the primary mentor).  We will point you at a starter bug and introduce you to the bugs and problems to solve.  If you have prior work (links to Bugzilla, GitHub, blogs, etc.) that would help us learn more about you, that would be a plus.

How will you select the candidates?

There are no hard criteria here.  One factor will be whether you meet the requirements outlined above and how well you pick up the problem space.  Ultimately it will be up to the mentor (for this project, that will be :wlach).  If you do apply and we have already picked a candidate, or we don’t choose you for other reasons, we do plan to repeat this every few months.

Looking forward to building great things!


Categorieën: Mozilla-nl planet

Joel Maher: A-Team contribution opportunity – DX (Developer Ergonomics)

Mozilla planet - mo, 18/05/2015 - 21:42

I am excited to announce a new focused project for contribution – Developer Ergonomics/Experience, otherwise known as DX.  Last week we previewed that today we would be announcing 2 contribution projects.  This is an unpaid program where we are looking for 1-2 contributors who will dedicate 5-10 hours/week for at least 8 weeks.  More time is welcome, but not required.

What does DX mean?

We chose this project because we continue to experience frustration while fixing bugs and debugging test failures.  Many people suggest great ideas; in this case we have set aside a few of them (look at the dependent bugs: cleaning up argument parsers, helping our tests run in smarter chunks, making it easier to run tests locally or on a server, etc.) which would clean things up and be harder than a good first bug, yet each issue by itself would be too easy for an internship.  Our goal is to clean up our test harnesses and tools and, if time permits, add features to the workflow that make it easier for developers to do their job!

What is required of you to participate?

  • A willingness to learn and ask questions
  • A general knowledge of programming (this will be mostly in Python, with some JavaScript as well)
  • A promise to show up regularly and take ownership of the issues you are working on
  • Good at solving problems and thinking out of the box
  • Comfortable with (or willing to try) working with a variety of people

What we will guarantee from our end:

  • A dedicated mentor for the project whom you will work with regularly throughout the project
  • A single area of work to reduce the need to get up to speed over and over again.
    • This project will cover many tools, but the general problem space will be the same
  • The opportunity to work with many people (different bugs could have a specific mentor) while retaining a single mentor to guide you through the process
  • The ability to be part of the team: you will be welcome in meetings, and we will value your input on solving problems, brainstorming, and figuring out new problems to tackle.

How do you apply?

Get in touch with us either by replying to the post, commenting in the bug, or contacting us on IRC (I am :jmaher in #ateam on irc.mozilla.org).  We will point you at a starter bug and introduce you to the bugs and problems to solve.  If you have prior work (links to Bugzilla, GitHub, blogs, etc.) that would help us learn more about you, that would be a plus.

How will you select the candidates?

There are no hard criteria here.  One factor will be whether you meet the requirements outlined above and how well you pick up the problem space.  Ultimately it will be up to the mentor (for this project, that will be me).  If you do apply and we have already picked a candidate, or we don’t choose you for other reasons, we do plan to repeat this every few months.

Looking forward to building great things!


Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla planet - mo, 18/05/2015 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting

Categorieën: Mozilla-nl planet

Air Mozilla: Firefox OS web APIs

Mozilla planet - mo, 18/05/2015 - 19:56

Firefox OS web APIs: new cool web APIs and their impact

Categorieën: Mozilla-nl planet

Daniel Pocock: Free and open WebRTC for the Fedora Community

Mozilla planet - mo, 18/05/2015 - 19:48

In January 2014, we launched the rtc.debian.org service for the Debian community. An equivalent service has been in testing for the Fedora community at FedRTC.org.

Some key points about the Fedora service:

  • The web front-end is just HTML, CSS and JavaScript. PHP is only used for account creation; the actual WebRTC experience requires no server-side web framework, just a SIP proxy (a minimal browser-side sketch follows this list).
  • The web code is all available in a GitHub repository so people can extend it.
  • Anybody who can authenticate against the FedOAuth OpenID is able to get a fedrtc.org test account immediately.
  • The server is built entirely with packages from CentOS 7 + EPEL 7, except for the SIP proxy itself. The SIP proxy is reSIProcate, which is available as a Fedora package and builds easily on RHEL / CentOS.
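
To make the "no server-side framework" point concrete, here is a minimal sketch (not the fedrtc.org code) of a browser-side WebRTC client using only standard web APIs. The STUN server and the sendViaSipProxy() helper are hypothetical placeholders; in the real front-end, signalling happens over the SIP proxy.

navigator.mediaDevices.getUserMedia({audio: true, video: true})
  .then(stream => {
    // Everything below runs client-side; no server-side web framework involved.
    let pc = new RTCPeerConnection({
      iceServers: [{urls: "stun:stun.example.org"}]  // hypothetical STUN server
    });
    for (let track of stream.getTracks()) {
      pc.addTrack(track, stream);
    }
    pc.onicecandidate = e => {
      if (e.candidate) {
        sendViaSipProxy(e.candidate);  // placeholder: signalling goes over SIP
      }
    };
    return pc.createOffer()
      .then(offer => pc.setLocalDescription(offer))
      .then(() => sendViaSipProxy(pc.localDescription));
  });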

Testing it with WebRTC

Create an RTC password and then log in. Other users can call you. It is federated, so people can also call from rtc.debian.org or from freephonebox.net.

Testing it with other SIP softphones

You can use the RTC password to connect to the SIP proxy from many softphones, including Jitsi or Lumicall on Android.

Copy it

The process to replicate the server for another domain is entirely described in the Real-Time Communications Quick Start Guide.

Discuss it

The FreeRTC mailing list is a great place to discuss any issues involving this site or free RTC in general.

WebRTC opportunities expanding

Just this week, the first batch of Firefox OS televisions is hitting the market. Every one of these is a potential WebRTC client that can interact with free communications platforms.

Categorieën: Mozilla-nl planet

Mozilla Firefox vs Google Chrome – 2015 Edition - Gun Shy Assassin

Nieuws verzameld via Google - mo, 18/05/2015 - 18:45

Mozilla Firefox vs Google Chrome – 2015 Edition
Gun Shy Assassin
The browser battle is on and the biggest players are Firefox and Chrome. These two rivals however are not equals. Firefox was released in 2002 while Chrome was launched in 2008. It seems that Mozilla has a lot more experience when it comes to surfing ...
Google Working On A RAM Consumption Fix For Chrome Browser - WCCFtech
Back to the Basics: Surfing the Internet - AboutMyArea
Google Chrome Free Download Browser Blocks All Extensions Not Listed in ... - Ordoh

alle 18 nieuwsartikelen »
Categorieën: Mozilla-nl planet

Mozilla Reps Community: New council members – Spring 2015

Mozilla planet - mo, 18/05/2015 - 18:27

We are happy to announce that three new members of the Council have been elected.

Welcome Michael, Shahid and Christos! They bring with them skills they have picked up as Reps mentors, and as community leaders both inside Mozilla and in other fields. A HUGE thank you to the outgoing council members – Arturo, Emma and Raj. We hope you will continue to use your talents and experience in a leadership role in Reps and Mozilla.

The new members will gradually be onboarded over the following 3 weeks.

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more on the ReMo wiki.

Congratulate new Council members on this Discourse topic!

Categorieën: Mozilla-nl planet

Air Mozilla: Firefox OS Tricoder

Mozilla planet - mo, 18/05/2015 - 16:11

Firefox OS Tricoder: reading device sensor data in JavaScript

Categorieën: Mozilla-nl planet

Mozilla-backed language is ready for use - digi.no

Nieuws verzameld via Google - mo, 18/05/2015 - 14:12

Mozilla-backed language is ready for use
digi.no
Attempts to read or write more data than a memory block can contain, or to read or write data from or to a memory block that has already been freed, are typical examples of errors that can occur when developers write ...

Google Nieuws
Categorieën: Mozilla-nl planet

Tim Taubert: Implementing a PBKDF2-based Password Storage Scheme for Firefox OS

Mozilla planet - mo, 18/05/2015 - 14:06

My esteemed colleague Frederik Braun recently took on rewriting the module responsible for storing and checking passcodes that unlock Firefox OS phones. While we are still working on actually landing it in Gaia, I wanted to seize the chance to talk about this great use case of the WebCrypto API in the wild and highlight a few important points when using password-based key derivation (PBKDF2) to store passwords.

The Passcode Module

Let us take a closer look, not at the verbatim implementation, but at a slightly simplified version. The API offers only the two operations such a module needs to support: setting a new passcode and verifying that a given passcode matches the stored one.

let Passcode = {
  store(code) {
    // ...
  },

  verify(code) {
    // ...
  }
};

When setting up the phone for the first time - or when changing the passcode later - we call Passcode.store() to write a new code to disk. Passcode.verify() will help us determine whether we should unlock the phone. Both methods return a Promise as all operations exposed by the WebCrypto API are asynchronous.

Passcode.store("1234").then(() => {
  return Passcode.verify("1234");
}).then(valid => {
  console.log(valid);
});
// Output: true

Make the passcode look “random”

The module should absolutely not store passcodes in the clear. We will use PBKDF2 as a pseudorandom function (PRF) to retrieve a result that looks random. An attacker with read access to the part of the disk storing the user’s passcode should not be able to recover the original input, assuming limited computational resources.

The function deriveBits() is a PRF that takes a passcode and returns a Promise resolving to a random-looking sequence of bytes. To be a little more specific, it uses PBKDF2 to derive pseudorandom bits.

function deriveBits(code) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Salt should be at least 64 bits.
    let salt = crypto.getRandomValues(new Uint8Array(8));

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Choosing PBKDF2 parameters

As you can see above, PBKDF2 takes a whole bunch of parameters. Choosing good values is crucial for the security of our passcode module, so it is best to take a detailed look at every single one of them.

Select a cryptographic hash function

PBKDF2 is a big PRF that iterates a small PRF. The small PRF, iterated multiple times (more on why this is done later), is fixed to be an HMAC construction; you are however allowed to specify the cryptographic hash function used inside HMAC itself. To understand why you need to select a hash function it helps to take a look at HMAC’s definition, here with SHA-1 at its core:

HMAC-SHA-1(k, m) = SHA-1((k ⊕ opad) + SHA-1((k ⊕ ipad) + m))

The outer and inner padding opad and ipad are static values that can be ignored for our purpose; the important takeaway is that the given hash function will be called twice, combining the message m and the key k. Whereas HMAC is usually used for authentication, PBKDF2 makes use of its PRF properties, meaning its output is computationally indistinguishable from random.

deriveBits() as defined above uses SHA-1 as well, and although it is considered broken as a collision-resistant hash function, it is still a safe building block in the HMAC-SHA-1 construction. HMAC only relies on a hash function’s PRF properties, and while finding SHA-1 collisions is considered feasible, it is still believed to be a secure PRF.

That said, it would not hurt to switch to a secure cryptographic hash function like SHA-256. Chrome supports other hash functions for PBKDF2 today; Firefox unfortunately has to wait for an NSS fix before those can be unlocked for the WebCrypto API.
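
To make the construction above a little more tangible, here is a small sketch (not from the original post) that computes HMAC-SHA-1 directly with the WebCrypto API; PBKDF2 iterates exactly this primitive.

function hmacSha1(keyBytes, messageBytes) {
  // Import the raw key for use with HMAC-SHA-1.
  return crypto.subtle.importKey(
    "raw", keyBytes, {name: "HMAC", hash: "SHA-1"}, false, ["sign"]
  ).then(key => {
    // Resolves to a 160-bit (20-byte) ArrayBuffer.
    return crypto.subtle.sign("HMAC", key, messageBytes);
  });
}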

Pass a random salt

The salt is a random component that PBKDF2 feeds into the HMAC function along with the passcode. This prevents an attacker from simply computing the hashes of, for example, all 8-character combinations of alphanumerics (~5.4 petabytes of storage for SHA-1) and using a huge lookup table to quickly reverse a given password hash. Specify 8 random bytes as the salt and the poor attacker will suddenly have to compute (and store!) 2^64 of those lookup tables and face 8 additional random bytes in the input. Even without the salt, the effort to create even one lookup table would be hard to justify, because chances are high you cannot reuse it to attack another target; they might be using a different hash function or a combination of two or more of them.

The same goes for Rainbow Tables. A random salt included with the password would have to be incorporated when precomputing the hash chains and the attacker is back to square one where she has to compute a Rainbow Table for every possible salt value. That certainly works ad-hoc for a single salt value but preparing and storing 2^64 of those tables is impossible.

The salt is public and will be stored in the clear along with the derived bits. We need the exact same salt to arrive at the exact same derived bits later again. We thus have to modify deriveBits() to accept the salt as an argument so that we can either generate a random one or read it from disk.

function deriveBits(code, salt) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: 5000};

    // Derive 160 bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, 160);
  });
}

Keep in mind though that Rainbow tables today are mainly a thing from the past where password hashes were smaller and shittier. Salts are the bare minimum a good password storage scheme needs, but they merely protect against a threat that is largely irrelevant today.

Specify a number of iterations

As computers became faster and Rainbow Table attacks became infeasible due to the prevalent use of salts everywhere, people started attacking password hashes with dictionaries: simply taking the public salt value and passing it, combined with their educated guesses, to the hash function until a match was found. Modern password schemes thus employ a “work factor” to make hashing millions of password guesses unbearably slow.

By specifying a sufficiently high number of iterations we can slow down PBKDF2’s inner computation so that an attacker will have to face a massive performance decrease and be able to only try a few thousand passwords per second instead of millions.

For a single-user disk or file encryption it might be acceptable if computing the password hash takes a few seconds; for a lock screen 300-500ms might be the upper limit to not interfere with user experience. Take a look at this great StackExchange post for more advice on what might be the right number of iterations for your application and environment.
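
If you want to pick a number for your own hardware, a rough approach is to time a probe run and scale linearly toward a target latency. The sketch below is not from the original post; the probe values are arbitrary.

function calibrateIterations(targetMs = 300, probeIterations = 100000) {
  let bytes = new TextEncoder("utf-8").encode("probe-passcode");
  let salt = crypto.getRandomValues(new Uint8Array(8));

  return crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]
  ).then(key => {
    let params = {name: "PBKDF2", hash: "SHA-1", salt, iterations: probeIterations};
    let start = performance.now();

    return crypto.subtle.deriveBits(params, key, 160).then(() => {
      let elapsed = performance.now() - start;
      // PBKDF2's cost grows linearly with the iteration count.
      return Math.max(1000, Math.round(probeIterations * targetMs / elapsed));
    });
  });
}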

A much more secure version of a lock screen would allow not just four digits but any number of characters. An additional delay of a few seconds after a small number of wrong guesses might increase security even more, assuming the attacker cannot access the PRF output stored on disk.

Determine the number of bits to derive

PBKDF2 can output an almost arbitrary amount of pseudo-random data. A single execution yields the number of bits that is equal to the chosen hash function’s output size. If the desired number of bits exceeds the hash function’s output size PBKDF2 will be repeatedly executed until enough bits have been derived.

function getHashOutputLength(hash) {
  switch (hash) {
    case "SHA-1":   return 160;
    case "SHA-256": return 256;
    case "SHA-384": return 384;
    case "SHA-512": return 512;
  }

  throw new Error("Unsupported hash function");
}

Choose 160 bits for SHA-1, 256 bits for SHA-256, and so on. Slowing down the key derivation even further by requiring more than one round of PBKDF2 will not increase the security of the password storage.

Do not hard-code parameters

Hard-coding PBKDF2 parameters - the name of the hash function to use in the HMAC construction, and the number of HMAC iterations - is tempting at first. We do, however, need to be flexible if, for example, it turns out that SHA-1 can no longer be considered a secure PRF, or we need to increase the number of iterations to keep up with faster hardware.

To ensure future code can verify old passwords we store the parameters that were passed to PBKDF2 at the time, including the salt. When verifying the passcode we will read the hash function name, the number of iterations, and the salt from disk and pass those to deriveBits() along with the passcode itself. The number of bits to derive will be the hash function’s output size.

function deriveBits(code, salt, hash, iterations) {
  // Convert string to a TypedArray.
  let bytes = new TextEncoder("utf-8").encode(code);

  // Create the base key to derive from.
  let importedKey = crypto.subtle.importKey(
    "raw", bytes, "PBKDF2", false, ["deriveBits"]);

  return importedKey.then(key => {
    // Output length in bits for the given hash function.
    let hlen = getHashOutputLength(hash);

    // All required PBKDF2 parameters.
    let params = {name: "PBKDF2", hash, salt, iterations};

    // Derive |hlen| bits using PBKDF2.
    return crypto.subtle.deriveBits(params, key, hlen);
  });
}

Storing a new passcode

Now that we are done implementing deriveBits(), the heart of the Passcode module, completing the API is basically a walk in the park. For the sake of simplicity we will use localforage as the storage backend. It provides a simple, asynchronous, and Promise-based key-value store.

// <script src="localforage.min.js"/>

const HASH = "SHA-1";
const ITERATIONS = 4096;

Passcode.store = function (code) {
  // Generate a new random salt for every new passcode.
  let salt = crypto.getRandomValues(new Uint8Array(8));

  return deriveBits(code, salt, HASH, ITERATIONS).then(bits => {
    return Promise.all([
      localforage.setItem("digest", bits),
      localforage.setItem("salt", salt),
      localforage.setItem("hash", HASH),
      localforage.setItem("iterations", ITERATIONS)
    ]);
  });
};

We generate a new random salt for every new passcode. The derived bits are stored along with the salt, the hash function name, and the number of iterations. HASH and ITERATIONS are constants that provide default values for our PBKDF2 parameters and can be updated whenever desired. The Promise returned by Passcode.store() will resolve when all values have been successfully stored in the backend.

Verifying a given passcode

To verify a passcode all values and parameters stored by Passcode.store() will have to be read from disk and passed to deriveBits(). Comparing the derived bits with the value stored on disk tells whether the passcode is valid.

Passcode.verify = function (code) {
  let loadValues = Promise.all([
    localforage.getItem("digest"),
    localforage.getItem("salt"),
    localforage.getItem("hash"),
    localforage.getItem("iterations")
  ]);

  return loadValues.then(([digest, salt, hash, iterations]) => {
    return deriveBits(code, salt, hash, iterations).then(bits => {
      return compare(bits, digest);
    });
  });
};

Should compare() be a constant-time operation?

compare() does not have to be constant-time. Even if the attacker learns the first byte of the final digest stored on disk she cannot easily produce inputs to guess the second byte - the opposite would imply knowing the pre-images of all those two-byte values. She cannot do better than submitting simple guesses that become harder the more bytes are known. For a successful attack all bytes have to be recovered, which in turn means a valid pre-image for the full final digest needs to be found.

If it makes you feel any better, you can of course implement compare() as a constant-time operation. This might be tricky though given that all modern JavaScript engines optimize code heavily.
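
For completeness, here is one way compare() could look; it is not part of the original post. It does a byte-wise comparison of the derived bits and the stored digest and avoids early exits, which roughly approximates constant-time behaviour (modulo JIT optimizations).

function compare(bits, digest) {
  let a = new Uint8Array(bits);
  let b = new Uint8Array(digest);

  if (a.length !== b.length) {
    return false;
  }

  // Accumulate differences instead of returning on the first mismatch.
  let diff = 0;
  for (let i = 0; i < a.length; i++) {
    diff |= a[i] ^ b[i];
  }
  return diff === 0;
}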

What about bcrypt or scrypt?

Both bcrypt and scrypt are probably better alternatives to PBKDF2. Bcrypt automatically embeds the salt and cost factor into its output; most APIs are clever enough to parse and use those parameters when verifying a given password.

Scrypt implementations can usually securely generate a random salt, so that is one less thing for you to care about. The most important aspect of scrypt, though, is that it allows consuming a lot of memory when computing the password hash, which makes cracking passwords using ASICs or FPGAs close to impossible.

The Web Cryptography API unfortunately supports neither of the two algorithms, and currently there are no proposals to add them. In the case of scrypt it might also be somewhat controversial to allow a website to consume arbitrary amounts of memory.

Categorieën: Mozilla-nl planet

Mozilla releases the mysterious Firefox 38.0.5 beta. What's new? - Komputer Świat

Nieuws verzameld via Google - mo, 18/05/2015 - 11:45

Mozilla releases the mysterious Firefox 38.0.5 beta. What's new?
Komputer Świat
At the beginning of last week Mozilla released the stable Firefox 38.0, which you can read more about here. Personally, after the release I started watching for Firefox 39.0 beta, which should appear on Mozilla's servers within a day or ...

Google Nieuws
Categorieën: Mozilla-nl planet

Gregory Szorc: Firefox Mercurial Repository with CVS History

Mozilla planet - mo, 18/05/2015 - 10:40

When Firefox made the switch from CVS to Mercurial in March 2007, the CVS history wasn't imported into Mercurial. There were good reasons for this at the time. But it's a decision that continues to have side-effects. I am surprised how often I hear of engineers wanting to access blame and commit info from commits now more than 9 years old!

When individuals created a Git mirror of the Firefox repository a few years ago, they correctly decided that importing CVS history would be a good idea. They also correctly decided to combine the logically same but physically separate release and integration repositories into a unified Git repository. These are things we can't easily do to the canonical Mercurial repository because it would break SHA-1 hashes, breaking many systems, and it would require significant changes in process, among other reasons.

While Firefox developers do have access to a single Firefox repository with full CVS history (the Git mirror), they still aren't satisfied.

Running git blame (or hg blame for that matter) can be very expensive. For this reason, the blame interface is disabled on many web-based source viewers by default. On GitHub, some blame URLs for the Firefox repository time out and cause GitHub to display an error message. No matter how hard you try, you can't easily get blame results (running a local Git HTTP/HTML interface is still difficult compared to hg serve).

Another reason developers aren't satisfied with the Git mirror is that Git's querying tools pale in comparison to Mercurial's. I've said it before and I'll say it again: Mercurial's revision sets and templates are incredibly useful features that enable advanced repository querying and reporting. Git's offerings come nowhere close. (I really wish Git would steal these awesome features from Mercurial.)
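
As a hedged illustration (not from the post) of what that looks like in practice, a revset combined with a template can answer a question like "one line per commit by a given author since January"; the author address below is made up.

hg log -r "author('someone@example.com') and date('>2015-01-01')" \
       --template "{node|short} {date|shortdate} {desc|firstline}\n"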

Anyway, enough people were complaining about the lack of a Mercurial Firefox repository with full CVS history that I decided to create one. If you point your browsers or Mercurial clients to https://hg.mozilla.org/users/gszorc_mozilla.com/gecko-full, you'll be able to access it.

The process used for the conversion was the simplest possible: I used hg-git to convert the Git mirror back to Mercurial.

Unlike the Git mirror, I didn't include all heads in this new repository. Instead, there is only mozilla-central's head (the current development tip). If I were doing this properly, I'd include all heads, like gecko-aggregate.

I'm well aware there are oddities in the Git mirror and they now exist in this new repository as well. My goal for this conversion was to deliver something: it wasn't a goal to deliver the most correct result possible.

At this time, this repository should be considered an unstable science experiment. By no means should you rely on this repository. But if you find it useful, I'd appreciate hearing about it. If enough people ask, we could probably make this more official.

Categorieën: Mozilla-nl planet

Gervase Markham: Eurovision Bingo

Mozilla planet - mo, 18/05/2015 - 10:20

Some people say that all Eurovision songs are the same. That’s probably not quite true, but there is perhaps a hint of truth in the suggestion that some themes tend to recur from year to year. Hence, I thought, Eurovision Bingo.

I wrote some code to analyse a directory full of lyrics, normally those from the previous year of the competition, and work out the frequency of occurrence of each word. It will then generate Bingo cards, with sets of words of different levels of commonness. You can then use them to play Bingo while watching this year’s competition (which is on Saturday).
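
The actual implementation lives in the repository linked below; as a rough JavaScript sketch of the idea (the file layout and card size are assumptions), counting word frequencies and sampling across the frequency range might look like this:

const fs = require("fs");
const path = require("path");

// Count how often each word appears across all lyric files in a directory.
function wordFrequencies(dir) {
  let counts = new Map();
  for (let file of fs.readdirSync(dir)) {
    let text = fs.readFileSync(path.join(dir, file), "utf-8").toLowerCase();
    for (let word of text.match(/[a-z']+/g) || []) {
      counts.set(word, (counts.get(word) || 0) + 1);
    }
  }
  return counts;
}

// Build a card by sampling evenly across the frequency-sorted word list,
// so it mixes very common words with rarer ones.
function bingoCard(counts, size = 25) {
  let sorted = [...counts.keys()].sort((a, b) => counts.get(b) - counts.get(a));
  let step = Math.max(1, Math.floor(sorted.length / size));
  return sorted.filter((_, i) => i % step === 0).slice(0, size);
}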

There’s a Github repo, or if you want to go straight to pre-generated cards for this year, they are here.

Here’s a sample card from the 2014 lyrics:

fell      cause     rising    gonna      rain
world     believe   dancing   hold       once
every     mean      LOVE      something  chance
hey       show      or        passed     say
because   light     hard      home       heart

Have fun :-)

Categorieën: Mozilla-nl planet

Air Mozilla: OuiShare Labs Camp #3

Mozilla planet - mo, 18/05/2015 - 10:00

OuiShare Labs Camp #3: a participative conference dedicated to decentralization, IndieWeb, semantic web and open source community tools.

Categorieën: Mozilla-nl planet

Mozilla Pushes Web Sites To Adopt Encryption - CIO Today

Nieuws verzameld via Google - mo, 18/05/2015 - 06:41

Mozilla Pushes Web Sites To Adopt Encryption
CIO Today
The organization behind the Firefox Web browser wants to see Web site encryption become standard practice, and it has laid out a two-part plan to help that happen. Mozilla said it plans to set a date by which all new features for its browser will be ...
Best 5 Mozilla Firefox Add-ons to Improve your Browsing Experience - CultureMob

alle 2 nieuwsartikelen »
Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 81

Mozilla planet - mo, 18/05/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Send me an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors or omissions in this week's issue, please submit a PR.

What's cooking on master?

273 pull requests were merged in the last two weeks, and 4 RFC PRs.

Now you can follow breaking changes as they happen!

Breaking Changes

Other Changes

New Contributors
  • らいどっと
  • Aaron Gallagher
  • Alexander Polakov
  • Alex Burka
  • Andrei Oprea
  • Andrew Kensler
  • Andrew Straw
  • Ben Gesoff
  • Chris Hellmuth
  • Cole Reynolds
  • Colin Walters
  • David Reid
  • Don Petersen
  • Emilio Cobos Álvarez
  • Franziska Hinkelmann
  • Garming Sam
  • Hika Hibariya
  • Isaac Ge
  • Jan Andersson
  • Jan-Erik Rediger
  • Jannis Redmann
  • Jason Yeo
  • Jeremy Schlatter
  • Johann
  • Johann Hofmann
  • Lee Jeffery
  • leunggamciu
  • Marin Atanasov Nikolov
  • Mário Feroldi
  • Mathieu Rochette
  • Michael Park
  • Michael Wu
  • Michał Czardybon
  • Mike Sampson
  • Nick Platt
  • parir
  • Paul Banks
  • Paul Faria
  • Paul Quint
  • peferron
  • Pete Hunt
  • robertfoss
  • Rob Young
  • Russell Johnston
  • Shmuale Mark
  • Simon Kern
  • Sindre Johansen
  • sumito3478
  • Swaroop C H
  • Tincan
  • Wei-Ming Yang
  • Wilfred Hughes
  • Will Engler
  • Wojciech Ogrodowczyk
  • XuefengWu
  • Z1
Approved RFCs

New RFCs

Betawatch!

The current beta is 1.1.0-beta (cd7d89af9 2015-05-16) (built 2015-05-16).

Notable Links

Project Updates

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

Quote of the Week

"Yes, because laundry eating has evolved to be a specific design goal now; and the initial portions of the planned laundry eating API have been landed behind the #![feature(no_laundry)] gate. no_laundry should become stable in 6-8 weeks, though the more complicated portions, including DRY cleaning, Higher Kinded T-shirts, Opt-in Builtin Detergent, and Rinse Time Optimization will not be stabilized until much later."

"We hope this benefits the Laundry as a Service community immensely."

Manish explains Rust's roadmap for laundry-eating.

Thanks to filsmick for the tip.

And since there were so many quotables in the last two weeks, here's one from Evan Miller's evaluation of Rust:

"Rust is a systems language. I’m not sure what that term means, but it seems to imply some combination of native code compilation, not being Fortran, and making no mention of category theory in the documentation."

Thanks to ruudva for the tip. Submit your quotes for next week!

Categorieën: Mozilla-nl planet

Mark Côté: Project Isolation

Mozilla planet - mo, 18/05/2015 - 04:37

The other day I read about another new Mozilla project that decided to go with GitHub issues instead of our Bugzilla installation (BMO). The author’s arguments make a lot of sense: GitHub issues are much simpler and faster, and if you keep your code in GitHub, you get tighter integration. The author notes that a downside is the inability to file security or confidential bugs, for which Bugzilla has a fine-grained permission system, and that he’d just put those (rare) issues on BMO.

The one downside he doesn’t mention is interdependencies with other Mozilla projects, e.g. the Depends On/Blocks fields. This is where Bugzilla gets into project, product, and perhaps even program management by allowing people to easily track dependency chains, which is invaluable in planning. Many people actually file bugs solely as trackers for a particular feature or project, hanging all the work items and bugs off of it, and sometimes that work crosses product boundaries. There are also a number of tracking flags and fields that managers use to prioritize work and decide which releases to target.

If I had to rebut my own point, I would argue that the projects that use GitHub issues are relatively isolated, and so dependency tracking is not particularly important. Why clutter up and slow down the UI with lots of features that I don’t need for my project? In particular, most of the tracking features are currently used only by, and thus designed for, the Firefox products (aside: this is one reason the new modal UI hides most of these fields by default if they have never been set).

This seems hard to refute, and I certainly wouldn’t want to force an admittedly complex tool on anyone who had much simpler needs. But something still wasn’t sitting right with me, and it took a while to figure out what it was. As usual, it was that a different question was going unasked, leading to unspoken assumptions: why do we have so many isolated projects, and what are we giving up by having such loose (or even no) integration amongst all our work?

Working on projects in isolation is comforting because you don’t have to think about all the other things going on in your organization—in other words, you don’t have to communicate with very many people. A lack of communication, however, leads to several problems:

  • low visibility: what is everyone working on?
  • redundancy: how many times are we solving the same problem?
  • barriers to coordination: how can we become greater than the sum of our parts by delivering inter-related features and products?

By working in isolation, we can’t leverage each other’s strengths and accomplishments. We waste effort and lose great opportunities to deliver amazing things. We know that places like Twitter use monorepos to get some of these benefits, like a single build/test/deploy toolchain and coordination of breaking changes. This is what facilitates architectures like microservices and SOAs. Even if we don’t want to go down those paths, there is still a clear benefit to program management by at least integrating the tracking and planning of all of our various endeavours and directions. We need better organization-wide coordination.

We’re already taking some steps in this direction, like moving Firefox and Cloud Services to one division. But there are many other teams that could benefit from better integration, many teams that are duplicating effort and missing out on chances to work together. It’s a huge effort, but maybe we need to form a team to define a strategy and process—a Strategic Integration Team perhaps?

Categorieën: Mozilla-nl planet

Mike Conley: The Joy of Coding (Ep. 14): More OS X Printing

Mozilla planet - mo, 18/05/2015 - 01:09

In this episode, I kept working on the same bug as last week – proxying the print dialog from the content process on OS X. We actually finished the serialization bit, and started doing deserialization!

Hopefully, next episode we can polish off the deserialization and we’ll be done. Fingers crossed!

Note that this episode was about 2 hours and 10 minutes, but the standard-definition recording up on Air Mozilla only plays for about 13 minutes and 5 seconds. Not too sure what’s going on there – we’ve filed a bug with the people who’ve encoded it. Hopefully, we’ll have the full episode up for standard-definition soon.

In the meantime, if you’d like to watch the whole episode, you can go to the Air Mozilla page and watch it in HD, or you can go to the YouTube mirror.

Episode Agenda

References

Bug 1091112 – Print dialog doesn’t get focus automatically, if e10s is enabled – Notes

Categorieën: Mozilla-nl planet
