Mozilla Nederland

The Dutch Mozilla community

James Long: Transducers.js Round 2 with Benchmarks

Mozilla planet - Sun, 12/10/2014 - 02:00

A few weeks ago I released my transducers library and explained the algorithm behind it. It's a wonderfully simple technique for high-performance transformations like map and filter, and it comes from Clojure (created mostly by Rich Hickey, I think).
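For readers who haven't seen the technique, the core idea fits in a few lines of plain JavaScript. This is an illustrative sketch of the concept only, not transducers.js internals; all names below are made up:

```javascript
// A reducing function: (accumulator, input) -> accumulator
const push = (acc, x) => { acc.push(x); return acc; };

// A transducer transforms a reducing function into another reducing
// function. These are conceptual stand-ins, not the library's code.
const map = f => step => (acc, x) => step(acc, f(x));
const filter = pred => step => (acc, x) => (pred(x) ? step(acc, x) : acc);

// Right-to-left composition, so transformations read in application order.
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);

// One pass over the input, no intermediate array between map and filter.
const xform = compose(map(x => x + 10), filter(x => x % 2 === 0));
const result = [1, 2, 3, 4].reduce(xform(push), []);
// result -> [12, 14]
```

Because the composed step functions call each other directly, each input flows through the whole pipeline before the next one is read, which is why nothing is allocated between transformations.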

Over the past week I've been hard at work polishing and benchmarking it. Today I published version 0.2.0 with a new API and completely refactored internals that make it easy to use and get performance that beats other popular utility libraries.

A Few Benchmarks

Benchmarking is hard, but I think it's worthwhile to post a few benchmarks that back up these claims. All of these were run on the latest version of node (0.10.32). First I wanted to show how transducers devastate other libraries for large arrays. The test performs two maps and two filters. Here is the transducer code:

into([], compose(
  map(function(x) { return x + 10; }),
  map(function(x) { return x * 2; }),
  filter(function(x) { return x % 5 === 0; }),
  filter(function(x) { return x % 2 === 0; })
), arr);

The same transformations were implemented in lodash and underscore, and benchmarked with an arr of various sizes. The graph below shows the time it took to run versus the size of arr, which starts at 500 and goes up to around 500,000. Here's the full benchmark (it outputs Hz so the y-axis is 1/Hz).

Once the array reaches a size of around 90,000, transducers completely blow the competition away. This should be obvious; we never need to allocate anything between transformations, while underscore and lodash always have to allocate an intermediate array.

Laziness would not help here, since we are eagerly evaluating the whole array.

Small Arrays

While it's not as dramatic, even with arrays as small as 1000 you will see performance wins. Here are the same benchmarks, run with sizes of 1000 and 10,000:

_.map/filter (1000) x 22,302 ops/sec ±0.90% (100 runs sampled)
u.map/filter (1000) x 21,290 ops/sec ±0.65% (96 runs sampled)
t.map/filter+transduce (1000) x 26,638 ops/sec ±0.77% (98 runs sampled)
_.map/filter (10000) x 2,277 ops/sec ±0.49% (101 runs sampled)
u.map/filter (10000) x 2,155 ops/sec ±0.77% (99 runs sampled)
t.map/filter+transduce (10000) x 2,832 ops/sec ±0.44% (99 runs sampled)

Take

If you use the take operation to only take, say, 10 items, transducers will only send 10 items through the transformation pipeline. If I ran benchmarks here we would also blow away lodash and underscore, because they do not lazily optimize for take (they transform the whole array first and then run take).

Laziness does buy you this short-circuiting behavior, but we get it without being explicitly lazy.
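That short-circuiting can be sketched without any laziness at all: a sentinel wrapper tells the reducing loop to stop pulling input. This is a hypothetical mini-implementation, not the library's actual internals:

```javascript
// A sentinel wraps the accumulator to signal "stop feeding me input".
// Hypothetical mini-implementation, not transducers.js internals.
const REDUCED = Symbol('reduced');
const isReduced = v => v !== null && typeof v === 'object' && Boolean(v[REDUCED]);

const push = (arr, x) => { arr.push(x); return arr; };
const map = f => step => (acc, x) => step(acc, f(x));

const take = n => step => {
  let left = n;
  return (acc, x) => {
    acc = step(acc, x);
    left -= 1;
    // After n items, wrap the accumulator to signal early termination.
    return left <= 0 ? { [REDUCED]: true, value: acc } : acc;
  };
};

// A reduce that understands the sentinel and stops the input loop early.
function transduce(xform, reducer, init, coll) {
  let acc = init;
  const step = xform(reducer);
  for (const x of coll) {
    acc = step(acc, x);
    if (isReduced(acc)) return acc.value; // short-circuit: stop reading input
  }
  return acc;
}

let calls = 0;
const double = map(x => { calls += 1; return x * 2; });

const big = Array.from({ length: 100000 }, (_, i) => i);
const out = transduce(step => double(take(3)(step)), push, [], big);
// out -> [0, 2, 4]; calls -> 3, not 100000
```

Only three source items ever reach the map step, no matter how large the input array is.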

immutable-js

The immutable-js library is a fantastic collection of immutable data structures. It implements lazy transformations, so you get a lot of perf wins from that. Even so, there is a cost to the laziness machinery. I implemented the same map->map->filter->filter transformation from above in another benchmark which compares it with their transformations. Here is the output with arr sizes of 1000 and 100,000:

Immutable map/filter (1000) x 6,414 ops/sec ±0.95% (99 runs sampled)
transducer map/filter (1000) x 7,119 ops/sec ±1.58% (96 runs sampled)
Immutable map/filter (100000) x 67.77 ops/sec ±0.95% (72 runs sampled)
transducer map/filter (100000) x 79.23 ops/sec ±0.47% (69 runs sampled)

This kind of perf win isn't a huge deal, and their transformations perform well. But we can apply this to any data structure. Did you notice how easy it was to use our library with immutable-js? View the full benchmark here.

Transducers.js Refactored

I just pushed v0.2.0 to npm with all the new APIs and performance improvements. Read more in the new docs.

You may have noticed that Cognitect, where Rich Hickey and other core maintainers of Clojure(Script) work, released their own JavaScript transducers library on Friday. I was a little bummed because I had just spent a lot of time refactoring mine, but I think mine offers a few improvements. Internally, we basically converged on the exact same technique for implementing transducers, so you should find the same performance characteristics described above with their library.

All of the following features are things you can find in my library transducers.js.

My library now offers several integration points for using transducers:

  • seq takes a collection and a transformer and returns a collection of the same type. If you pass it an array, you will get back an array. An iterator will give you back an iterator. For example:
// Filter an array
seq([1, 2, 3], filter(x => x > 1));
// -> [ 2, 3 ]

// Map an object
seq({ foo: 1, bar: 2 }, map(kv => [kv[0], kv[1] + 1]));
// -> { foo: 2, bar: 3 }

// Lazily transform an iterable
function* nums() {
  var i = 1;
  while(true) {
    yield i++;
  }
}
var iter = seq(nums(), compose(map(x => x * 2), filter(x => x > 4)));
iter.next().value; // -> 6
iter.next().value; // -> 8
iter.next().value; // -> 10
  • toArray, toObject, and toIter will take any iterable type and force them into the type that you requested. Each of these can optionally take a transform as the second argument.
// Make an array from an object
toArray({ foo: 1, bar: 2 });
// -> [ [ 'foo', 1 ], [ 'bar', 2 ] ]

// Make an array from an iterable
toArray(nums(), take(3));
// -> [ 1, 2, 3 ]

That's a very quick overview, and you can read more about these in the docs.

Collections as Arguments

All the transformations in transducers.js optionally take a collection as the first argument, so the familiar pattern of map(coll, function(x) { return x + 1; }) still works fine. This is an extremely common use case, so it will be very helpful if you are transitioning from another library. You can also pass a context as the third argument to specify what this should be bound to.
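One plausible way to support both calling conventions is to dispatch on the type of the first argument. This is a hypothetical sketch of the pattern, not the library's actual code:

```javascript
// Dispatch on the first argument: a function means "build a transducer",
// anything else is treated as a collection to transform eagerly.
// Hypothetical sketch; transducers.js does its own dispatch internally.
function map(collOrFn, maybeFn, ctx) {
  if (typeof collOrFn === 'function') {
    const f = collOrFn;
    return step => (acc, x) => step(acc, f(x)); // transducer form
  }
  const f = ctx === undefined ? maybeFn : maybeFn.bind(ctx);
  return collOrFn.map(x => f(x));               // eager collection form
}

const plain = map([1, 2, 3], x => x + 1);
// -> [2, 3, 4]

const withContext = map([1, 2, 3], function (x) {
  return x + this.offset;
}, { offset: 10 });
// -> [11, 12, 13]
```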

Read more about the various ways to use transformations.

Laziness

Transducers remove the requirement of being lazy to optimize for things like take(10). However, it can still be useful to "bind" a collection to a set of transformations and pass it around, without actually evaluating the transformations. It's also useful if you want to apply transformations to a custom data type, get an iterator back, and rebuild another custom data type from it (there is still no intermediate array).

Whenever you apply transformations to an iterator, it does so lazily. It's easy to make array transformations lazy as well; just use the utility function iterator to grab an iterator of the array instead:

seq(iterator([1, 2, 3]),
    compose(
      map(x => x + 1),
      filter(x => x % 2 === 0)))
// -> <Iterator>

The transformations themselves are completely blind to whether the collection they run over is lazy.
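To see how lazy driving can work, here is a self-contained sketch that pulls source items through a transducer on demand using a generator. The helpers and the lazySeq name are illustrative, not the library's implementation:

```javascript
// Drive a transducer from a generator so source items are pulled only on
// demand. Illustrative sketch; not how transducers.js implements seq.
const map = f => step => (acc, x) => step(acc, f(x));
const filter = p => step => (acc, x) => (p(x) ? step(acc, x) : acc);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);

function* lazySeq(iterable, xform) {
  const out = [];
  const step = xform((acc, x) => { acc.push(x); return acc; });
  for (const x of iterable) {
    step(out, x);                          // run one source item through
    while (out.length) yield out.shift();  // emit whatever it produced
  }
}

const iter = lazySeq([1, 2, 3, 4],
                     compose(map(x => x + 1), filter(x => x % 2 === 0)));
const first = iter.next().value;   // -> 2
const second = iter.next().value;  // -> 4
```

Note that the fourth source item is never read unless a third value is requested, which is exactly the on-demand behavior described above.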

The transformer Protocol

Lastly, transducers.js supports a new protocol that I call the transformer protocol. If a custom data structure implements it, not only can we iterate over it in functions like seq, but we can also build up a new instance of that structure. That means seq won't return an iterator; it will return an actual instance.

For example, here's how you would implement it in Immutable.Vector:

var t = require('./transducers');

Immutable.Vector.prototype[t.protocols.transformer] = {
  init: function() { return Immutable.Vector().asMutable(); },
  result: function(vec) { return vec.asImmutable(); },
  step: function(vec, x) { return vec.push(x); }
};

Once you implement the transformer protocol, your data structure will work with all of the built-in functions. You can just use seq as normal and you get back an immutable vector!

t.seq(Immutable.Vector(1, 2, 3, 4, 5),
      t.compose(
        t.map(function(x) { return x + 10; }),
        t.map(function(x) { return x * 2; }),
        t.filter(function(x) { return x % 5 === 0; }),
        t.filter(function(x) { return x % 2 === 0; })));
// -> Vector [ 30 ]
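Under the hood, a seq-like function can drive the init/result/step protocol roughly as below. This is a self-contained sketch with stand-in helpers, not the library's actual code:

```javascript
// Sketch of how a seq-like function can drive the init/result/step
// protocol. Stand-in helpers; not the library's actual implementation.
const map = f => step => (acc, x) => step(acc, f(x));
const filter = p => step => (acc, x) => (p(x) ? step(acc, x) : acc);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);

function seqInto(transformer, xform, iterable) {
  const step = xform((acc, x) => transformer.step(acc, x));
  let acc = transformer.init();    // ask the type for a fresh (mutable) seed
  for (const x of iterable) acc = step(acc, x);
  return transformer.result(acc);  // let the type finalize the instance
}

// A transformer for plain arrays, mirroring the Immutable.Vector example.
const arrayTransformer = {
  init: () => [],
  step: (arr, x) => { arr.push(x); return arr; },
  result: arr => arr,
};

const built = seqInto(
  arrayTransformer,
  compose(map(x => x + 10), filter(x => x % 5 === 0)),
  [1, 2, 3, 4, 5]
);
// built -> [15]
```

Because the transformer supplies init, step, and result, the same transformation code can rebuild an array, an immutable vector, or any other type that implements the protocol.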

I hope you give transducers a try, they are really fun! And unlike Cognitect's project, mine is happy to receive pull requests. :)

Categories: Mozilla-nl planet

Brett Gaylor: From Mozilla to new making

Mozilla planet - Sat, 11/10/2014 - 19:00

Yesterday was my last day as an employee of the Mozilla Foundation. I’m leaving my position as VP, Webmaker to create an interactive web series about privacy and the economy of the web.

I’ve had the privilege of being a “crazy Mofo” for nearly five years. Starting in early 2010, I worked with David Humphrey and researchers at the Center for Development of Open Technology to create Popcorn.js. Having just completed “Rip!”, I was really interested in mashups - and Popcorn was a mashup of open web technology questions (how can we make video as elemental an element of the web as images or links?) and formal questions about documentary (what would a “web native” documentary look like? what can video do on the web that it can’t do on TV?). That mashup is one of the most exciting creative projects I’ve ever been involved with, and it led to a wonderful amount of unexpected innovation and opportunity: an award-winning 3D documentary by a pioneer of web documentaries, the technological basis of a cohort of innovative (and fun) startups, and a kick-ass video creation tool that was part of the DNA of Webmaker.org - which this year reached 200,000 users and facilitated the learning experience of over 127,200 learners face to face at our annual Maker Party.

Thinking about video and the web, and making things that aim to get the best of both mediums, is what brought me to Mozilla - and it’s what’s taking me to my next adventure.

I’m joining my friends at Upian in Paris (remotely, natch) to direct a multi-part web series about privacy, surveillance and the economy of the web. The project is called Do Not Track and it’s supported by the National Film Board of Canada, Arte, Bayerischer Rundfunk (BR), the Tribeca Film Institute and the Centre National du Cinéma. I’m thrilled by the creative challenge and humbled by the company I’ll be keeping - I’ve wanted to work with Upian since their seminal web documentary “Gaza/Sderot” and have been thrilled to watch from the sidelines as they’ve made Prison Valley, Alma, MIT’s Moments of Innovation project, and the impressive amount of work they do for clients in France and around the world. These are some crazy mofos, and they know how to ship.

Fake it Till You Make it

Nobody knows what they’re doing. I can’t stress this enough.

— God (@TheTweetOfGod) September 20, 2014

Mozilla gave me a wonderful gift: to innovate on the web, to dream big, without asking permission to do so. To in fact internalize innovation as a personal responsibility. To hammer into me every day the belief that for the web to remain a public resource, the creativity of everyone needs to be brought to the effort. That those of us in positions of privilege have a responsibility to wake up every day trying to improve the network. It’s a calling that tends to attract really bright people, and it can elicit strong feelings of impostor syndrome in a clueless filmmaker. The gift Mozilla gave me is to witness firsthand that even the most brilliant people, or especially the most brilliant people, are making it up every single day. That’s why the web remains as much an inspiration to me today as when I first touched it as a teenager. Even though smart people criticize Silicon Valley’s hypercapitalism, and while governments are breeding cynicism and mistrust by using the network for surveillance, I still believe the web remains the best place to invent your future.

I’m very excited, and naturally a bit scared, to be making something new again. Prepare yourself - I’m going to make shit up. I’ll need your help.

Working With

source

“Where some people choose software projects in order to solve problems, I have taken to choosing projects that allow me to work with various people. I have given up the comfort of being an expert, and replaced it with a desire to be alongside my friends, or those with whom I would like to be friends, no matter where I find them. My history among this crowd begins with friendships, many of which continue to this day.

This way of working, where collegiality subsumes technology or tools, is central to my personal and professional work. Even looking back over the past two years, most of the work I’ve done is influenced by a deep desire to work with rather than on.” - On Working With Instead of On

David Humphrey, who wrote that, is who I want to be when I grow up. I will miss daily interactions with him, and many others who know who they are, very much. "In the context of working with, technology once again becomes the craft I both teach and am taught, it is what we share with one another, the occasion for our time together, the introduction, but not the reason, for our friendship.”

Thank you, Mozilla, for a wonderful introduction. Till the next thing we make!

Categories: Mozilla-nl planet

Mozilla WebDev Community: Webdev Extravaganza – October 2014

Mozilla planet - Fri, 10/10/2014 - 16:06

Once a month, web developers from across Mozilla don our VR headsets and connect to our private Minecraft server to work together building giant idols of ourselves for the hordes of cows and pigs we raise to worship as gods. While we build, we talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!

You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!

Shipping Celebration

The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.

Phonebook now Launches Dialer App

lonnen shared the exciting news that the Mozilla internal phonebook now launches the dialer app on your phone when you click phone numbers on a mobile device. He also warned that anyone who has a change they want to make to the phonebook app should let him know before he forgets all that he had to learn to get this change out.

Open-source Citizenship

Here we talk about libraries we’re maintaining and what, if anything, we need help with for them.

django-browserid 0.11 is out

I (Osmose) chimed in to share the news that a new version of django-browserid is out. This version brings local assertion verification, support for offline development, support for Django 1.7, and other small fixes. The release is backwards-compatible with 0.10.1, and users on older versions can use the upgrade guide to get up-to-date. You can check out the release notes for more information.

mozUITour Helper Library for Triggering In-Chrome Tours

agibson shared a wrapper around the mozUITour API, which was used in the Australis marketing pages on mozilla.org to trigger highlights for new features within the Firefox user interface from JavaScript running in the web page. More sites are being added to the whitelist, and more features are being added to the API to open up new opportunities for in-chrome tours.

Parsimonious 0.6 (and 0.6.1) is Out!

ErikRose let us know that a new version of Parsimonious is out. Parsimonious is a parsing library written in pure Python, based on formal Parsing Expression Grammars (PEGs). You write a specification for the language you want to parse in a notation similar to EBNF, and Parsimonious does the rest.

The latest version includes support for custom rules, which let you hook in custom Python code for handling cases that are awkward or impossible to describe using PEGs. It also includes a @rule decorator and some convenience methods on the NodeVisitor class that simplify the common case of single-visitor grammars.

contribute.json Wants More Prettiness

peterbe stopped by to show off the design changes on the contribute.json website. There’s more work to be done; if you’re interested in helping out with contribute.json, let him know!

New Hires / Interns / Volunteers / Contributors

Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.

Name        IRC Nick  Role                     Project
Cory Price  ckprice   Web Production Engineer  Various

Roundtable

The Roundtable is the home for discussions that don’t fit anywhere else.

Leeroy was Broken for a Bit

lonnen wanted to let people know that Leeroy, a service that triggers Jenkins test runs for projects on Github pull requests, was broken for a bit due to accidental deletion of the VM that was running the app. But it’s fixed now! Probably.

Webdev Module Updates

lonnen also shared some updates that have happened to the Mozilla Websites modules in the Mozilla Module System:

Static Caching and the State of Persona

peterbe raised a question about the cache timeouts on static assets loaded from Persona by implementing sites. In response, I gave a quick overview of the current state of Persona:

  • Along with callahad, djc has been named as co-maintainer, and the two are currently focusing on simplifying the codebase in order to make contribution easier.
  • A commitment to run the servers for Persona for a minimum period of time is currently working its way through approval, in order to help ease fears that the Persona service will just disappear.
  • Mozilla still has a paid operations employee who manages the Persona service and makes sure it is up and available. Persona is still accepting pull requests and will review, merge, and deploy them when they come in. Don’t be shy, contribute!

The answer to peterbe’s original question was “make a pull request and they’ll merge and push!”.

Graphviz graphs in Sphinx

ErikRose shared sphinx.ext.graphviz, which allows you to write Graphviz code in your documentation and have visual graphs be generated from the code. DXR uses it to render flowcharts illustrating the structure of a DXR plugin.
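For anyone wanting to try it, the setup is small: add 'sphinx.ext.graphviz' to the extensions list in conf.py, and DOT source can then be embedded directly in the documentation. The graph below is an illustrative example, not DXR's actual diagram:

```rst
.. graphviz::

   digraph plugin_flow {
      "source tree" -> "indexer" -> "plugin hooks";
   }
```

Sphinx shells out to the Graphviz dot binary at build time, so Graphviz needs to be installed on the machine building the docs.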

Turns out that building giant statues out of TNT was a bad idea. On the bright side, we won’t be running out of pork or beef any time soon.

If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the dev-webdev@lists.mozilla.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!

See you next month!


Categories: Mozilla-nl planet

Carsten Book: Mozilla Plugincheck – It’s a Community Thing

Mozilla planet - Fri, 10/10/2014 - 12:57

Hi,

A lot of people are using Mozilla’s Plugincheck Page to make sure all the Plugins like Adobe Flash are up-to-date.

Schalk Neethling has created a great blog post about Plugincheck here.

So if you are interested in contributing to Plugincheck, check out Schalk’s blog post!

Thanks!


– Tomcat

Categories: Mozilla-nl planet

Carsten Book: The Past, Current and Future

Mozilla planet - Fri, 10/10/2014 - 12:34

- The past -
I’ve been a member of the A-Team (Automation and Tools Team) for about a year now, and I’m also a full-time sheriff.

It came at the end of a lot of changes, both personal and job-wise. I moved from the Alps to the city of Freising in the Munich area (and NO, I was not at the Oktoberfest ;) ), and started working as a full-time sheriff after my QA/Partner Build and Releng duties.

It’s awesome to be part of the sheriff team, and it was also awesome to get so much help from the team, like from Ed, Wes and Ryan, to get started.

At one point I took over sheriff duty for my European timezone, and it was quite challenging having the responsibility for all the code trees, with backouts etc. and, later, checkin-neededs :) What I really like as a sheriff is the work across the different divisions at Mozilla; it’s exciting to work as a Mozilla sheriff :)

One of my main contributions besides getting started was helping to create some how-to articles at https://wiki.mozilla.org/Sheriffing/How:To . I hope these will also help others get involved in sheriffing.

- Current -

We just switched over to Treeherder as the new tool for sheriffing.

It’s quite new and it still feels that way (as with everything new, like a new car, there are these questions of how to do the things you used to do in your old car), and there are still some bugs and things we can improve, but we will get there. It’s also an ideal time to get involved in sheriffing, for example by hammering on Treeherder.

And that leads into… :)

- The Future is You! -

As with every open source project, Mozilla depends heavily on community members like you, and besides all the other areas at Mozilla there is even the opportunity to work as a community sheriff. So let us know if you want to get involved as a community sheriff. You are always welcome. You can find us in the #ateam channel on irc.mozilla.org.

For myself, besides my other tasks, I’m planning to work more on community building, like blogging about sheriffing and taking more part in the open source meetings in Munich.

– Tomcat

Categories: Mozilla-nl planet

Mozilla Reps Community: Council Elections – Campaign and candidates

Mozilla planet - Fri, 10/10/2014 - 11:59

We’re excited to announce that we have 7 candidates for the fourth cycle of the Council elections, scheduled for October 18th. The Council has carefully reviewed the candidates and agrees that they are all extremely strong candidates to represent the Mozilla Reps program and the interests of Reps.

The candidates are:

Now, it is up to Reps to elect the candidates to fill the four available seats for a 12-month term.

As detailed in the wiki, we are now entering the “campaign” phase of this election cycle. This means that for the next 7 days, candidates will all have an opportunity to communicate their agenda, plans, achievements as a Rep/Mentor, and personal strengths to the Mozilla Reps voting body. They are encouraged to use their personal Mozilla Reps profile page, their personal website/blog, a Mozilla wiki page, or any other channel that they see fit to post information regarding their candidacy.

To guide them in this effort, the Council has prepared 6 questions that each candidate is asked to answer. We had originally wanted to have candidates go through mozmoderator, but due to lack of time we will do this next election cycle. The questions are the following:

  • What are the top three issues that you would want the Council to address were you to join the Council?
  • What is in your view the Mozilla Reps program’s biggest strength and weakness?
  • Identify something that is currently not working well in the Mozilla Reps program that you think could be easy to fix.
  • What past achievement as a Rep or Mentor are you most proud of?
  • What are the specific qualities and skills that you have that you think will help you be an effective Council member?
  • As a Mentor, what do you do to try to encourage your inactive Mentees to be active again?

In the spirit of innovation and to help bring a human face to the election process, the Council would like to add a new element to the campaign: video. This video is optional, but we strongly encourage candidates to create one.

That’s it for now. As always, if you have any questions, please don’t hesitate to ask the Council at reps-council at mozilla dot com.

We’ll be giving regular election updates throughout these next two weeks, so stay tuned!

And remember, campaigning ends and voting starts on October 18th!

Comments on discourse.

Categories: Mozilla-nl planet

Daniel Stenberg: internal timers and timeouts of libcurl

Mozilla planet - Fri, 10/10/2014 - 08:29

Bear with me. It is time to take a deep dive into the libcurl internals and see how it handles timeouts and timers. This is meant as useful information for libcurl users, but even more as insight for people who’d like to fiddle with libcurl internals and work on its source code and architecture.

socket activity or timeout

Everything internally in libcurl is using the multi, asynchronous, interface. We avoid blocking calls as far as we can. This means that libcurl always either waits for activity on a socket/file descriptor or for the time to come to do something. If there’s no socket activity and no timeout, there’s nothing to do and it just returns back out.

It is important to remember here that the API for libcurl doesn’t force the user to call it again within or at the specific time and it also allows users to call it again “too soon” if they like. Some users will even busy-loop like crazy and keep hammering the API like a machine-gun and we must deal with that. So, the timeouts are mostly to be considered advisory.

many timeouts

A single transfer can have multiple timeouts. For example, one maximum time for the entire transfer, one for the connection phase, and perhaps even more timers that handle things like speed caps (which make libcurl not transfer data faster than a set limit) or detecting transfer speeds below a certain threshold within a given time period.

A single transfer is done with a single easy handle, which holds all of its timeouts in a sorted list. This allows libcurl to return a single time left until the nearest timeout expires, without having to bother with the rest of the timeouts (yet).
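In JavaScript-flavored pseudocode (libcurl itself is C; the class and method names here are illustrative only), the per-handle bookkeeping looks roughly like this:

```javascript
// Per-easy-handle timeout bookkeeping: a list kept sorted ascending so
// the nearest expiry is always at the front. Illustrative sketch only;
// the real implementation is C inside libcurl.
class EasyHandle {
  constructor() {
    this.timeouts = []; // absolute expiry times, sorted ascending
  }

  // Add an entry, keeping the list ordered.
  expire(now, ms) {
    const when = now + ms;
    const i = this.timeouts.findIndex(t => t > when);
    if (i === -1) this.timeouts.push(when);
    else this.timeouts.splice(i, 0, when);
  }

  // Time left until the nearest timeout, ignoring the rest (yet).
  timeLeft(now) {
    return this.timeouts.length ? this.timeouts[0] - now : null;
  }
}

const h = new EasyHandle();
h.expire(0, 300); // e.g. an overall transfer timeout
h.expire(0, 100); // e.g. a connect timeout
h.timeLeft(0);    // -> 100: only the nearest timeout matters right now
```

Keeping the list sorted means answering “how long until I need to be called again?” is just a peek at the front of the list.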

Curl_expire()

… is the internal function to set a timeout to expire a certain number of milliseconds into the future. It adds a timeout entry to the list of timeouts. Expiring a timeout just means that it’ll signal the application to call libcurl again. Internally we don’t have any identifiers for the timeouts; they’re just times in the future at which we ask to be called again. If the code needs that specific time to really have passed before doing something, the code needs to make sure the time has elapsed.

Curl_expire_latest()

A newcomer to the timeout team. I figured out we need this function for the case where we are in a state where we need to be called no later than a certain specific future time. It will not add a new timeout entry to the list if there is already a timeout that expires earlier than the specified time limit.

This function is useful for example when there’s a state in libcurl that varies over time but has no specific time limit to check for. Like transfer speed limits and the like. If Curl_expire() is used in this situation instead of Curl_expire_latest() it would mean adding a new timeout entry every time, and for the busy-loop API usage cases it could mean adding an excessive amount of timeout entries. (And there was a scary bug reported that got “tens of thousands of entries” which motivated this function to get added.)
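The difference between the two calls can be shown with a tiny sketch (illustrative JavaScript with made-up names; the real Curl_expire()/Curl_expire_latest() live in libcurl’s C source):

```javascript
// Contrast of the two behaviors with made-up names; the real functions
// live in libcurl's C source.
const timeouts = []; // absolute expiry times, sorted ascending

function expire(when) {
  const i = timeouts.findIndex(t => t > when);
  if (i === -1) timeouts.push(when);
  else timeouts.splice(i, 0, when);
}

function expireLatest(when) {
  // Skip the insert when something already expires earlier: we are
  // guaranteed to be woken up no later than `when` anyway.
  if (timeouts.length && timeouts[0] <= when) return;
  expire(when);
}

expire(500);       // -> [500]
expireLatest(200); // added: nothing expires before 200 -> [200, 500]
expireLatest(300); // skipped: 200 already expires earlier
expire(300);       // always added, even if redundant -> [200, 300, 500]
```

This is why expireLatest avoids the list growing by one entry on every call in busy-loop scenarios, while expire does not.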

timeout removals

We don’t remove timeouts from the list until they expire. For example, if we have a condition that is timing-dependent, we set a timeout with Curl_expire() and we know we should be called again at the end of that time.

If we wouldn’t add the timeout and there’s no socket activity on the socket then we may not be called again – ever.

When an internal state transitions into something else and we therefore don’t need a previously set timeout anymore, we have no handle or identifier for the timeout, so it cannot be removed. Instead, we get called again when the timeout triggers even though we didn’t really need it any longer. As our API allows this anyway, it is already handled by the logic, and getting called an extra time is usually very cheap and not considered a problem worth addressing.

Timeouts are removed automatically from the list of timers when they expire. Timeouts whose time has passed are removed from the list, and the timers that follow move to the front of the queue and are used to calculate how long until the next timeout.

The only internal API to remove timeouts that we have removes all timeouts, used when cleaning up a handle.

many easy handles

I’ve mentioned above how each easy handle treats its timeouts. With the multi interface, we can have any number of easy handles added to a single multi handle. This means one list of timeouts for each easy handle.

To handle many thousands of easy handles added to the same multi handle, each with its own timeouts (as each easy handle only exposes its closest timeout), it builds a splay tree of easy handles sorted on the timeout time. It is a splay tree rather than a sorted list to allow really fast insertions and removals.

As soon as a timeout expires from one of the easy handles and it moves to the next timeout in its list, it means removing one node (easy handle) from the splay tree and inserting it again with the new timeout timer.

Categories: Mozilla-nl planet

Raniere Silva: MathML October Meeting

Mozilla planet - Fri, 10/10/2014 - 05:00
MathML October Meeting

This is a report about the Mozilla MathML October Meeting (see the announcement). The topics of the meeting can be found in this PAD (local copy of the PAD). The meeting happened entirely on appear.in, and because of that we don’t have a log.

The next meeting will be on November 14th (note that November 14th is a Friday). Some countries will move to winter time and others to summer time, so we will change the time and announce it later on mozilla.dev.tech.mathml. Please add topics to the PAD.

Read more...

Categories: Mozilla-nl planet

Erik Vold: What is the Jetpack/Add-on SDK?

Mozilla planet - Fri, 10/10/2014 - 02:00

There are many opinions on this, and I think I’ve heard them all, but no one has worked on this project for as long as I have, so I’d like to write what I think the Jetpack/Add-on SDK is.

Originally the Jetpack prototype was developed as a means to make add-on development easier for web developers. I say this because it was both the impression that I got and one of the bullet points Aza Raskin listed for me in an email he sent asking me to be a project ambassador. This was very appealing to me at the time because I had no idea how to write add-ons back then. The prototype, however, provided chrome access from the beginning, which is basically the ability to do almost anything that you want with the browser and the system it runs on. So to my mind the Jetpack prototype was an on-ramp to add-on and Firefox development, because it also did not have the same power that add-ons had; it had a subset of their abilities.

When Jetpack graduated from being a prototype it was renamed to the Add-on SDK, and it included the seeds of something that was lacking in add-on development, sharable modules. These modules could be written using the new tech at the time, CommonJS, which is now widely used and commonplace. The reason for this as I understood it was both to make add-on development easier, and to make reviewing add-ons easier (because each version of a module would only need to be reviewed once). When I started writing old school add-ons I quickly saw the value in the first, and later when I became an AMO reviewer the deep value of the latter was also quickly apparent.

In order to make module development decentralized it was important to provide chrome access to those modules that need it, otherwise all of the SDK features would have to be developed and approved in-house by staffers, as is done with Google Chrome, which would not only hamper creativity, but also defeat the purpose for having a module system. This is our advantage over Google Chrome, not our weakness.

To summarize I feel that the Jetpack/Add-on SDK is this:

  1. An on-ramp to extension and Firefox development for web devs, with a shallow learning curve.
  2. A means for sharing code/modules, which reduces review time.
  3. A quicker way to develop add-ons than previous methods, because there is less to learn (see a chrome.manifest or bootstrap.js file if you have doubts).
  4. A means for testing both add-ons and the browser itself (possibly the easiest way to write tests for add-ons and Firefox when used in combination with point 2).
  5. A more reliable way to write extensions than previous methods: because the platform code changes so much, the module system (point 2) can provide an abstraction layer that lets developers blissfully ignore platform changes, which reinforces point 3.
Categorieën: Mozilla-nl planet

Ben Kero: September ’14 Mercurial Code Sprint

Mozilla planet - do, 09/10/2014 - 23:15

A week ago I was fortunate enough to attend the latest code sprint of the Mercurial project. This was my second sprint with this project, and I took away quite a bit from the meeting. Around 20 people attended, working as one large group with smaller groups splitting off intermittently to discuss particular topics. I had seen a few of the attendees before at a previous sprint I attended.

Joining me at the sprint were two of my colleagues Gregory Szorc (gps) and Mike Hommey (glandium). They took part in some of the serious discussions about core bugfixes and features that will help Mozilla scale its use of Mercurial. Impressively, glandium had only been working on the project for mere weeks, but was able to make serious contributions to the bundle2 format (an upcoming feature of Mercurial). Specifically, we talked to Mercurial developers about some of the difficulties and bugs we’ve encountered with Mozilla’s “try” repository due to the “tens of thousands of heads” and the events that cause a serving request to spin forever.

By trade I’m a sysadmin/DevOps person, but I also do have a coder hat that I don from time to time. Still, the sprint was full of serious coders who seemingly worked on Mercurial full-time. There were attendees with big-name employers, some of whom would probably prefer that I didn’t reveal their identities here.

Unfortunately, due to my lack of familiarity with a lot of the deep-down internals, I was unable to contribute to some of the discussions. It was primarily a learning experience for me, both about the process by which direction-driving decisions are made for the project (mpm’s BDFL status) and about all of the considerations that go into choosing a particular method to implement an idea.

That’s not to say I was entirely useless. My knowledge of systems and package management meant I was able to collaborate with another developer (kiilerix) to improve the Docker package building support, including preliminary work for building (un)official Debian packages for the first time.

I also learned about some infrequently used features and tips for Mercurial. For example, folks who come from a git background often complain about Mercurial’s lack of interactive rebase functionality. The “histedit” extension provides this feature. Like many other features of Mercurial, it is technically “in core” but not enabled by default. Adding a line such as “histedit =” to the “[extensions]” section of your “hgrc” file enables it. It allows all the expected picking, folding, dropping, editing, and modifying of commit messages.
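Enabling it is a one-line change; a minimal hgrc sketch:

```ini
# Enable the bundled histedit extension (it ships with Mercurial core,
# so no value is needed after the "=")
[extensions]
histedit =
```

After this, `hg histedit` is available and behaves much like git’s interactive rebase.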

Changeset evolution is another feature that’s been coming for a long time. It enables developers to safely modify history and be able to propagate those changes to any down/upstream clones. It’s still disabled by default, but is available as an extension. Gregory Szorc, a colleague of mine, has written about it before. If you’re curious you can read more about it here.

One of the features I’m most looking forward to is sparse checkouts. Imagine, à la Perforce, being able to check out only a subtree or subtrees of a repository using ‘--include subdir1/’ and ‘--exclude subdir2/’ arguments during cloning/updating. This is what sparse checkouts will allow. Additionally, functionality is being planned to enable saved ‘profiles’ of subdirs for different uses. For instance, specifying the ‘--enable-profile mobile’ argument will apply a saved list of included and excluded items. This seems like a really powerful way of building lightweight build profiles for each different type of build we do. Unfortunately, to be properly implemented it is waiting on some other code to be developed, such as sharded manifests.

One last thing I’d like to tell you about is an upcoming free-software project for Mercurial hosting named Kallithea. It was born from the liberated code of the RhodeCode project. It is still in its infancy (version 0.1 as of the writing of this post), but it has some attractive features for viewing repositories, such as visualizations of changelog graphs, diffs, code reviews, a built-in editor, LDAP support, and even a JSON-RPC API for issue-tracker integration.

In all I feel it was a valuable experience for me to attend that benefited both the Mercurial project and myself. I was able to lend some of my knowledge about building packages and familiarity with operations of large-scale hgweb serving, and was able to learn a lot about the internals of Mercurial and understand that even the deep core code of the project isn’t very scary.

I’m very thankful for my ability to attend and look forward to attending the next Sprint in the following year.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Spotlight on the ACLU: A Ford-Mozilla Open Web Fellow Host

Mozilla planet - do, 09/10/2014 - 20:30

{The Ford-Mozilla Open Web Fellows applications are now open. To shed light on the fellowship, we will be featuring posts from the 2015 Host Organizations. Today’s post comes from Kade Crockford, the Director of the Technology for Liberty program at the ACLU of Massachusetts. We are so excited to have the ACLU as a host organization. It has a rich history of defending civil liberties, and has been on the forefront of defending Edward Snowden following his revelations of the NSA surveillance activities. The Ford-Mozilla Open Web fellow at the ACLU of Massachusetts will have a big impact in protecting Internet freedom.}

Spotlight on the ACLU: A Ford-Mozilla Open Web Fellow Host
By Kade Crockford, Director of Technology for Liberty, ACLU of Massachusetts

Intellectual freedom, the right to criticize the government, and freedom of association are fundamental characteristics of a democratic society. Dragnet surveillance threatens them all. Today, the technologies that provide us access to the world’s knowledge are mostly built to enable a kind of omnipotent tracking human history has never before seen. The law mostly works in favor of the spies and data-hoarders, instead of the people. We are at a critical moment as the digital age unfolds: Will we rebuild and protect an open and free Internet to ensure the possibility of democracy for future generations?

We need your help at the ACLU of Massachusetts to make sure we, as a society, answer that question in the affirmative.


The ACLU is the oldest civil rights and civil liberties organization in the U.S. It was founded in 1920 in the wake of the imprisonment of anti-World War I activists for distributing anti-war literature, and in the midst of widespread government censorship of materials deemed obscene, radical or insufficiently patriotic. In 1917, the U.S. Congress had passed the Espionage Act, making it a crime to interfere with military recruitment. A blatantly unconstitutional “Sedition Act” followed in 1918, making illegal the printing or utterance of anything “disloyal…scurrilous, or abusive” about the United States government. People like Rose Pastor Stokes were subsequently imprisoned for long terms for innocuous activity such as writing letters to the editor critical of US policy. In 1923, muckraking journalist Upton Sinclair was arrested simply for reading the text of the First Amendment at a union rally. Today, thanks to almost one hundred years of effective activism and impact litigation, people would be shocked if police arrested dissidents for writing antiwar letters to the editor.

But now we face an even greater threat: our primary means of communication, organization, and media—the Internet—is threatened by pervasive, dragnet surveillance. The Internet has opened up the world’s knowledge to anyone with a connection, but it has also put us under the microscope like never before. The stakes couldn’t be higher.

That’s why the ACLU—well versed in the Bill of Rights, constitutional precedent, community organizing, advocacy, and public education—needs your help. If we want to live in an open society, we must roll back corporate and government electronic tracking and monitoring, and pass on a free Internet to our children and theirs. We can’t do it without committed technologists who understand systems and code. Democracy requires participation and agitation; today, it also requires freedom fighters with computer science degrees.

Apply to become a Ford-Mozilla Open Web Fellow at the ACLU of Massachusetts if you want to put your technical skills to work on a nationally-networked team made up of the best lawyers, advocates, and educators. Join us as we work to build a free future. There’s much to be done, and we can’t wait for you to get involved.

After all, Internet freedom can’t protect itself.

Apply to be a 2015 Ford-Mozilla Open Web Fellow. Visit www.mozilla.org/advocacy.

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Reps Weekly Call – October 9th 2014

Mozilla planet - do, 09/10/2014 - 19:50

Last Thursday we had our regular weekly call about the Reps program where we talk about what’s going on in the program and what Reps have been doing during the last week.


Summary

Detailed notes

AirMozilla video

https://air.mozilla.org/reps-weekly-20141009/

Don’t forget to comment about this call on Discourse and we hope to see you next week!

Categorieën: Mozilla-nl planet

Julien Vehent: Automated configuration analysis for Mozilla's TLS guidelines

Mozilla planet - do, 09/10/2014 - 17:24

Last week, we updated Mozilla's Server Side TLS guidelines to add a third recommended configuration. Each configuration maps to a target compatibility level:

  1. Old supports Windows XP pre-SP2 with IE6 and IE7. Those clients do not support AES ciphers, and for them we need to maintain a configuration that accepts 3DES, SSLv3 and SHA-1 certificates.
  2. Intermediate is the new default, and supports clients from Firefox 1 until now. Unlike the old configuration, SSLv3, 3DES and SHA-1 are disabled. We also recommend using a Diffie-Hellman parameter of 2048 bits when PFS DHE ciphers are in use (note that Java 6 fails with a DH param > 1024 bits; use the old configuration if you need Java 6 compatibility).
  3. Modern is what we would really love to enable everywhere, but is not yet supported by enough clients. This configuration only accepts PFS ciphers and TLSv1.1+. Unfortunately, clients older than Firefox 27 will fail to negotiate this configuration, so we reserve it for services that do not need backward compatibility before FF27 (webrtc, sync1.5, ...).
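The three levels boil down to data about what a configuration may offer. A rough sketch of that idea in Python (the names, protocol sets, and checks here are illustrative, not analyze.py's actual structures):

```python
# Hypothetical sketch: each level as a minimum protocol version plus flags.
# These structures are illustrative, not analyze.py's real internals.
LEVELS = {
    "old":          {"min_proto": "SSLv3",   "allows_3des": True},
    "intermediate": {"min_proto": "TLSv1",   "allows_3des": False},
    "modern":       {"min_proto": "TLSv1.1", "allows_3des": False},
}

PROTO_ORDER = ["SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"]

def meets_level(scanned_protos, uses_3des, level):
    """Crude check of a scan result against a target level."""
    spec = LEVELS[level]
    oldest = min(scanned_protos, key=PROTO_ORDER.index)
    if PROTO_ORDER.index(oldest) < PROTO_ORDER.index(spec["min_proto"]):
        return False  # offers a protocol older than the level permits
    if uses_3des and not spec["allows_3des"]:
        return False
    return True

print(meets_level(["TLSv1", "TLSv1.1", "TLSv1.2"], False, "intermediate"))  # True
print(meets_level(["TLSv1", "TLSv1.1", "TLSv1.2"], False, "modern"))        # False: TLSv1 still offered
```

The real tool also inspects cipher suites, certificate signatures, and OCSP stapling, but the shape of the check is the same.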

Three recommended configurations means more choice, but also more work to evaluate a given endpoint. To help with the analysis of real-world TLS setups, we rely on cipherscan, a wrapper around openssl s_client that quickly pulls the TLS configuration from a target. I wrote the initial version of cipherscan last year, and I'm very happy to see it grow with major contributions from Hubert Kario (Red Hat) and a handful of other people.

Today I'm releasing an extension to cipherscan that evaluates a scan result against our guidelines. By running it against a target, it will tell you what the current configuration level is, and what should be changed to reach the next level.

$ ./analyze.py -t jve.linuxwall.info
jve.linuxwall.info:443 has intermediate tls

Changes needed to match the old level:
* consider enabling SSLv3
* add cipher DES-CBC3-SHA
* use a certificate with sha1WithRSAEncryption signature
* consider enabling OCSP Stapling

Changes needed to match the intermediate level:
* consider enabling OCSP Stapling

Changes needed to match the modern level:
* remove cipher AES128-GCM-SHA256
* remove cipher AES256-GCM-SHA384
* remove cipher AES128-SHA256
* remove cipher AES128-SHA
* remove cipher AES256-SHA256
* remove cipher AES256-SHA
* disable TLSv1
* consider enabling OCSP Stapling

The analysis above evaluates my blog. I'm aiming for intermediate level here, and it appears that I reach it. I could improve further by enabling OCSP Stapling, but that's not a hard requirement.

If I wanted to reach modern compatibility, I would need to remove a few ciphers that are not PFS, disable TLSv1 and, again, enable OCSP Stapling. I would probably want to update my ciphersuite to the one proposed on Server Side TLS #Modern compatibility.
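The per-level "remove cipher" lists amount to a set difference between what the server offers and what the target level permits. A toy illustration (the cipher names come from the output above; the logic is a simplification of what analyze.py actually does):

```python
# Simplified view of the "remove cipher" recommendations: anything the
# server offers that the target level's whitelist doesn't contain.
offered = {"ECDHE-RSA-AES128-GCM-SHA256", "AES128-GCM-SHA256", "AES128-SHA"}
modern_allowed = {"ECDHE-RSA-AES128-GCM-SHA256", "ECDHE-RSA-AES256-GCM-SHA384"}

to_remove = sorted(offered - modern_allowed)
for cipher in to_remove:
    print("* remove cipher " + cipher)
```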

Looking at another site, twitter.com, the script returns "bad ssl". This is because Twitter still accepts RC4 ciphers, and in the opinion of analyze.py, that is a bad thing to do. We really don't trust RC4 anymore.

$ ./analyze.py -t twitter.com
twitter.com:443 has bad ssl

Changes needed to match the old level:
* remove cipher ECDHE-RSA-RC4-SHA
* remove cipher RC4-SHA
* remove cipher RC4-MD5
* use a certificate with sha1WithRSAEncryption signature
* consider enabling OCSP Stapling

Changes needed to match the intermediate level:
* remove cipher ECDHE-RSA-RC4-SHA
* remove cipher RC4-SHA
* remove cipher RC4-MD5
* remove cipher DES-CBC3-SHA
* disable SSLv3
* consider enabling OCSP Stapling

Changes needed to match the modern level:
* remove cipher ECDHE-RSA-RC4-SHA
* remove cipher AES128-GCM-SHA256
* remove cipher AES128-SHA
* remove cipher RC4-SHA
* remove cipher RC4-MD5
* remove cipher AES256-SHA
* remove cipher DES-CBC3-SHA
* disable TLSv1
* disable SSLv3
* consider enabling OCSP Stapling

The goal of analyze.py is to help operators define a security level for their site, and use this script to verify their configuration. If you want to check compatibility with a target level, you can use the -l flag to specify the level you want:

$ ./analyze.py -t stooge.mozillalabs.com -l modern
stooge.mozillalabs.com:443 has modern tls

Changes needed to match the modern level:
* consider enabling OCSP Stapling

Our guidelines are opinionated, and you could very well disagree with some of the recommendations. The discussion is open on the Talk section of the wiki page, I'm always happy to discuss them, and make them helpful to as many people as possible.

You can get cipherscan and analyze.py from the github repository at https://github.com/jvehent/cipherscan.

Categorieën: Mozilla-nl planet

Michael Kaply: Firefox 24 ESR EOL

Mozilla planet - do, 09/10/2014 - 15:45

I just want to take a moment to remind everyone that the Firefox 24 ESR will be officially replaced by the Firefox 31 ESR this coming Tuesday, October 14, 2014. At that time, the Firefox 24 ESR will be unsupported. Firefox 24 ESR users will be automatically upgraded to the Firefox 31 ESR.

I would hope by now everyone has tested with the Firefox 31 ESR, but if you haven't, it might be time to start.

The CCK2 has been fully updated to work with Firefox 31 and beyond.

On another note, there are major packaging changes coming to Firefox on Mac due to changes to the way applications are signed. You can read more about it in this bug. This will primarily impact the locations of autoconfig files, preferences and the distribution directory. I'll try to find some time soon to document these changes.

Categorieën: Mozilla-nl planet

Mozilla Release Management Team: Firefox 33 beta9 to RC

Mozilla planet - do, 09/10/2014 - 13:58

  • 13 changesets
  • 30 files changed
  • 215 insertions
  • 157 deletions

Extensions (occurrences): cpp 6, h 4, ini 2, sh 1, nsh 1, mm 1, html 1, hgtags 1

Modules (occurrences): mobile 12, content 5, browser 4, gfx 3, widget 2, toolkit 1, testing 1, js 1

List of changesets:

  • JW Wang: Bug 994292 - Call SpecialPowers.pushPermissions() to ensure permission change is completed before continuing the rest of the tests. r=baku, a=test-only - 16bd77984527
  • Ryan VanderMeulen: Bug 1025040 - Disable test_single_finger_desktop.py on Windows for frequent failures. a=test-only - 3d1029947008
  • J. Ryan Stinnett: Bug 989168 - Disable browser_manifest_editor. r=jryans, a=test-only - 7fefb97d2f75
  • Jim Mathies: Bug 1068189 - Force disable browser.tabs.remote.autostart in non-nightly builds. r=felipe, a=sledru - 5217e39df54c
  • Jim Mathies: Bug 1068189 - Take into account 'layers.offmainthreadcomposition.testing.enabled' settings when disabling remote tabs. r=billm, a=sledru - 7b2887bd78a0
  • Randell Jesup: Bug 1077274 - Clean up tracklists. r=jib, a=dveditz - f0253d7268bb
  • Kan-Ru Chen (陳侃如): Bug 942411 - Use SpecialPowers.pushPermissions to make sure the permission is set before test run. r=smaug, a=test-only - bbf1c4e2ddce
  • Brian Bondy: Bug 1049521 - Only register for types when there is no default in either of HKLM or HKCU and fix users affected by bad file associations. r=rstrong, a=sledru - d126cd83b4b8
  • Jon Coppeard: Bug 1061214. r=terrence, a=sledru - bbc35ec2c90e
  • Nicolas Silva: Bug 1074378 - Blocklist driver Intel GMAX4500HD v 8,15,10,1749. r=Bas, a=sledru - e8360a0c7d74
  • Ralph Giles: Bug 772347 - Back out MacOS X video wakelock. a=sledru - df37248fafcb
  • Nicolas Silva: Bug 1076825 - Don't crash release builds if allocating the buffer on white failed in RotatedBuffer.cpp. r=Bas, a=sledru - d89ec5b69c01
  • Nicolas Silva: Bug 1044975 - Don't crash if mapping D3D11 shader constant buffers fails. r=Bas, a=sledru - 9bf2a5b5162d

Categorieën: Mozilla-nl planet

Doug Belshaw: Survey: 5 proposals for Web Literacy Map 2.0

Mozilla planet - do, 09/10/2014 - 11:13

We’re currently working on a v2.0 of Mozilla’s Web Literacy Map. From the 38 interviews with stakeholders and community members I’ve identified 21 emerging themes for Web Literacy Map 2.0 as well as some ideas for Webmaker. The canonical home for everything relating to this update can now be found on the Mozilla wiki.


While there are some decisions that need to be made by paid contributors / staff (e.g. design, combining competencies, wording of skills) there are some that should be made by the wider community. I’ve come up with five proposals in this survey:

http://goo.gl/forms/LKNSNrXCnu

The five proposals are:

  1. I believe the Web Literacy Map should explicitly reference the Mozilla manifesto.
  2. I believe the three strands should be renamed ‘Reading’, ‘Writing’ and ‘Participating’.
  3. I believe the Web Literacy Map should look more like a ‘map’.
  4. I believe that concepts such as ‘Mobile’, ‘Identity’, and ‘Protecting’ should be represented as cross-cutting themes in the Web Literacy Map.
  5. I believe a ‘remix’ button should allow me to remix the Web Literacy Map for my community and context.

Please do take the time to fill in the survey. Any meta feedback should go to @dajbelshaw / doug@mozillafoundation.org.

Categorieën: Mozilla-nl planet

Daniel Stenberg: Coverity scan defect density: 0.00

Mozilla planet - do, 09/10/2014 - 09:14

A couple of days ago I decided to stop slacking and grab this long dangling item in my TODO list: run the coverity scan on a recent curl build again.

Among the static analyzers, Coverity does in fact stand out as the very best one I can use. We run clang-analyzer against curl every night and it hasn't reported any problems at all in a while. This time I got almost 50 new issues reported by Coverity.

To put it shortly, a little less than half of them were things done on purpose: for example, we got several reports on ignored return codes that we really don't care about, and there were several reports on dead code that is conditionally built for platforms other than the one I ran this on.

But there were a whole range of legitimate issues. Nothing really major popped up but a range of tiny flaws that were good to polish away and smooth out. Clearly this is an exercise worth repeating every now and then.

End result

21 new curl commits that mention Coverity. Coverity now says “defect density: 0.00” for curl and libcurl since it doesn’t report any more flaws. (That’s the number of flaws found per thousand lines of source code.)
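That metric is simply defects per thousand lines of code; a quick illustration (the line count below is made up, as curl's actual size isn't stated here):

```python
# Defect density = reported defects per 1000 lines of source code.
# The 110,000-line figure is invented purely for illustration.
def defect_density(defects, lines_of_code):
    return defects / (lines_of_code / 1000.0)

print(defect_density(0, 110000))   # 0.0 -- no outstanding defects
print(defect_density(47, 110000))  # roughly 0.43 on a hypothetical 110k-line codebase
```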

Want to see?

I can’t seem to make all the issues publicly accessible, but if you do want to check them out in person just click over to the curl project page at coverity and “request more access” and I’ll grant you view access, no questions asked.

Categorieën: Mozilla-nl planet

Firefox OS Shows Continued Global Growth

Mozilla Blog - do, 09/10/2014 - 09:10
Firefox OS is now available on three continents with 12 smartphones offered by 13 operators in 24 countries. As the only truly open mobile operating system, Firefox OS demonstrates the versatility of the Web as a platform, free of the … Continue reading
Categorieën: Mozilla-nl planet
