
The Mozilla Blog: Forging an Alliance for Royalty-Free Video

Mozilla planet - ti, 01/09/2015 - 17:09

Things are moving fast for royalty-free video codecs. A month ago, the IETF NETVC Working Group had its first meeting and two weeks ago Cisco announced Thor. Today, we’re taking the next big step in this industry-wide effort with the formation of the Alliance for Open Media. Its founding members represent some of the biggest names in online video, such as Netflix, Amazon, and YouTube, multiple browser vendors including Mozilla, Microsoft, and Google, and key technology providers like Cisco and Intel. The Alliance has come together to share technology and run the kind of patent analysis necessary to build a next-generation royalty-free video codec.

Mozilla has long championed royalty-free codecs. The Web was built on innovation without asking permission, and patent licensing regimes are incompatible with some of the Web’s most successful business models. That’s why we already support great codecs like VP8, VP9, and Opus in Firefox. But the Web doesn’t stand still and neither do we. As resolutions and framerates increase, the need for more advanced codecs with ever-better compression ratios will only grow. We started our own Daala project and formed NETVC to meet those needs, and we’ve seen explosive interest in the result. We believe that Daala, Cisco’s Thor, and Google’s VP10 combine to form an excellent basis for a truly world-class royalty-free codec.

In order to allow us to move quickly, the alliance is structured as a Joint Development Foundation project. These are an ideal complement to a larger, open standards organization like the IETF: One of the biggest challenges in developing open standards in a field like video codecs is figuring out how to review the patents. The Alliance provides a venue for us to share the legal legwork without having to worry about it being used against us down the road. That distributes the load, allows us to innovate faster and cheaper, and gives everyone more confidence that we are really producing a royalty-free codec.

The Alliance will operate under W3C patent rules and release code under an Apache 2.0 license. This means all Alliance participants are waiving royalties both for the codec implementation and for any patents on the codec itself. The initial members are just a start. We invite anyone with an interest in video, online or off, to join us.

For further information, please visit the Alliance for Open Media site or view the press release.

Categories: Mozilla-nl planet

Gregory Szorc: Serving Mercurial Clones from a CDN

Mozilla planet - ti, 01/09/2015 - 17:00

For the past few months, Mozilla has been serving Mercurial clones from Amazon S3. We upload snapshots (called bundles) of large and/or high-traffic repositories to S3. We have a custom Mercurial extension on the client and server that knows how to exchange the URLs for these snapshots and to transparently use them to bootstrap a clone. The end result is drastically reduced Mercurial server load and faster clone times. The benefits are seriously ridiculous when you operate version control at scale.

Amazon CloudFront is a CDN. You can easily configure it to be backed by an S3 bucket. So we did: Mozilla now has a CDN for hosting Mercurial data. Currently it hosts just the bundles used for cloning.

As of today, if you install the bundleclone Mercurial extension and hg clone a repository for which bundles are available, such as mozilla-central, the CDN URLs will be preferred by default. (Previously we preferred S3 URLs that hit servers in Oregon, USA.)

This should result in clone time reductions for Mozillians not close to Oregon, USA, as the CloudFront CDN has servers all across the globe and your Mercurial clone should be bootstrapped from the closest and hopefully therefore fastest server to you.

Unfortunately, you do need the aforementioned bundleclone extension installed for this to work. But this should only be temporary: I've proposed integrating this feature into the core of Mercurial, so that if a client talks to a server advertising pre-generated bundles, the clone offload just works. I already have tentative buy-in from one Mercurial maintainer, so hopefully I can land this feature in Mercurial 3.6, which will be released November 1. After that, I imagine some high-traffic Mercurial servers (such as Bitbucket) will be very keen to deploy this, so that CPU load on their servers is drastically reduced.


Air Mozilla: Martes mozilleros

Mozilla planet - ti, 01/09/2015 - 17:00

Martes mozilleros: a bi-weekly meeting to discuss the state of Mozilla, the community, and its projects.


Byron Jones: happy bmo push day!

Mozilla planet - ti, 01/09/2015 - 15:05

the following changes have been pushed to

  • [1197073] add support for 2fa using totp (eg. google authenticator)
  • [1199136] security bug group moves for new core-security-release group
  • [1199941] inactive sessions should expire faster (a week)

today’s push lands initial support for two-factor authentication on BMO. we currently support time-based one-time passwords (totp) with protection around just logging in. 2fa protection will be extended to protect other actions in the upcoming weeks.

visit the ‘two-factor authentication’ section under your user preferences to enable 2fa.

discuss these changes on

Filed under: bmo, mozilla

QMO: Firefox 41 Beta 7 Testday, September 4th

Mozilla planet - ti, 01/09/2015 - 14:43

I’m writing to let you know that this Friday, September 4th, we’ll be hosting the Firefox 41.0 Beta 7 Testday. The main focus of this event will be on testing Flash with 64-bit Firefox builds and plug-ins. Detailed participation instructions are available in this etherpad.

No previous testing experience is required so feel free to join us on the #qa IRC channel and our moderators will make sure you’ve got everything you need to get started.

Hope to see you all on Friday! Let’s make Firefox better together!


Daniel Stenberg: Blog refresh

Mozilla planet - ti, 01/09/2015 - 11:30

Dear reader,

If you’ve visited my blog before, you should’ve noticed a pretty significant change in its appearance here the other day.

When I kicked off my blog on this site back in August 2007 and moved my blogging from advogato to self-hosting, I installed WordPress, and I’ve been happy with it from a usability standpoint ever since. I crafted a look based on an existing theme and left it at that.

Over time, WordPress has had its hefty share of security problems, over and over again, and I’ve suffered from them myself a couple of times, ending up patching it manually more than once. At one point, when I decided to bite the bullet and upgrade to the latest version, the upgrade no longer worked, and I postponed it for later.

Time passed, I tried again without success and then more time passed.

I finally fixed the issues I had with upgrading. After a series of manual fiddling I managed to upgrade to the latest WordPress, and in doing so my old theme was flagged as broken/incompatible, so I threw it out and started fresh with a new theme. This new one is based on one of the simple default themes WordPress ships for free. I’ve mostly just made it slightly wider and tweaked the looks somewhat. I don’t need fancy. Hopefully I’ll be able to keep up with WordPress better this time.

Additionally, I added a captcha that now forces users to solve an easy math problem before submitting anything to the blog, to help me fight spam, and perhaps even more to solve a problem I have with spambots creating new users. Yesterday I removed over 3300 users who never had a single post accepted.

Enjoy.  Now back to our regular programming!


Mozilla's missteps put the web at risk -

News gathered via Google - ti, 01/09/2015 - 09:16

Mozilla's missteps put the web at risk
Mozilla's stated goal is to keep the internet open and accessible with its browser, but that is hard to achieve when you hold less than one percent of the mobile market. Worse still, the Mozilla Foundation - with Firefox still accounting for ...


Seif Lotfy: Counting flows (Semi-evaluation of CMS, CML and PMC)

Mozilla planet - ti, 01/09/2015 - 00:25

Assume we have a stream of events coming in one at a time, and we need to count the frequency of the different types of events in the stream.

In other words: We are receiving fruits one at a time in no given order, and at any given time we need to be able to answer how many of a specific fruit did we receive.

The most naive implementation is a dictionary mapping each event to its count. It is the most accurate approach and is well suited to streams with a limited number of event types.

Let us assume each unique item consists of a 15-byte key and has a dedicated uint32 (4-byte) counter assigned to it.

At 10 million unique items we end up using about 190 MB, which is a bit much, but on the plus side it's as accurate as it gets.
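A minimal sketch of that naive exact counter (written in Rust here for illustration; the post's own experiments were done in Go, and the event names are placeholders):

```rust
use std::collections::HashMap;

// Exact counting: one (event, counter) entry per distinct event type.
// Memory grows linearly with the number of distinct events, since every
// unique key must be stored alongside its dedicated counter.
fn count_events<'a, I>(events: I) -> HashMap<&'a str, u32>
where
    I: IntoIterator<Item = &'a str>,
{
    let mut counts: HashMap<&str, u32> = HashMap::new();
    for event in events {
        *counts.entry(event).or_insert(0) += 1;
    }
    counts
}
```

This is as accurate as it gets, but the per-key storage (plus hash-table overhead) is exactly the cost discussed above.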

But what if we don't have the 190 MB? Or what if we have to keep track of several streams?

Maybe save to a DB? Then answering a query means running something along the lines of:

SELECT COUNT(*) FROM events WHERE event = ?

The more items we add, the more resource-intensive the query becomes.

Thankfully, solutions come in the form of probabilistic data structures (sketches).

I won't get into details, but to solve this problem I semi-evaluated the following data structures:

  • Count-Min sketch (CMS) [2]
  • Count-Min-Log sketch (CML) [1][2]
  • Probabilistic Multiplicity Counting sketch (PMC) [1]
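To make the first of these concrete, here is a toy Count-Min sketch (a Rust sketch of the general idea, with per-row seeding of the standard hasher standing in for independent hash functions; it is not the benchmarked implementation): each insert increments one counter per row, and the estimate takes the minimum across rows, so it can overestimate but never underestimate.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy Count-Min sketch: `depth` rows of `width` u32 counters.
struct CountMin {
    width: usize,
    rows: Vec<Vec<u32>>,
}

impl CountMin {
    fn new(width: usize, depth: usize) -> Self {
        CountMin { width, rows: vec![vec![0; width]; depth] }
    }

    // One hash function per row, simulated by seeding with the row index.
    fn slot(&self, item: &str, row: usize) -> usize {
        let mut h = DefaultHasher::new();
        row.hash(&mut h);
        item.hash(&mut h);
        (h.finish() as usize) % self.width
    }

    fn add(&mut self, item: &str) {
        for row in 0..self.rows.len() {
            let s = self.slot(item, row);
            self.rows[row][s] += 1;
        }
    }

    // Minimum across rows: collisions only ever inflate counters,
    // so the true count is a lower bound on the estimate.
    fn estimate(&self, item: &str) -> u32 {
        (0..self.rows.len())
            .map(|row| self.rows[row][self.slot(item, row)])
            .min()
            .unwrap_or(0)
    }
}
```

Width and depth trade memory for accuracy, which is what the ɛ and δ parameters in the results below control.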

Test details:

For each sketch I linearly added new flows with linearly increasing event counts: the first flow got 1 event inserted, the second flow 2 events, all the way up to the 10,000th flow with 10,000 events inserted.

flow 1: 1 event
flow 2: 2 events
...
flow 10000: 10000 events

All three data structures were configured to have a size of 217KB (exactly 1739712 bits).

A couple dozen runs yielded the following results (based on my unoptimized code, especially for PMC and CML):

CMS: 07s for 50005000 insertions (fill rate: 31%)
CML: 42s for 50005000 insertions (fill rate: 09%)
PMC: 18s for 50005000 insertions (fill rate: 54%)

CMS with ɛ: 0.0001, δ: 0.99 (code)

Observe the biased estimation of CMS: it will never underestimate. In our case, looking at the top border of the diagram, we can see that there was a lot of overestimation.

CML with ɛ: 0.000025, δ: 0.99 (16-bit counters) (code)

Just like CMS, CML is biased and will never underestimate. However, unlike CMS, the top border of the diagram is less noisy. Yet accuracy seems to decrease for the high-count flows.

PMC with (256x32) virtual matrices (code)

Unlike the previous two sketches, PMC is unbiased, so underestimations exist. Also, the estimated flow count grows with the actual flow count (linearly bigger errors). The drawback is that PMC fills up very quickly, which means that at some point it will just overestimate everything. It is recommended to know the maximum number of distinct flows beforehand.

Bringing it all together

So what do you think? If you are familiar with these algorithms or can propose a different benchmarking scenario, please comment; I might be able to work on it on a weekend. The code was all written in Go. Feel free to suggest optimizations or fix any bugs you find (links above the respective plots).


QMO: An open letter about Mozilla QA

Mozilla planet - mo, 31/08/2015 - 23:35

Dear people of the web,

As some of you may already be aware, Mozilla has experienced a lot of change over the years. Most teams and projects within Mozilla have felt this change in some way, either directly or indirectly. The QA Team is no exception.

As a microcosm of the Mozilla Project, with people involved in many disparate projects, QA has changed course many times. To many of you, these changes may have passed by unnoticed. Perhaps you noticed something was different about QA but were not able to understand how or why things had changed. Perhaps it was a feeling that some of us seemed more distant, or that it just felt different.

This may come as a surprise to some, but there is no longer a single, unified QA team at Mozilla. After going through a few re-organizations, we are spread across the organization, embedded with — and reporting to — various product teams.

Those teams have benefited by having a dedicated QA person on staff full time. However, with so few of us to go around, many teams find themselves without any QA. In this state, we’ve lost the distinguished central QA organization that once was and in doing so we’ve lost a central QA voice.

As a result of these changes and a sense of perpetual reorganization we have reached a tipping point. We’ve lost some very talented and passionate people. Change within itself isn’t a bad thing. The loss of cohesion is. It is time to break this pattern, regain our cohesion and regain our focus on the community.

The core group of QA community members, paid and volunteer, will soon be getting together to formulate a mission statement. We’ll do this with a series of one-on-one conversations between core individuals who are interested in architecting a new QA community.  This will serve as the guiding light of our journey toward a more optimistic future together.

In recognition of those who might feel excluded from this process, we want to assure you that there will be opportunity to contribute very early on. Conducting these one-on-ones is just the first step in a very long journey. We plan to bring everyone along who wants to be here, but this process requires great care and it will take time. If you’d like to help us build the future, please get in touch with us.

Please read our wiki page to find out more about what we’re doing and where we’re going.



Anthony Hughes and Matt Brandt


Firefox: Mozilla concept shows context-based profiles -

News gathered via Google - mo, 31/08/2015 - 22:44

Firefox: Mozilla concept shows context-based profiles
Context-based profiles, or containers as Mozilla calls them in the current draft, are meant to give users the ability to, among other things, be signed in to a website with different accounts at the same time, or to visit websites privately ...

Google News

Matt Thompson: What we’re working on

Mozilla planet - mo, 31/08/2015 - 21:42
What we got done: What we’re doing next: Next steps on the Mozilla Learning plan

Stuff we’re working on for the Sep 8 community call:

  • Leadership: focusing on a) audience tiers / ladder of engagement, and b) an analysis of our current offerings.
  • Advocacy: fleshing out our potential core activities / tactics. Plus mapping user journeys against them.
  • Impact: synthesizing the feedback we got on impact statements and milestones from the last community call.

Oh Hey — we’re launching the new Thimble!


Manish Goregaokar: Designing a GC in Rust

Mozilla planet - mo, 31/08/2015 - 20:53

For a while I’ve been working on a garbage collector for Rust with Michael Layzell. I thought this would be a good time to talk of our design and progress so far.


“Wait”, you ask, “why does Rust need a garbage collector”? Rust is supposed to work without a GC, that’s one of its main selling points!

True. Rust does work pretty well without a GC. It’s managed to do without one so far, and we still have all sorts of well-written crates out there (none of which use a GC).

But Rust is not just about low-cost memory safety. It’s also about choosing your costs and guarantees. Box<T> and stack allocation are not always sufficient, sometimes one needs to reach for something like Rc<T> (reference counting). But even Rc is not perfect; it can’t handle cycles between pointers. There are solutions to that issue like using Weak<T>, but that only works in limited cases (when you know what the points-to graph looks like at compile time), and isn’t very ergonomic.
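A tiny illustration of why cycles defeat Rc (a contrived two-node cycle; RefCell only exists here so the link can be wired up after creation):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two nodes that hold strong references to each other.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

// Build a two-node cycle and report both strong counts while it is live.
fn cycle_counts() -> (usize, usize) {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b.clone());
    // Each node is kept alive by a local handle *and* by its peer, so
    // when the locals drop, both counts fall to 1 and never reach 0:
    // the nodes leak.
    (Rc::strong_count(&a), Rc::strong_count(&b))
}
```

Downgrading one direction of the link to Weak<Node> breaks the cycle, but that only works when you know the shape of the points-to graph up front.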

Cases where one needs to maintain a complicated, dynamic graph are where a GC becomes useful. Similarly, if one is writing an interpreter for a GCd language, having a GC in Rust would simplify things a lot.

Not to say that one should pervasively use a GC in Rust. Similar to Rc<T>, it’s best to use regular ownership-based memory management as much as possible, and sprinkle Rc/Gc in places where your code needs it.

Previous designs

This isn’t the first GC in Rust. Automatic memory management has existed before in various forms, but all were limited.

Besides the ones listed below, Nick Fitzgerald’s cycle collector based on this paper exists and is something that you should look into if you’re interested. There’s also an RFC by Peter Liniker which sketches out a design for an immutable GC.

Core Rust GC(s)

Rust itself had a garbage collector until a bit more than a year ago. These “managed pointers” (@T) were part of the language. They were removed later with a plan to make GC a library feature.

I believe these were basically reference counted (cycle collected?) pointers with some language integration, but I’m not sure.

Nowadays, the only forms of automatic memory management in Rust are Rc and Arc, which are non-atomic and atomic reference-counted pointers respectively. In other words, they keep track of the number of shared references via a reference count (incremented when the pointer is cloned, decremented when destructors run). If the reference count reaches zero, the contents are cleaned up.

This is a pretty useful abstraction, however, as mentioned above, it doesn’t let you create cycles without leaking them.
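That bookkeeping is directly observable through Rc::strong_count:

```rust
use std::rc::Rc;

// Cloning bumps the shared count; dropping a handle decrements it.
// The contents are freed only when the count reaches zero.
fn observe_counts() -> (usize, usize, usize) {
    let x = Rc::new(String::from("shared"));
    let before = Rc::strong_count(&x); // just `x`
    let y = x.clone();
    let during = Rc::strong_count(&x); // `x` and `y`
    drop(y);
    let after = Rc::strong_count(&x); // back to just `x`
    (before, during, after)
}
```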


You can read more about Servo’s Spidermonkey bindings in this blog post (somewhat outdated, but still relevant)

In Servo we use bindings to the Spidermonkey Javascript engine. Since Javascript is a garbage collected language, the Rust representations of Javascript objects are also garbage collected.

Of course, this sort of GC isn’t really useful for generic use since it comes bundled with a JS runtime. However, the Rust side of the GC is of a design that could be used in an independent library.

The Rust side of the Spidermonkey GC is done through a bunch of smart pointers, and a trait called JSTraceable. JSTraceable is a trait which can “trace” recursively down some data, finding and marking all GC-managed objects inside it. This is autoderived using Rust’s plugin infrastructure, so a simple #[jstraceable] annotation will generate trace hooks for the struct it is on.

Now, we have various smart pointers. The first is JS<T>. This is opaque, but can be held by other GC-managed structs. To use this on the stack, this must be explicitly rooted, via .root(). This produces a Root<T>, which can be dereferenced to get the inner object. When the Root is created, the contained object is listed in a collection of “roots” in a global. A root indicates that the value is being used on the stack somewhere, and the GC starts tracing usage from these roots. When the Root<T> is destroyed, the root is removed.

The problem with this is that JS<T> doesn’t work on the stack. There is no way for the GC to know that we are holding on to JS<T> on the stack. So, if I copy a JS<T> to the stack, remove all references to it from objects in the GC heap, and trigger a collection, the JS<T> will still be around on the stack after collection since the GC can’t trace to it. If I attempt to root it, I may get a panic or a segfault depending on the implementation.

To protect against this, we have a bunch of lints. The relevant one here prevents JS<T> from being carried around on the stack; but like most lints, it’s not perfect.

To summarize: Spidermonkey gives us a good GC. However using it for a generic Rust program is ill advised. Additionally, Servo’s wrappers around the GC are cheap, but need lints for safety. While it would probably be possible to write safer wrappers for general usage, it’s pretty impractical to carry around a JS runtime when you don’t need one.

However, Spidermonkey’s GC did inspire me to think more into the matter.

Brainstorming a design

For quite a while I’d had various ideas about GCs. Most were simplifications of Servo’s wrappers (there’s some complexity brought in there by Spidermonkey that’s not necessary for a general GC). Most were tracing/rooting with mark-and-sweep collection. All of them used lints. Being rather busy, I didn’t really work on it past that, but planned to work on it if I could find someone to work with.

One day, Michael pinged me on IRC and asked me about GCs. Lots of people knew that I was interested in writing a GC for Rust, and one of them directed him to me when he expressed a similar interest.

So we started discussing GCs. We settled on a tracing mark-and-sweep GC. In other words, the GC runs regular “sweeps” where it first “traces” the usage of all objects and marks them and their children as used, and then sweeps up all unused objects.
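As a toy model of such a sweep (an index-based arena standing in for the GC heap; rust-gc's real implementation differs, and the root set is taken as given here):

```rust
// Toy mark-and-sweep arena: nodes reference each other by index,
// and a freed slot becomes `None`.
struct Arena {
    nodes: Vec<Option<Vec<usize>>>, // outgoing edges of each live node
}

impl Arena {
    fn collect(&mut self, roots: &[usize]) {
        // Mark phase: walk everything reachable from the roots.
        let mut marked = vec![false; self.nodes.len()];
        let mut stack: Vec<usize> = roots.to_vec();
        while let Some(i) = stack.pop() {
            if marked[i] {
                continue;
            }
            marked[i] = true;
            if let Some(edges) = &self.nodes[i] {
                stack.extend(edges.iter().copied());
            }
        }
        // Sweep phase: free every slot the mark phase never reached.
        for (i, slot) in self.nodes.iter_mut().enumerate() {
            if !marked[i] {
                *slot = None;
            }
        }
    }
}
```

The open question, addressed next, is where that list of roots comes from.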

This model on its own has a flaw. It doesn’t know about GC pointers held on the stack as local variables (“stack roots”). There are multiple methods for solving this. We’ve already seen one above in the Spidermonkey design – maintain two types of pointers (one for the stack, one for the heap), and try very hard using static analysis to ensure that they don’t cross over.

A common model (used by GCs like Boehm, called “conservative GCs”) is to do something called “stack scanning”. In such a system, the GC goes down the stack looking for things which may perhaps be GC pointers. Generally the GC allocates objects in known regions of the memory, so a GC pointer is any value on the stack which belongs to one of these regions.

Of course, this makes garbage collection rather inefficient, and will miss cases like Box<Gc<T>> where the GCd pointer is accessible, but through a non-GC pointer.

We decided rather early on that we didn’t want a GC based on lints or stack scanning. Both are rather suboptimal solutions in my opinion, and very hard to make sound [1]. We were also hoping that Rust’s type system and ownership semantics could help us design a good, safe API.

So, we needed a way to keep track of roots, and we needed a way to trace objects.


The latter part was easy. We wrote a compiler plugin (well, we stole Servo’s tracing plugin which I’d written earlier) which autoderives an implementation of the Trace trait on any given struct or enum, using the same internal infrastructure that #[derive(PartialEq)] and the rest use. So, with just the following code, it’s easy to make a struct or enum gc-friendly:

#[derive(Trace)]
struct Foo {
    x: u8,
    y: Bar,
}

#[derive(Trace)]
enum Bar {
    Baz(u8),
    Quux,
}

For a foo of type Foo, foo.trace() will expand to calls of foo.x.trace() and foo.y.trace(). bar.trace() will check which variant it is and call trace() on the inner u8 if it’s a Baz. For most structs this turns out to be a no-op and is often optimized away by inlining, but if a struct contains a Gc<T>, the special implementation of Trace for Gc<T> will “mark” the traceability of the Gc<T>. Types without Trace implemented cannot be used in types implementing Trace or in a Gc, which is enforced with a T: Trace bound on Gc<T>.

So, we have a way of walking the fields of a given object and finding inner Gc<T>s. Splendid. This lets us write the mark&sweep phase easily: Take the list of known reachable Gc<T>s, walk their contents until you find more Gc<T>s (marking all you find), and clean up any which aren’t reachable.


Of course, now we have to solve the problem of keeping track of the known reachable Gc<T>s, i.e. the roots. This is a hard problem to solve without language support, and I hope that eventually we might be able to get the language hooks necessary to solve it. LLVM has support for tracking GC things on the stack, and some day we may be able to leverage that in Rust.

As noted above, Spidermonkey’s solution was to have non-rooted (non-dereferencable) heap pointers, which can be explicitly converted to rooted pointers and then read.

We went the other way. All Gc<T> pointers, when created, are considered “rooted”. The instance of Gc<T> has a “rooted” bit set to true, and the underlying shared box (GcBox, though this is not a public interface) has its “root count” set to one.

When this Gc<T> is cloned, an identical Gc<T> (with rooted bit set to true) is returned, and the underlying root count is incremented. Cloning a Gc does not perform a deep copy.

let a = Gc::new(20);
// a.root = true, (*a.ptr).roots = 1, (*a.ptr).data = 20
// ptr points to the underlying box, which contains the data as well as
// GC metadata like the root count. `Gc::new()` will allocate this box

let b = a.clone();
// b.root = true, (*a.ptr).roots++, b.ptr = a.ptr

This is rather similar to how Rc works, however there is no root field, and the roots counter is called a “reference counter”.

For regular local sharing, it is recommended to just use a borrowed reference to the inner variable (borrowing works fine with rust-gc!) since there is no cost to creating this reference.

When a GC thing is put inside another GC thing, the first thing no longer can remain a root. This is handled by “unrooting” the first GC thing:

struct Foo {
    bar: u32,
    baz: Gc<u32>,
}

let a = Gc::new(20);
// why anyone would want to GC an integer I'll never know
// but I'll stick with this example since it's simple

let b = Gc::new(Foo { bar: 1, baz: a });
// a.root = false, (*a.ptr).roots--
// b initialized similar to previous example

// `a` was moved into `b`, so now `a` cannot be accessed directly here
// other than through `b`, and `a` is no longer a root.
// To avoid moving a, passing `a.clone()` to `b` will work

Of course, we need a way to traverse the object passed to the Gc<T>, in this case Foo, and look for any contained Gc<T>s to unroot. Sound familiar? This needs the same mechanism that trace() needed! We add struct-walking root() and unroot() methods to the Trace trait, which are auto-derived exactly the same way, and continue. (We don’t need root() right now, but we will need it later on.)

Now, during collection, we can just traverse the list of GcBoxs and use the ones with a nonzero root count as roots for our mark traversal.
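In miniature, that root discovery is just a scan over the boxes' metadata (field names here are illustrative, not rust-gc's actual internals):

```rust
// Each GC box keeps a root count; any box with a nonzero count
// seeds the mark traversal during collection.
struct BoxMeta {
    roots: u32,
}

// Collect the indices of all boxes currently acting as roots.
fn find_roots(boxes: &[BoxMeta]) -> Vec<usize> {
    boxes
        .iter()
        .enumerate()
        .filter(|(_, b)| b.roots > 0)
        .map(|(i, _)| i)
        .collect()
}
```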

So far, so good. We have a pretty sound design for a GC that works … for immutable data.


Like Rc<T>, Gc<T> is by default immutable. Rust abhors aliasable mutability, even in single threaded contexts, and both these smart pointers allow aliasing.

Mutation poses a problem for our GC, beyond the regular problems of aliasable mutability: It’s possible to move rooted things into heap objects and vice versa:

let x = Gc::new(20);
let y = Gc::new(None);
*y = Some(x); // uh oh, x is still considered rooted!

// and the reverse!
let y = Gc::new(Some(Gc::new(20)));
let x = y.take(); // x was never rooted!
// `take()` moves the `Some(Gc<u32>)` out of `y`, replaces it with `None`

Since Gc<T> doesn’t implement DerefMut, none of this is possible — one cannot mutate the inner data. This is one of the places where Rust’s ownership/mutability system works out awesomely in our favor.

Of course, an immutable GC isn’t very useful. We can’t even create cycles in an immutable GC, so why would anyone need this in the first place [2]?

So of course, we needed to make it somehow mutable. People using Rc<T> solve this problem by using RefCell<T>, which maintains something similar to the borrow semantics at runtime and is internally mutable. RefCell<T> itself can’t be used by us since it doesn’t guard against the problem illustrated above (and hence won’t implement Trace, but a similar cell type would work).

So we created GcCell<T>. This behaves just like RefCell<T>, except that it will root() before beginning a mutable borrow, and unroot() before ending it (well, only if it itself is not rooted, which is tracked by an internal field similar to Gc<T>). Now, everything is safe:

#[derive(Trace)]
struct Foo {
    a: u8,
    b: GcCell<Gc<u8>>,
}

let x = Gc::new(20);
let y = Gc::new(Foo { a: 10, b: GcCell::new(Gc::new(30)) });

{
    *y.b.borrow_mut() = x;
    // the `Gc(30)` from `y.b` was rooted by this call
    // but since we don't actually use it here,
    // the destructor gets rid of it.
    // We could use swap() to retain access to it.
    // ...
    // x unrooted
}

// and the reverse case works too:
let y = Gc::new(GcCell::new(Some(Gc::new(20))));
let x = y.borrow_mut().take();
// the inner `Some(Gc(20))` gets rooted by `borrow_mut()`
// before `x` can access it

So now, mutation works too! We have a working garbage collector!

Open problems

Destructors

I believe this can be solved without lints, but it may require some upcoming features of Rust to be implemented first (like specialization).

In essence, destructors implemented on a value inside Gc<T> can be unsafe. This will only happen if they try to access values within a Gc<T> — if they do, they may come across a box that has already been collected, or they may lengthen the lifetime of a box scheduled to be collected.

The basic solution to this is to use “finalizers” instead of destructors. Finalizers, like in Java, are not guaranteed to run. However, we may need further drop hooks or trait specialization to make an airtight interface for this. I don’t have a concrete design for this yet, though.


Concurrency

Our model mostly just works in a concurrent situation (with thread safety tweaks, of course); in fact it’s possible to make it so that the concurrent GC will not “stop the world” unless someone tries to do a write to a GcCell. We have an experimental concurrent GC in this pull request. We still need to figure out how to make interop between both GCs safe, though we may just end up making them such that an object using one GC cannot be fed to an object using the other.


Performance

So far we haven’t really focused on performance, concentrating instead on ensuring safety. Our collection-triggering algorithm, for example, was horribly inefficient, though we had planned to improve it; the wonderful Huon fixed it.

Similarly, we haven’t yet optimized storage. We have some ideas which we may work on later. (If you want to help, contributions welcome!)

Cross-crate deriving

Currently, an object deriving Trace should have Traceable children. This isn’t always possible when members from another crate (which does not depend on rust-gc) are involved. At the moment, we allow an #[unsafe_ignore_trace] annotation on fields of such types, which excludes them from being traced (if that crate doesn’t transitively depend on rust-gc, its members cannot contain GC things anyway unless generics are involved). It should be possible to detect whether or not this is safe, and/or autoderive Trace using the opt-in builtin traits framework (needs specialization to work), but at the moment we don’t do anything other than expose that annotation.

Stdlib support for a global Trace trait that everyone derives would be awesome.


Designing a GC was a wonderful experience! I didn’t get to write much code (I was busy and Michael was able to implement most of it overnight because he’s totally awesome), but the long design discussions followed by trying to figure out holes in the GC design in every idle moment of the day were quite enjoyable. GCs are very hard to get right, but it’s very satisfying when you come up with a design that works! I’m also quite happy at how well Rust helped in making a safe interface.

I encourage everyone to try it out and/or find holes in our design. Contributions of all kind welcome, we’d especially love performance improvements and testcases.

Discuss: HN, Reddit

  1. I’m very skeptical that it’s possible to make either of these completely sound without writing lints which effectively rewrite a large chunk of the compiler

  2. There is a case to be made for an immutable GC which allows some form of deferred initialization of GC fields, however.

Categorieën: Mozilla-nl planet

David Humphrey: Introducing a New Thimble and Bramble

Mozilla planet - mo, 31/08/2015 - 20:37

This week we're shipping something really cool with Mozilla, and I wanted to pause and tell you about what it is, and how it works.

The tldr; is that we took the Mozilla Foundation's existing web code editor, Thimble, and rewrote it to use Bramble, our forked version of the Brackets editor, which runs in modern web browsers. You can try it now at

If you're the type who prefers animated pictures to words, I made you a bunch over on the wiki, showing what a few of the features look like in action. You can also check out Luke's great intro video.

If you're the type who likes words, the rest of this is for you.


I started working on this project two years ago. While at MozFest 2013 I wrote about an idea I had for a new concept app that merged Thimble and Brackets; at the time I called it Nimble.

I was interested in merging these two apps for a number of reasons. First, I wanted to eliminate the "ceiling" users had when using Thimble, wherein they would graduate beyond its abilities, and be forced to use other tools. In my view, Thimble should be able to grow and expand along with a learner's abilities, and a teacher's needs.

Second, people were asking for lots of new features in Thimble, and I knew from experience that the best code is code you don't have to write. I wanted to leverage the hard work of an existing community that was already focused on building a great web coding platform. Writing a coding environment is a huge challenge, and our team wasn't equipped to take it on by ourselves. Thankfully the Brackets project had already solved this.

On Brackets

Brackets was an easy codebase to get started on, and the community was encouraging and willing to help us with patches, reviews, and questions (I'm especially thankful for @randyedmunds and @busykai).

Brackets is written in an AMD module system, and uses requirejs, react, CodeMirror, LESS, jQuery, Bootstrap, lodash, acorn, tern, etc. One of the things I've loved most about working with the Brackets source is that it uses so much of the best of the open web. Its ~1.3 million lines of code offer APIs for things like:

  • code hinting, static analysis, and linting
  • language parsing and tokenizing (html, css, js, xml, less)
  • file system operations
  • editors
  • live DOM diff'ing and in-browser preview
  • swappable servers
  • layout, widgets, and dialogs
  • localization, theming, and preferences
  • extension loading at runtime, with hundreds already written

In short, Brackets isn't an editor so much as a rich platform for coding and designing front-end web pages and apps. Brackets' killer feature is its ability to render a live preview of what's in your editor, including dynamic updates as you type, often without needing to save. The preview even has an awareness of changes to linked files (e.g., external stylesheets and scripts).

Another thing I loved was that Brackets wasn't trying to solve code editing in general: they had a very clear mandate that favoured web development, and front-end web development in particular. HTML, CSS, and JavaScript get elevated status in Brackets, and don't have to fight with every other language for features.

All of these philosophies and features melded perfectly with our goal of making a great learning and teaching tool for web programming.

But what about X?

Obviously there are a ton of code editing tools available. If we start with desktop editors, there are a lot to choose from; but they all suffer from the same problem: you have to download 10s of megs of installer, and then you have to install them, along with a web server, in order to preview your work. Consider what's involved in installing each of these (on OS X):

Thimble, on the other hand, is ~1M (877K for Bramble, the rest for the front-end app). We worked extremely hard to get Brackets (38.5M if you install it) down to something that fits in the size of an average web page. If we changed how Brackets loads more significantly, we could get it smaller yet, but we've chosen to keep existing extensions working. The best part is that there is no install: the level of commitment for a user is the URL.

In addition to desktop editors, there are plenty of popular online options, too:

The list goes on. They are all great, and I use and recommend them all. Each of these tools has a particular focus, and none of them do exactly what the new Thimble does; specifically, none of them tries to deal with trees of files and folders. We don't need to do what these other tools do, because they already do it well. Instead, we focused on making it possible for users to create a rich and realistic environment for working with arbitrary web site/app structures without needing to install and run a web server.

From localhost to nohost

I've always been inspired by @jswalden's httpd.js. It was written back before there was node.js, back in a time when it wasn't yet common knowledge that you could do anything in JS. The very first time I saw it I knew that I wanted to find some excuse to make a web server in the browser. With nohost, our in-browser web server, we've done it.

In order to run in a browser, Bramble has to be more than just a code editor; it also has to include a bunch of stuff that would normally be provided by the Brackets Shell and node.js. This means providing a:

  • web server
  • web browser
  • filesystem

and glue to connect those three. Brackets uses Chrome's remote debugging protocol and node.js to talk between the editor, browser, and server. This works well, but ties it directly to Chrome.

At first I wasn't sure how we'd deal with this. But then an experimental implementation of the Brackets LiveDevelopment code landed, which switched from using Chrome and the remote dev tools protocol to using any browser and a WebSocket. Then, in the middle of the docs, we found an offhand comment that someone could probably rewrite it to use an iframe and postMessage...a fantastic idea! So we did.

Making it possible for an arbitrary web site to work in a browser-based environment is a little like Firefox's Save Page... feature. You can't just deal with the HTML alone--you also have to get all the linked assets.

Consider an example web page:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Example Page</title>
  <link rel="stylesheet" href="styles/style.css">
</head>
<body>
  <img src="images/cat.png">
  <script src="script.js"></script>
  <script>
    // Call function f in script.js
    f();
  </script>
</body>
</html>

In this basic web page we have three external resources referenced by URL. The browser needs to be able to request styles/style.css, images/cat.png, and script.js in order to fully render this page. And we're not done yet.

The stylesheet might also reference other stylesheets using @import, or might use other images (e.g., background-image: url(...)).

It gets worse. The script might need to XHR a JSON file from the server in order to do whatever f() requires.

Bramble tries hard to deal with these situations through a combination of static and dynamic rewriting of the URLs. Eventually, if/when all browsers ship it, we could do a lot of this with ServiceWorkers. Until then, we made do with what we already have cross browser.

First, Bramble's nohost server recursively rewrites the HTML, and its linked resources, in order to find relative filesystem paths (images/cat.png) and replace them with Blobs and URL objects that point to cached memory resources read out of the browser filesystem.

Parsing HTML with regex is a non-starter. Luckily browsers have a full parser built in: DOMParser. Once we have an in-memory DOM instead of an HTML text string, we can accurately use querySelectorAll to find things that might contain URLs (img, link, video, iframe, etc., taking care not to loop on circular references) and swap those for generated Blob URLs from the filesystem. When we're done, we can extract the rewritten HTML text from our live in-memory DOM via documentElement.outerHTML, obtaining something like this:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Example Page</title>
  <link rel="stylesheet" href="blob:https%3A//">
</head>
<body>
  <img src="blob:https%3A//">
  <script src="blob:https%3A//"></script>
  <script>
    // Call function f in script.js
    f();
  </script>
</body>
</html>

All external resources now use URLs to cached memory resources. This HTML can then be itself turned into a Blob and URL object, and used as the src for our iframe browser (this works everywhere except IE, where you have to document.write the HTML, but can use Blob URLs for everything else).
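The attribute-swapping step described above can be sketched in plain JavaScript. This is my own simplified sketch, not Bramble's actual code: `rewriteDoc` and `pathToBlobURL` are hypothetical names, and the real nohost server does much more (recursing into linked stylesheets, caching, etc.). In the browser, `doc` would come from `new DOMParser().parseFromString(htmlText, "text/html")` and the result would be read back via `doc.documentElement.outerHTML`.

```javascript
// Sketch of nohost-style attribute rewriting (hypothetical helper names).
// pathToBlobURL stands in for the filesystem-backed Blob URL cache.
function rewriteDoc(doc, pathToBlobURL) {
  // Element/attribute pairs that may hold relative filesystem paths.
  var targets = [
    ["img", "src"],
    ["script", "src"],
    ["link", "href"],
    ["video", "src"],
    ["iframe", "src"]
  ];
  targets.forEach(function (pair) {
    var tag = pair[0], attr = pair[1];
    var elements = doc.querySelectorAll(tag + "[" + attr + "]");
    Array.prototype.forEach.call(elements, function (el) {
      var blobURL = pathToBlobURL(el.getAttribute(attr));
      if (blobURL) {
        el.setAttribute(attr, blobURL);
      }
    });
  });
}
```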

For CSS we do use regex, looking for url(...) and other places where URLs can lurk. Thankfully there aren't a lot, and it's just a matter of reading the necessary resources from disk, caching to a Blob URL, and replacing the filesystem paths for URLs, before generating a CSS Blob URL that can be used in the HTML.
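A minimal sketch of that CSS pass, assuming a `pathToBlobURL` lookup (a hypothetical name standing in for the filesystem cache); the regex is deliberately simplified compared to whatever the real implementation uses:

```javascript
// Rewrite url(...) references in CSS text to cached Blob URLs.
// Simplified sketch: leaves absolute, data:, and blob: URLs untouched.
function rewriteCSS(cssText, pathToBlobURL) {
  return cssText.replace(/url\(\s*(['"]?)([^'")]+)\1\s*\)/g,
    function (match, quote, path) {
      if (/^(https?:|data:|blob:)/.test(path)) {
        return match; // not a filesystem path, pass through
      }
      return "url(" + quote + pathToBlobURL(path) + quote + ")";
    });
}
```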

Despite what everyone tells you about the DOM being slow, the process is really fast. And because we own the filesystem layer, whenever the editor does something like a writeFile(), we can pre-generate a URL for the resource, and maintain a cache of such URLs keyed on filesystem paths for when we need to get them again in the future during a rewrite step. Using this cache we are able to live refresh the browser quite often without causing any noticeable slowdown on the main thread.
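The path-keyed cache idea can be sketched like this, with the URL factory injected so the logic is visible without browser APIs. All names here are my own assumptions, not Bramble's actual code; in a browser, `createURL` and `revokeURL` would be `URL.createObjectURL` and `URL.revokeObjectURL`.

```javascript
// Sketch of a Blob-URL cache keyed on filesystem paths (hypothetical API).
function BlobURLCache(createURL, revokeURL) {
  this.createURL = createURL;
  this.revokeURL = revokeURL;
  this.urls = {}; // path -> generated URL
}

// Called from writeFile(): replace any stale URL for this path.
BlobURLCache.prototype.set = function (path, contents) {
  if (this.urls[path]) {
    this.revokeURL(this.urls[path]);
  }
  this.urls[path] = this.createURL(contents);
  return this.urls[path];
};

// Called during a rewrite step to reuse the pre-generated URL.
BlobURLCache.prototype.get = function (path) {
  return this.urls[path] || null;
};
```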

As an aside, it would be so nice if we could move the whole thing to a worker and be able to send an HTML string, and get back a URL. Workers can already access IndexedDB, so we could read from the filesystem there, too. This would mean having access to DOMParser (even if we can't touch the main DOM from a worker, being able to parse HTML is still incredibly useful for rewriting, diff'ing, etc).

Finally, we do dynamic substitutions of relative paths for generated Blob URLs at runtime by hijacking XMLHttpRequest and using our postMessage link from the iframe to the editor in order to return response data for a given filename.
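The first decision such a hijacked XMLHttpRequest has to make is whether a URL belongs to the project at all. A sketch of that check (my own hypothetical helper; the real code then proxies filesystem reads over the postMessage channel):

```javascript
// Decide whether an XHR URL should be answered from the in-browser
// filesystem instead of the network (simplified, assumed logic).
function isProjectPath(url) {
  // Absolute http(s), protocol-relative, data:, and blob: URLs pass
  // through untouched; anything else is treated as a path relative
  // to the project root.
  return !/^(?:https?:|data:|blob:|\/\/)/.test(url);
}
```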

And it all works! Sure, there are lots of things we won't ever be able to cope with, from synchronous XHR to various types of DOM manipulation by scripts that reference URLs as strings. But for the general case, it works remarkably well. Try downloading and dragging a zipped web site template into the editor. Bramble doesn't claim to replace a full, local development environment for every use case; however, it makes one unnecessary in most common cases. It's amazing what the modern web can do via storage, file, drag-and-drop, parser, and worker APIs.

Origin Sandboxing

I talk about Thimble and Bramble as different things, and they are, especially at runtime. Bramble is an embeddable widget with an iframe API, and Thimble hosts it and provides some UI for common operations.

I've put a simple demo of the Bramble API online for people to try (source is here). Bramble uses, but doesn't own, its filesystem; nor does it have any notion of where the files came from or where they are going. It also doesn't have opinions about how the filesystem should be laid out.

This is all done intentionally so that we can isolate the editor and preview from the hosting app, running each on a different domain. We want users to be able to write arbitrary code, execute and store it; but we don't want to mix code for the hosting app and the editor/preview. The hosting app needs to decide on a filesystem layout, get and write the files, and then "boot" Bramble.

I've written previously about how we use MessageChannel to remotely host an IndexedDB backed filesystem in a remote window running on another domain: Thimble owns the filesystem and database and responds to proxied requests to do things via postMessage.

In the case of Thimble, we store data in a Heroku app using postgres on the server. Thimble listens for filesystem events, and then queues and executes file update requests over the network to sync the data upstream. Published projects are written to S3, and we then serve them on a secure domain. Because users can upload files to their filesystem in the editor, it makes it easier to transition to an https:// only web.
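The queue-and-sync idea can be sketched as below. This is a hypothetical shape of my own, not Thimble's real sync code (which also has to handle deletes, retries, and ordering); the point is that later writes to the same path coalesce before going upstream.

```javascript
// Sketch of an upstream sync queue that coalesces writes per path.
function SyncQueue(send) {
  this.send = send;  // function(path, contents) performing the network call
  this.pending = {}; // latest contents per path
}

// Called from a filesystem-event listener on every write.
SyncQueue.prototype.enqueue = function (path, contents) {
  this.pending[path] = contents; // later writes to the same path win
};

// Flush everything queued so far to the server.
SyncQueue.prototype.flush = function () {
  for (var path in this.pending) {
    this.send(path, this.pending[path]);
  }
  this.pending = {};
};
```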

When the user starts Thimble, we request a project as a gzipped tarball from the publishing server, then unpack it in a Worker and recreate the filesystem locally. Bramble then "mounts" this local folder and begins working with the local files and folders, with no knowledge of the servers (all data is autosaved, and survives refreshes).


Now that we've got the major pieces in place, I'm interested to see what people will do with both Thimble and Bramble. Because we're in a full browser vs. an "almost-browser" shell, we have access to all the latest toys (for example, WebRTC and the camera). Down the road we could use this for some amazing pair programming setups, so learners and mentors could work with each other directly over the web on the same project.

We can also do interesting things with different storage providers. It would be just as easy to have Bramble talk to Github, Dropbox, or some other cloud storage provider. We intentionally kept Thimble and Bramble separate in order to allow different directions in the future.

Then there's all the possibilities that custom extensions opens up (did I mention that Bramble has dynamic extension loading? because it does!). I'd love to see us use bundles of extensions to enable different sorts of learning activities, student levels, and instructional modes. I'm also really excited to see what kind of new curriculum people will build using all of this.

In the meantime, please try things out, file bugs, chat with us on irc #thimble on moznet and have fun making something cool with just your browser. Even better, teach someone how to do it.

Let me close by giving a big shout out to the amazing students (current and former) who hacked on this with me. You should hire them: Gideon Thomas, Kieran Sedgwick, Kenny Nguyen, Jordan Theriault, Andrew Benner, Klever Loza Vega, Ali Al Dallal, Yoav Gurevich, as well as the following top notch Mozilla folks, who have been amazing to us: Hannah Kane, Luke Pacholski, Pomax, Cassie McDaniel, Ashley Williams, Jon Buckley, and others.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting

Mozilla planet - mo, 31/08/2015 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Mozilla's mobile misstep puts the Web at risk - InfoWorld

Nieuws verzameld via Google - mo, 31/08/2015 - 12:04


Mozilla's mobile misstep puts the Web at risk
Mozilla, the foundation behind the Firefox Web browser, may declare in its mission statement that "the Internet is a global public resource that must remain open and accessible," but it's going to have a hard time realizing that vision with less than 1 ...
64-bit Firefox Stable release for Windows delayed again (Ghacks Technology News)

all 2 news articles » Google News
Categorieën: Mozilla-nl planet

Mozilla closes serious security holes in Firefox -

Nieuws verzameld via Google - mo, 31/08/2015 - 09:51

Mozilla closes serious security holes in Firefox
With an out-of-band update to version 40.0.3, Mozilla fixes two security holes in the Firefox browser. The developer itself rates the severity of the holes as high and critical, respectively. Mozilla has once again released a security update for ...

Google Nieuws
Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 94

Mozilla planet - mo, 31/08/2015 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

From the Blogosphere

New Releases & Project Updates
  • Rust web framework comparison. A comparison of some web frameworks written in Rust.
  • eco. A tool for reasoning about breaking changes in Rust ecosystems.
  • mioco. Scalable, asynchronous IO coroutine-based handling for Rust (aka MIO COroutines).
  • cargo-vendor. Cargo subcommand to vendor dependencies.
  • regex-dfa. Proof of concept for fast regexes in Rust (a regex -> DFA compiler).
  • rust-gc. Simple tracing (mark and sweep) garbage collector for Rust.
  • torch. Generate CPU FlameGraphs based on DWARF Debug Info.
  • libwebpki. Web PKI Certificate Validation in Rust.
  • sokoban-rs. An implementation of Sokoban in Rust.
  • urdict. Command line client for Urban Dictionary.
  • power-assert-rs. Power Assert in Rust. Provides better assertion message.
  • rust-passert. Pretty/Power assertion macros for Rust.
  • colerr. Wrap a given process and colorize its standard error output.
  • minesweeper-rs. Simple minesweeper in Rust.
What's cooking on nightly?

104 pull requests were merged in the last week.

New Contributors
  • Adam Crume
  • Artem Shitov
  • Cesar Eduardo Barros
  • Daan Rijks
  • Jake Shadle
  • Matěj Grabovský
  • Michael Choate
  • Nikolay Kondratyev
  • Overmind JIANG
  • Tim JIANG
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week. Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

"And God said, Noah you must transport these animals across a large body of water... but they are not Send. And Noah replied, I shall build a great Arc!"

durka42 on #rust

Thanks to tomprogrammer for the tip. Submit your quotes for next week!

Categorieën: Mozilla-nl planet

Wladimir Palant: Why you probably want to disable jQuery.parseHTML even though you don't call it

Mozilla planet - mo, 31/08/2015 - 00:45

TL;DR: jQuery.parseHTML is a security hazard and will be called implicitly in a number of obvious and not so obvious situations.

Why should you care?

Hey, jQuery is great! It’s so great that Stack Overflow users will recommend it no matter what your question is. And now they have two problems. Just kidding, they will have the incredible power of jQuery:

$("#list").append('<li title="' + title + '">' + text + '</li>');

The above is locating a list in the document, creating a new list item with dynamic content and adding it to the list — all that in a single line that will still stay below the 80 columns limit. And we didn’t even lose readability in the process.

Life is great until some fool comes along and mumbles “security” (yeah, that’s me). Can you tell whether the code above is safe to be used in a web application? Right, it depends on the context. Passing HTML code to jQuery.append will use the infamous innerHTML property implicitly. If you aren’t careful with the HTML code you are passing there, this line might easily turn into a Cross-Site Scripting (XSS) vulnerability.

Does title or text contain data from untrusted sources? Answering that question might be complicated. You need to trace the data back to its source, decide who should be trusted (admin user? localizer?) and make sure you didn’t forget any code paths. And even if you do all that, some other developer (or maybe even yourself a few months from now) might come along and add another code path where that data is no longer trusted. Do you want to bet on this person realizing that they are making an entirely different piece of code insecure?

It’s generally better to give jQuery structured data and avoid taking any chances. The secure equivalent of the code above would be:

$("#list").append($("<li>", {title: title}).text(text));

Not quite as elegant any more but now jQuery will take care of producing a correct HTML structure and you don’t need to worry about that.

Wait, there is more!

There is one remarkable thing about jQuery APIs: each function can take all kinds of parameters. For example, the .append() function we used above can take a DOM element, a CSS selector, HTML code or a function returning any of the above. This keeps function names short, and you only need to remember one function name instead of four.

The side effect is however: even if you are not giving jQuery any HTML code, you still have to keep in mind that the function could accept HTML code. Consider the following code for example:

$(tagname + " > .temporary").remove();

This will look for elements of class temporary within a given tag and remove them, right? Except that the content of tagname better be trusted here. What will happen if an attacker manages to set the value of tagname to "<img src='dummy' onerror='alert(/xss/)'>"? You probably guessed it, the “selector” will be interpreted as HTML code and will execute arbitrary JavaScript code.

There are more than a dozen jQuery functions that will happily accept both selectors and HTML code. Starting with jQuery 1.9.0, security issues here became somewhat less likely: the string has to start with < in order to be interpreted as HTML code. Older versions will accept anything as HTML code as long as it doesn’t contain #; the versions before jQuery 1.6.1 didn’t even have that restriction.

To sum up: you better use jQuery 1.9.0 or above, otherwise your dynamically generated selector might easily end up being interpreted as an HTML string. And even with recent jQuery versions you should be careful with dynamic selectors, the first part of the selector should always be a static string to avoid security issues.
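One way to enforce that rule is to validate any dynamic fragment before splicing it into a selector. The helper below is my own illustrative sketch (not from the article or from jQuery): it only lets plain tag names through, so a payload like the <img> example above is rejected outright.

```javascript
// Only allow plain tag names into dynamically built selectors
// (illustrative helper, not part of jQuery).
function safeTagName(tagname) {
  if (!/^[a-zA-Z][a-zA-Z0-9-]*$/.test(tagname)) {
    throw new Error("Refusing suspicious tag name: " + tagname);
  }
  return tagname;
}

// Usage: $(safeTagName(tagname) + " > .temporary").remove();
```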

Defusing jQuery

With almost all of the core jQuery functionality potentially problematic, evaluating security of jQuery-based code is tricky. Ideally, one would simply disable unsafe functionality so that parsing HTML code by accident would no longer be possible. Unfortunately, there doesn’t seem to be a supported way yet. The approach I describe here seems to work in the current jQuery versions (jQuery 1.11.3 and jQuery 2.1.4) but might not prevent all potential issues in older or future jQuery releases. Use at your own risk! Oh, and feel free to nag jQuery developers into providing supported functionality for this.

There is a comment in the source code indicating that the jQuery.parseHTML function being missing is an expected situation. However, removing this function doesn’t resolve all the issues, and it disables safe functionality as well. Removing jQuery.buildFragment on the other hand doesn’t seem to have any downsides:

delete jQuery.buildFragment;

// Safe element creation still works
$('<img>', {src: "dummy"});

// Explicitly assigning or loading HTML code for an element works
$(document.body).html('<img src="dummy">');
$(document.body).load(url);

// These will throw an exception however
$('<img src="dummy">');
$(document.body).append('<img src="dummy">');
$.parseHTML('<img src="dummy">');

Of course, you have to adjust all your code first before you disable this part of the jQuery functionality. And even then you might have jQuery plugins that will stop working with this change. There are some code paths in the jQuery UI library for example that rely on parsing non-trivial HTML code. So this approach might not work for you.

But how do I create larger DOM structures?

The example creating a single list item is nice of course but what if you have to create some complicated structure? Doing this via dozens of nested function calls is impractical and will result in unreadable code.

One approach would be placing this structure in your HTML document, albeit hidden. Then you would need to merely clone it and fill in the data:

<style type="text/css">
  #entryTemplate { display: none; }
</style>

<div id="entryTemplate">
  <div class="title"></div>
  <div class="description"></div>
</div>

<script>
  var template = $("#entryTemplate");
  var entry = template.clone().removeAttr("id");
  entry.find(".title").text(item.title);
  entry.find(".description").text(item.description);
  $(document.body).append(entry);
</script>

Other templating approaches for JavaScript exist as well of course. It doesn’t matter which one you use as long as you don’t generate HTML code on the fly.

Categorieën: Mozilla-nl planet

Christian Heilmann: Quickie: Fading in a newly created element using CSS

Mozilla planet - snein, 30/08/2015 - 13:59

Update: I got an email from James at VIDesignz who found another solution to this problem using the :empty selector. I added it at the end of the article.

As part of our JSFoo workshop today I was asked to look into an issue a team had: you cannot apply a CSS transition to a newly created element when you change its CSS properties in JavaScript. As I was dealing with professionals, they created a simple JSFiddle to show the problem:

As you see, just changing the property (in this case the opacity) is not enough to trigger a transition. There are a few solutions to this shown in the Fiddle, too, like forcing a reflow, which of course could be a terrible idea.

I played with this and found the solution to be to not change the properties in JavaScript (which is kind of dirty anyways) but leave it all to CSS instead. The biggest part of the solution is not to use a transition but an animation instead and trigger it by applying a class to the newly created element right after adding it to the DOM:

Update: As mentioned by Kyle Simpson on Twitter, there is a way to do the same with a transition, but you need to wrap the creation and applying the class into requestAnimationFrame calls which in turn means some polyfilling:

Update2 You can also use the :empty selector in CSS to achieve the same when you add the new element as a child:

Categorieën: Mozilla-nl planet

Aaron Klotz: On WebExtensions

Mozilla planet - snein, 30/08/2015 - 10:00

There has been enough that has been said over the past week about WebExtensions that I wasn’t sure if I wanted to write this post. As usual, I can’t seem to help myself. Note the usual disclaimer that this is my personal opinion. Further note that I have no involvement with WebExtensions at this time, so I write this from the point of view of an observer.

API? What API?

I shall begin with the proposition that the legacy, non-jetpack environment for addons is not an API. As ridiculous as some readers might consider this to be, please humour me for a moment.

Let us go back to the acronym, “API.” Application Programming Interface. While the usage of the term “API” seems to have expanded over the years to encompass just about any type of interface whatsoever, I’d like to explore the first letter of that acronym: Application.

An Application Programming Interface is a specific type of interface that is exposed for the purposes of building applications. It typically provides a formal abstraction layer that isolates applications from the implementation details behind the lower tier(s) in the software stack. In the case of web browsers, I suggest that there are two distinct types of applications: web content, and extensions.

There is obviously a very well defined API for web content. On the other hand, I would argue that Gecko’s legacy addon environment is not an API at all! From the point of view of an extension, there is no abstraction, limited formality, and not necessarily an intention to be used by applications.

An extension is imported into Firefox with full privileges and can access whatever it wants. Does it have access to interfaces? Yes, but are those interfaces intended for applications? Some are, but many are not. The environment that Gecko currently provides for legacy addons is analogous to an operating system running every single application in kernel mode. Is that powerful? Absolutely! Is that the best thing to do for maintainability and robustness? Absolutely not!

Somewhere a line needs to be drawn to demarcate this abstraction layer and improve Gecko developers’ ability to make improvements under the hood. Last week’s announcement was an invitation to addon developers to help shape that future. Please participate and please do so constructively!

WebExtensions are not Chrome Extensions

When I first heard rumors about WebExtensions in Whistler, my source made it very clear to me that the WebExtensions initiative is not about making Chrome extensions run in Firefox. In fact, I am quite disappointed with some of the press coverage that seems to completely miss this point.

Yes, WebExtensions will be implementing some APIs to be source compatible with Chrome. That makes it easier to port a Chrome extension, but porting will still be necessary. I like the Venn Diagram concept that the WebExtensions FAQ uses: Some Chrome APIs will not be available in WebExtensions. On the other hand, WebExtensions will be providing APIs above and beyond the Chrome API set that will maintain Firefox’s legacy of extensibility.

Please try not to think of this project as Mozilla taking functionality away. In general I think it is safe to think of this as an opportunity to move that same functionality to a mechanism that is more formal and abstract.

Categorieën: Mozilla-nl planet