Mozilla Nederland

Mozilla receives no money for setting Google as search engine in European Firefox - Tweakers

News collected via Google - Wed, 25/11/2015 - 19:34

Mozilla receives no money for setting Google as search engine in European Firefox
Although Google is still the search engine for European Firefox users after installation, Mozilla no longer receives money for it as part of a commercial agreement. That is what the maker of the browser states. Mozilla claims it can manage without Google ...

and more »
Categories: Mozilla-nl planet

Air Mozilla: The Joy of Coding - Episode 36

Mozilla planet - Wed, 25/11/2015 - 19:00

The Joy of Coding - Episode 36 mconley livehacks on real Firefox bugs while thinking aloud.

Categories: Mozilla-nl planet

Firefox maker Mozilla: We don't need Google's money anymore - CNET

News collected via Google - Wed, 25/11/2015 - 17:28


Firefox maker Mozilla: We don't need Google's money anymore
For years, Google in effect sponsored Mozilla by paying for searches launched through the Firefox browser. In 2014, that deal accounted for most of the nonprofit organization's $330 million in revenue, according to financial results just now being ...

and more »
Categories: Mozilla-nl planet

Mozilla wants to make Firefox leaner -

News collected via Google - Wed, 25/11/2015 - 17:22

Mozilla wants to make Firefox leaner
These allow far-reaching changes to the look of the browser, but use the Mozilla-developed XUL, which Mozilla wants to get rid of in the long run as well. As an alternative, there has long been the option of lightweight themes ...
Mozilla will Firefox-Funktionen streichen - Heise Newsticker
all 18 news articles »
Categories: Mozilla-nl planet

Mozilla launches new JavaScript-based Add-ons Validator for developers - BetaNews

News collected via Google - Wed, 25/11/2015 - 15:45


Mozilla launches new JavaScript-based Add-ons Validator for developers
Like apps hitting a store, browser add-ons have to go through validation to ensure that they work properly and are secure. This is the case with Firefox, and developers will be only too aware that the validation tool provided by Mozilla is unreliable ...

Categories: Mozilla-nl planet

Jan de Mooij: Making `this` a real binding in SpiderMonkey

Mozilla planet - Wed, 25/11/2015 - 14:38

Last week I landed bug 1132183, a pretty large patch rewriting the implementation of this in SpiderMonkey.

How this Works In JS

In JS, when a function is called, an implicit this argument is passed to it. In strict mode, this inside the function just returns that value:

function f() {
    "use strict";
    return this;
}
f.call(123); // 123

In non-strict functions, this always returns an object. If the this-argument is a primitive value, it's boxed (converted to an object):

function f() {
    return this;
}
f.call(123); // returns an object: new Number(123)

Arrow functions don't have their own this. They inherit the this value from their enclosing scope:

function f() {
    "use strict";
    () => this; // `this` is 123
}
f.call(123);

And, of course, this can be used inside eval:

function f() {
    "use strict";
    eval("this"); // 123
}
f.call(123);

Finally, this can also be used in top-level code. In that case it's usually the global object (lots of hand waving here).

How this Was Implemented

Until last week, here's how this worked in SpiderMonkey:

  • Every stack frame had a this-argument,
  • Each this expression in JS code resulted in a single bytecode op (JSOP_THIS),
  • This bytecode op boxed the frame's this-argument if needed and then returned the result.

Special case: to support the lexical this behavior of arrow functions, we emitted JSOP_THIS when we defined (cloned) the arrow function and then copied the result to a slot on the function. Inside the arrow function, JSOP_THIS would then load the value from that slot.

There was some more complexity around eval: eval-frames also had their own this-slot, so whenever we did a direct eval we'd ensure the outer frame had a boxed (if needed) this-value and then we'd copy it to the eval frame.

The Problem

The most serious problem was that it's fundamentally incompatible with ES6 derived class constructors, as they initialize their 'this' value dynamically when they call super(). Nested arrow functions (and eval) then have to 'see' the initialized this value, but that was impossible to support because arrow functions and eval frames used their own (copied) this value, instead of the updated one.

Here's a worst-case example:

class Derived extends Base {
    constructor() {
        var arrow = () => this;

        // Runtime error: `this` is not initialized inside `arrow`.
        arrow();

        // Call Base constructor, initialize our `this` value.
        eval("super()");

        // The arrow function now returns the initialized `this`.
        arrow();
    }
}

We currently (temporarily!) throw an exception when arrow functions or eval are used in derived class constructors in Firefox Nightly.

Boxing this lazily also added extra complexity and overhead. I already mentioned how we had to compute this whenever we used eval.

The Solution

To fix these issues, I made this a real binding:

  • Non-arrow functions that use this or eval define a special .this variable,
  • In the function prologue, we get the this-argument, box it if needed (with a new op, JSOP_FUNCTIONTHIS) and store it in .this,
  • Then we simply use that variable each time this is used.

Arrow functions and eval frames no longer have their own this-slot; they just reference the .this variable of the outer function. For instance, consider the function below:

function f() { return () => this.foo(); }

We generate bytecode similar to the following pseudo-JS:

function f() {
    var .this = BoxThisIfNeeded(this);
    return () => (.this).foo();
}

I decided to call this variable .this, because it nicely matches the other magic 'dot-variable' we already had, .generator. Note that these are not valid variable names so JS code can't access them. I only had to make sure with-statements don't intercept the .this lookup when this is used inside a with-statement...

Doing it this way has a number of benefits: we only have to check for primitive this values at the start of the function, instead of each time this is accessed (although in most cases our optimizing JIT could/can eliminate these checks, when it knows the this-argument must be an object). Furthermore, we no longer have to do anything special for arrow functions or eval; they simply access a 'variable' in the enclosing scope and the engine already knows how to do that.

In the global scope (and in eval or arrow functions in the global scope), we don't use a binding for this (I tried this initially but it turned out to be pretty complicated). There we emit JSOP_GLOBALTHIS for each this-expression, then that op gets the this value from a reserved slot on the lexical scope. This global this value never changes, so the JITs can get it from the global lexical scope at compile time and bake it in as a constant :) (Well.. in most cases. The embedding can run scripts with a non-syntactic scope chain, in that case we have to do a scope walk to find the nearest lexical scope. This should be uncommon and can be optimized/cached if needed.)

The Debugger

The main nuisance was fixing the debugger: because we only give (non-arrow) functions that use this or eval their own this-binding, what do we do when the debugger wants to know the this-value of a frame without a this-binding?

Fortunately, the debugger (DebugScopeProxy, actually) already knew how to solve a similar problem that came up with arguments (functions that don't use arguments don't get an arguments-object, but the debugger can request one anyway), so I was able to cargo-cult and do something similar for this.

Other Changes

Some other changes I made in this area:

  • In bug 1125423 I got rid of the innerObject/outerObject/thisValue Class hooks (also known as the holy grail). Some scope objects had a (potentially effectful) thisValue hook to override their this behavior, this made it hard to see what was going on. Getting rid of that made it much easier to understand and rewrite the code.
  • I posted patches in bug 1227263 to remove the this slot from generator objects, eval frames and global frames.
  • IonMonkey was unable to compile top-level scripts that used this. As I mentioned above, compiling the new JSOP_GLOBALTHIS op is pretty simple in most cases; I wrote a small patch to fix this (bug 922406).

We changed the implementation of this in Firefox 45. The difference is (hopefully!) not observable, so these changes should not break anything or affect code directly. They do, however, pave the way for more performance work and fully compliant ES6 Classes! :)

Categories: Mozilla-nl planet

Software-update: Pale Moon 25.8.0 - Tweakers

News collected via Google - Wed, 25/11/2015 - 13:57


Software-update: Pale Moon 25.8.0
Version 25.8 of Pale Moon was released last week. This web browser uses the source code of Mozilla Firefox, but is optimized for modern hardware. The Windows version of Mozilla Firefox is ...

Google News
Categories: Mozilla-nl planet

Mozilla contributor creates diabetes project for the masses -

News collected via Google - Wed, 25/11/2015 - 13:01

Mozilla contributor creates diabetes project for the masses
So I started getting involved in Mozilla around 2009, helping out on IRC and then eventually getting involved with the Mozilla WebFWD program, becoming a team member, then the Mozilla Reps Program, Mozilla DevRel Program, and then for just over two ...
Debian-Based Webconverger 33.1 Kiosk Linux OS Is Out with Mozilla Firefox 42.0 - Softpedia News (blog)

all 3 news articles »
Categories: Mozilla-nl planet


Mozilla Firefox 64-Bit to Come with Microsoft Silverlight Support - Softpedia News

News collected via Google - Wed, 25/11/2015 - 11:47


Mozilla Firefox 64-Bit to Come with Microsoft Silverlight Support
Softpedia News
The 64-bit version of Firefox is still on the table after so many years of development, and Mozilla is now working on the features that will be included in it, with new evidence now showing that support for Silverlight is very likely to be offered ...
Firefox 64-bit to support Microsoft Silverlight after all - Ghacks Technology News

all 3 news articles »
Categories: Mozilla-nl planet

Mozilla Addons Blog: A New Firefox Add-ons Validator

Mozilla planet - Wed, 25/11/2015 - 11:30

The state of add-ons has changed a lot over the past five years, with Jetpack add-ons rising in popularity and Web Extensions on the horizon. Our validation process hasn’t changed as much as the ecosystem it validates, so today Mozilla is announcing we’re building a new Add-ons Validator, written in JS and available for testing today! We started this project only a few months ago and it’s still not production-ready, but we’d love your feedback on it.

Why the Add-ons Validator is Important

Add-ons are a huge part of why people use Firefox. There are currently over 22,000 available, and with work underway to allow Web Extensions in Firefox, it will become easier than ever to develop and update them.

All add-ons listed on AMO are required to pass a review by Mozilla’s add-on review team, and the first step in this process is automated validation using the Add-ons Validator.

The validator alerts reviewers to deprecated API usage, errors, and bad practices. Since add-ons can contain a lot of code, these alerts help developers pinpoint the bits of code that might make a browser buggy or slow, among other problems. It also helps detect insecure add-on code, keeping your browsing fast and safe.

Our current validator is a bit old, and because it’s written in Python with JavaScript dependencies, it is difficult for add-on developers to install themselves. This means add-on developers often don’t know about validation errors until they submit their add-on for review.

This wastes time, introducing a feedback cycle that could have been avoided if the add-on developer could have just run addons-validator myAddon.xpi before they uploaded their add-on. If developers could easily check their add-ons for errors locally, getting their add-ons in front of millions of users is that much faster.

And now they can!

The new Add-ons Validator, in JS

I’m not a fan of massive rewrites, but in this case it really helps. Add-on developers are JavaScript coders and nearly everyone involved in web development these days uses Node.js. That’s why we’ve written the new validator in JavaScript and published it on npm, which you can install right now.

We also took this opportunity to review all the rules the old add-on validator defined, and removed a lot of outdated ones. Some of these hadn’t been seen on AMO for years. This allowed us to cut down on code footprint and make a faster, leaner, and easier-to-work-with validator for the future.

Speaking of which…

What’s next?

The new validator is not production-quality code yet, and some rules are still unimplemented, but we’re looking to finish it in the first half of next year.

We’re still porting over relevant rules from the old validator. Our three objectives are:

  1. Porting old rules (discarding outdated ones where necessary)
  2. Adding support for Web Extensions
  3. Getting the new validator running in production

We’re looking for help with those first two objectives, so if you’d like to help us make our slightly ambitious full-project-rewrite-deadline, you can…

Get Involved!

If you’re an add-on developer, JavaScript programmer, or both: we’d love your help! Our code and issue tracker are on GitHub. We keep a healthy backlog of issues available, so you can help us add rules, review code, or test things out there. We also have a good first bug label if you’re new to add-ons but want to contribute!

If you’d like to try the next-generation add-ons validator, you can install it with npm: npm install addons-validator. Run your add-ons against it and let us know what you think. We’d love your feedback as GitHub issues, or emails on the add-on developer mailing list.

And if you’re an add-on developer who wishes the validator did something it currently doesn’t, please let us know!

We’re really excited about the future of add-ons at Mozilla; we hope this new validator will help people write better add-ons. It should make writing add-ons faster, help reviewers get through add-on approvals faster, and ultimately result in more awesome add-ons available for all Firefox users.

Happy hacking!

Categories: Mozilla-nl planet

Matjaž Horvat: Meet Jarek, splendid Pontoon contributor

Mozilla planet - Wed, 25/11/2015 - 10:43

Some three months ago, a new guy named jotes showed up in the #pontoon IRC channel. It quickly became obvious that he was running a local instance of Pontoon and was ready to start contributing code. Fast forward to the present: he is one of the core Pontoon contributors. In this short period of time, he has implemented several important features, all in his free time:

Top contributors. He started by optimizing the Top contributors page. More specifically, he reduced the number of DB queries by some 99%. Next, he added filtering by time period and later on also by locale and project.

User permissions. Pontoon used to rely on the Mozillians API for giving permissions to localizers. It turned out we need a more detailed approach with team managers manually granting permission to their localizers. Guess who took care of it!

Translation memory. Currently, Jarek is working on translation memory optimizations. Given his track record, our expectations are pretty high. :-)

I have this strange ability to close my eyes when somebody tries to take a photo of me, so on most of them I look like a statue of melancholy. :D

What brought you to Mozilla?
A friend recommended me a documentary called Code Rush. Maybe it will sound stupid, but I was fascinated by the idea of a garage full of fellow hackers with power to change the world. During one of the sleepless nights I visited and after a few patches I knew Mozilla is my place. A place where I can learn something new with help of many amazing people.

Jarek Śmiejczak, thank you for being splendid! And as you said, huge thanks to Linda – love of your life – for her patience and for being an active supporter of the things you do.

To learn more about Jarek, follow his blog at Joyful hackin’.
To start hackin’ on Pontoon, get involved now.

Categories: Mozilla-nl planet

Emily Dunham: Giving Thanks to Rust Contributors

Mozilla planet - Wed, 25/11/2015 - 09:00
Giving Thanks to Rust Contributors

It’s the day before Thanksgiving here in the US, and the time of year when we’re culturally conditioned to be a bit more public than usual in giving thanks for things.

As always, I’m grateful that I’m working in tech right now, because almost any job in the tech industry is enough to fulfill all of one’s tangible needs like food and shelter and new toys. However, plenty of my peers have all those material needs met and yet still feel unsatisfied with the impact of their work. I’m grateful to be involved with the Rust project because I know that my work makes a difference to a project that I care about.

Rust is satisfying to be involved with because it makes a difference, but that would not be true without its community. To say thank you, I’ve put together a little visualization for insight into one facet of how that community works its magic:


The stats page is interactive. The pretty graphs take a moment to render, since they’re built in your browser.

There’s a whole lot of data on that page, and you can scroll down for a list of all authors. It’s especially great to see the high impact that the month’s new contributors have had, as shown in the group comparison at the bottom of the “natural log of commits” chart!

It’s made with the little toy I wrote a while ago called orglog, which builds on gitstat to help visualize how many people contribute code to a GitHub organization. It’s deployed to GitHub Pages with TravisCI (eww) so that Rust’s organization-wide contributor stats are automatically rebuilt and updated every day.

If you’d like to help improve the page, you can contribute to gitstat or orglog!

Categories: Mozilla-nl planet

Tarek Ziadé: Should I use PYTHONOPTIMIZE?

Mozilla planet - Wed, 25/11/2015 - 08:50

Yesterday, I was reviewing some code for our projects and in a PR I saw something roughly similar to this:

try:
    assert hasattr(SomeObject, 'some_attribute')
    SomeObject.some_attribute()
except AssertionError:
    SomeObject.do_something_else()

Relying on assert like that didn't strike me as a good idea, because when Python is launched with the PYTHONOPTIMIZE flag (which you can activate with the eponymous environment variable, or with -O or -OO), all assertions are stripped from the code.
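A version that keeps the same behavior without relying on assertions could look like the sketch below (SomeObject and its methods are stand-ins mirroring the hypothetical example above):

```python
# Stand-in object mirroring the hypothetical SomeObject from the review.
class SomeObject:
    @staticmethod
    def some_attribute():
        return "attribute called"

    @staticmethod
    def do_something_else():
        return "fallback called"

# An explicit hasattr() check keeps working even when Python strips
# assert statements under -O / -OO.
if hasattr(SomeObject, 'some_attribute'):
    result = SomeObject.some_attribute()
else:
    result = SomeObject.do_something_else()

print(result)
```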

To my surprise, a lot of people dismiss -O and -OO, saying that no one uses those flags in production and that code containing asserts is fine.

PYTHONOPTIMIZE has three possible values: 0, 1 (-O) or 2 (-OO). 0 is the default, nothing happens.

For 1 this is what happens:

  • asserts are stripped
  • the generated bytecode files are using the .pyo extension instead of .pyc
  • sys.flags.optimize is set to 1
  • __debug__ is set to False

And for 2:

  • everything 1 does
  • docstrings are stripped.
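These effects are easy to observe with a small script (a sketch; run it plain, then with -O, then with -OO, and compare the output):

```python
import sys

def greet():
    """This docstring is stripped under -OO."""
    return "hello"

print("optimize level:", sys.flags.optimize)          # 0 by default, 1 with -O, 2 with -OO
print("__debug__:", __debug__)                        # False under -O and -OO
print("docstring kept:", greet.__doc__ is not None)   # False under -OO

try:
    assert False, "this assert is stripped under -O and -OO"
    print("assert was stripped")
except AssertionError:
    print("assert still runs")
```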

To my knowledge, one legacy reason to run -O was to produce a more efficient bytecode, but I was told that this is not true anymore.

Another behavior that has changed is related to pdb: you used to be unable to do step-by-step debugging when PYTHONOPTIMIZE was activated.

Last, the .pyo vs .pyc distinction should go away one day, according to PEP 488.

So where does that leave us? Is there any good reason to use those flags?

Some applications leverage the __debug__ flag to offer two running modes: one with more debug information, or a different behavior when an error is encountered.

That's the case for pyglet, according to their doc.

Some companies are also using the -OO mode to slightly reduce the memory footprint of running apps. It seems to be the case at YouTube.

And it's entirely possible that Python itself in the future, adds some new optimizations behind that flag.

So yeah, even if you don't use those options yourself, it's good practice to make sure that your Python code is tested with all possible values of PYTHONOPTIMIZE.

It's easy enough: just run your tests with -O, with -OO, and without either, and make sure your code does not depend on docstrings or assertions.

If you have to depend on one of them, make sure your code gracefully handles the optimize modes or raises an early error explaining why you are not compatible with them.
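Such an early error can be as simple as the sketch below (a hypothetical application that relies on asserts and refuses to start in optimized mode):

```python
import sys

# Fail fast if someone runs us with -O/-OO: this (hypothetical) application
# relies on assert statements for its sanity checks.
if sys.flags.optimize >= 1:
    raise RuntimeError(
        "This application depends on assert statements; "
        "please run it without PYTHONOPTIMIZE / -O / -OO."
    )

print("running with assertions enabled")
```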

Thanks to Brett Cannon, Michael Foord and others for their feedback on Twitter on this.

Categories: Mozilla-nl planet

Debian-Based Webconverger 33.1 Kiosk Linux OS Is Out with Mozilla Firefox 42.0 - Softpedia News (blog)

News collected via Google - Wed, 25/11/2015 - 04:12


Debian-Based Webconverger 33.1 Kiosk Linux OS Is Out with Mozilla Firefox 42.0
Softpedia News (blog)
According to the brief release notes posted by the project's devs, Webconverger 33.1 is still based on the latest stable release of the Debian GNU/Linux 8 (Jessie) operating system and updates the Mozilla Firefox web browser to version 42.0 ...

Categories: Mozilla-nl planet

James Long: A Simple Way to Route with Redux

Mozilla planet - Wed, 25/11/2015 - 01:00

This post took months to write. I wasn't working on it consistently, but every time I made progress something would happen that made me scratch everything. It started off as an explanation of how I integrated react-router 0.13 into my app. Now I'm going to talk about how redux-simple-router came to be and explain the philosophy behind it.

Redux embraces a single atom app state to represent all the state for your UI. This has many benefits, the biggest of which is that pieces of state are always consistent with each other. If we update the tree immutably, it's very easy to make atomic updates to the state and keep everything consistent (as opposed to mutating individual pieces of state over time).

Conceptually, the UI is derived from this app state. Everything needed to render the UI is contained in this state, and this is powerful because you can inspect/snapshot/replay the entire UI just by targeting the app state.

But it gets awkward when you want to work with other libraries like react-router that want to take part in state management. react-router is a powerful library for component-based routing; it inherently manages the routing state to provide the user with powerful APIs that handle everything gracefully.

So what do we do? We could use react-router and redux side-by-side, but then the app state object does not contain everything needed for the UI. Snapshotting, replaying, and all that is broken.

One option is to try to take control over all the router state and proxy everything back to react-router. This is what redux-router attempts to do, but it's very complicated and prone to bugs. react-router may put unserializable state in the tree, thus still breaking snapshotting and other useful features.

After integrating redux and react-router in my site, I extracted my solution to a new project: redux-simple-router. The goal is simple: let react-router do all the work. They have already developed very elegant APIs for implementing routing components, and you should just use them.

If you use the regular react-router APIs, how does it work? How does the app state object know anything about routing? Simple: we already have a serialized form of all the react-router state: the URL. All we have to do is store the URL in the app state and keep it in sync with react-router, and the app state has everything it needs to render the UI.

People think that the app state object has to have everything, but it doesn't. It just has to have the primary state; anything that can be deduced can live outside of redux.

Above, the blue thing is serializable dumb app state, and the green things are unserializable programs that exist in memory. As long as you can recreate the green things above when loading up an app state, you're fine. And you can easily do this with react-router by just initializing it with the URL from the app state.

Since launching it, a bunch of people have already helped improve it in many ways, and a lot of people seem to be finding it useful. Thank you for providing feedback and contributing patches!

Just use react-router

The brilliant thing about just tracking the URL is that it takes almost no code at all. redux-simple-router is only 87 lines of code and it's easy to understand what's going on. You already have a lot of concepts to juggle (react, redux, react-router, etc); you shouldn't have to learn another large abstraction.

Everything you want to do can be done with react-router directly. A lot of people coming from redux-router seem to be surprised by this. Some people don't realize the following:

  • Routing components have all the information you need as properties. See the docs; the current location, params, and more are all there for you to use.
  • You can block route transitions with listenBefore.
  • You can inject code to run when a routing component is created with createElement, if you want to do stuff like automatically start loading data.

We should invest in the react-router community and figure out the right patterns for everybody using it, not just people using redux. We also get to use new react-router features immediately.

The only additional thing redux-simple-router provides is a way to change the URL with the updatePath action creator. The reason is that it's a very common use case to update the URL inside of an action creator; you might want to redirect the user to another page depending on the result of an async request, for example. You don't have access to the history object there.

You shouldn't really even be selecting the path state from the redux-simple-router state; try to only make top-level routing components actually depend on the URL.

So how does it work?

You can skip this section if you aren't interested in the nitty-gritty details. We use a pretty clever hack to simplify the syncing though, so I wanted to write about it!

You call syncReduxAndRouter with history and store objects and it will keep them in sync. It does this by listening to history changes with history.listen and state changes with store.subscribe and telling each other when something changes.

It's a little tricky because each listener needs to know when to "stop." If the app state changes, it needs to call history.pushState, but the history listener should see that it's up-to-date and not do anything. When it's the other way around, the history listener needs to call store.dispatch to update the path but the store listener should see that nothing has changed.

First, let's talk about history. How can we tell if anything has changed? We get the new location object so we just stringify it into a URL and then compare it with the URL in the app state. If it's the same, we do nothing. Pretty easy!

Detecting app state changes is a little harder. In previous versions, we were comparing the URL from state with the current location's URL. But this caused tons of problems. For example, if the user has installed a listenBefore hook, it will be invoked from the pushState call in the store subscriber (because the app state URL is different from the current URL). The user might dispatch actions in listenBefore and update other state though, and since we are subscribed to the whole store, our listener will run again. At this point the URL has not been updated yet so we will call pushState again, and the listenBefore hook will be called again, causing an infinite loop.

Even if we could somehow only trigger pushState calls when the URL app state changes, this is not semantically correct. Every single time the user tries to change the URL, we should always call pushState even if the URL is the same as the current one. This is how browsers work; think of clicking on a link to "/foo" even though "/foo" is the current URL: what happens?

In redux, reducers are pure so we cannot call pushState there. We could do it in a middleware (which is what redux-router does) but I really don't want to force people to install a middleware just for this. We could do it in the action creator, but that seems like the wrong time: reducers may respond to the UPDATE_PATH action and update some state, so we shouldn't rerender routing components until after reducing.

I came up with a clever hack: just use an id in the routing state and increment it whenever we want to trigger a pushState! This has drastically simplified everything, made it far more robust, and even better made testing really easy because we can just check that the changeId field is the right number.

We just have to keep track of the last changeId we've seen and compare it in the store subscriber. This means there's always a 1:1 relationship between updatePath action creator calls and pushState calls, no matter what. Try any transition logic you want; it should work!

It also simplifies how changes from the router to redux work: the router calls the updatePath action creator with an avoidRouterUpdate flag, and all we have to do in the reducer is not increment changeId, so we won't call back into the router.
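The technique can be sketched roughly as follows (a simplified illustration, not the library's actual source; the names routeReducer and makeStoreSubscriber are made up):

```javascript
// Routing state: the URL plus a counter that signals "push me".
const initialState = { path: '/', changeId: 1 };

function routeReducer(state = initialState, action) {
  if (action.type !== 'UPDATE_PATH') return state;
  return {
    path: action.path,
    // Updates coming *from* the router carry avoidRouterUpdate, so we
    // don't bump changeId and never echo the change back to the router.
    changeId: state.changeId + (action.avoidRouterUpdate ? 0 : 1),
  };
}

// Store -> router direction: exactly one pushState call per bump,
// regardless of how the new URL compares to the current one.
function makeStoreSubscriber(history, getRoutingState) {
  let lastChangeId = initialState.changeId;
  return () => {
    const routing = getRoutingState();
    if (routing.changeId !== lastChangeId) {
      lastChangeId = routing.changeId;
      history.pushState(null, routing.path);
    }
  };
}
```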

I think my favorite side effect of this technique is testing. Look at the tests and you'll see I can compare a bunch of changeIds to make sure that the right number of pushState calls are being made.

More Complex Examples of react-router

Originally I was going to walk through how I used react-router for complex use cases like server-side rendering. This post is already too long to go into details, and I don't have time to write another post, so I will leave you with a few points that will help you dig into the code to see how it works:

  • There's no problem making a component both a redux "connected" component and a route component. Here I'm exporting a connected Drafts page that will be installed in the router. That means the component can both select from state and be controlled by the router.
  • I perform data fetching by specifying a static populateStore function. On the client, the router will call this in createElement, seen here, and the backend can prepopulate the store by iterating over all route components and calling this method. The action creators are responsible for checking whether the data is already loaded and not re-fetching on the frontend if it's already there (example).
  • The server uses the lower-level match API seen here to get the current route. This gives us flexibility to control everything. We store the current HTML status in redux (like a 500) so that components can change it. For example, the Post component can set a 404 code if the post isn't found. The server sends the page with the right HTML status code.
  • This also means the top-level App component can inspect the status code to see if it should display a special 404 or 500 page.

I really like how the react-router 1.0 API turned out. The idea seems to be use low-level APIs on the server so that you can control everything, but the client can simply render a Router component to automatically handle state. The two environments are different enough that this works great.

That's It

It's my goal to research ideas and present them in a way that helps other people. In this case a cool project, redux-simple-router, came out of it. I hope this post explains the reasons behind it, and that the links above help show more complicated examples of using it.

We are working on porting react-redux-universal-hot-example to redux-simple-router, so that will be another example of all kinds of uses. We're really close to finishing it, and you can follow along in this issue.

I'm also going to add more examples in the repo itself. But the goal is that you should be able to just read react-router's docs and do whatever it tells you to do.

Lastly, the folks working on redux-router have put in a lot of good work and I don't mean to diminish that. I think it's healthy for multiple approaches to exist and everyone can learn something from each one.

Categorieën: Mozilla-nl planet

Nick Cameron: Macro plans, overview

Mozilla planet - wo, 25/11/2015 - 00:05

In this post I want to give a bit of an overview of the changes I'm planning to propose for the macro system. I haven't worked out some of the details yet, so this could change a lot.

To summarise, the broad thrusts of the redesign are:

  • changes to the way procedural macros work and the parts of the compiler they have access to,
  • change the hygiene algorithm, and what hygiene is applied to,
  • address modularisation issues,
  • syntactic changes.

I'll summarise each here, but there will probably be a blog post about each before a proper RFC. At the end of this blog post I'll talk about backwards compatibility.

I'd also like to support macro and ident inter-operation better, as described here.

Procedural macros

Mechanism

I intend to tweak the system of traits and enums, etc., to make procedural macros easier to use. My intention is that there should be a small number of function signatures that can be implemented (not just one, unfortunately: I believe function-like vs attribute-like macros will take different arguments; furthermore, I think we need versions for hygienic expansion and for expansion with DIY hygiene, and in the latter case the function must be supplied with some hygiene information in order to do its own hygiene. I'm not certain that is the right approach though). Although this is not as Rust-y as using traits, I believe the simplicity benefits outweigh the loss in elegance.

All macros will take a set of tokens in and generate a set of tokens out. The token trees should be a simplified version of the compiler's internal token trees to allow procedural macros more flexibility (and forwards compatibility). For attribute-like macros, the code that they annotate must still parse (necessary due to internal attributes, unfortunately), but will be supplied as tokens to the macro itself.
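To make the token-in/token-out contract concrete, one possible shape of such signatures is sketched below. This is purely illustrative pseudocode: none of these names or types exist, they are assumptions about how the proposal might look.

```
// Illustrative pseudocode only -- assumed names, not a real API.

// Function-like macro: foo!(tokens) expands to tokens.
fn expand_foo(input: TokenStream, cx: &mut MacroContext) -> TokenStream;

// Attribute-like macro: the annotated item must still parse, but is
// handed to the macro as tokens alongside the attribute's own tokens.
fn expand_attr(attr: TokenStream, item: TokenStream,
               cx: &mut MacroContext) -> TokenStream;
```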

I intend that libsyntax will remain unstable and (stable) procedural macros will not have direct access to it or any other compiler internals. We will create a new crate, libmacro (or something) which will re-export token trees from libsyntax and provide a whole bunch of functionality specifically for procedural macros. This library will take the usual path to stabilisation.

Macros will be able to parse tokens and expand macros in various ways. The output will be some kind of AST. However, after manipulating the AST, it is converted back into tokens to be passed back to the macro expander. Note that this requires us storing hygiene and span information directly in the tokens, not the AST.

I'm not sure exactly what the AST we provide should look like, nor the bounds on what should be in libmacro vs what can be supplied by outside libraries. I would like to start by providing no AST at all and see what the eco-system comes up with.

It is worth thinking about the stability implications of this proposal. At some point in the future, the procedural macro mechanism and libmacro will be stable. So, a crate using stable Rust can use a crate which provides a procedural macro. At some point later we evolve the language in a non-breaking way which changes the AST (internal to libsyntax). We must ensure that this does not change the structure of the token trees we give to macros. I believe that should not be a problem for a simple enough token tree. However, the procedural macro might expect those tokens to parse in a certain way, which they no longer do causing the procedural macro to fail and thus compilation to fail. Thus, the stability guarantees we provide users can be subverted by procedural macros. However, I don't think this is possible to prevent. In the most pathological case, the macro could check if the current date is later than a given one and in that case panic. So, we are basically passing the buck about backwards compatibility with the language to the procedural macro authors and the libraries they use. There is an obvious hazard here if a macro is widely used and badly written. I'm not sure if this can be addressed, other than making sure that libraries exist which make compatibility easy.


I hope that the situation for macro authors will be similar to that for other authors: we provide a small but essential standard library (libmacro), and more functionality is provided by the ecosystem via crates.io.

The functionality I expect to see in libmacro should be focused on interaction with the rest of the parser and macro expander, including macro hygiene. I expect it to include:

  • interning a string and creating an ident token from a string
  • creating and manipulating tokens
  • expanding macros (macro_rules and procedural), possibly in different ways
  • manipulating the hygiene of tokens
  • manipulating expansion traces for spans
  • name resolution of module and macro names - note that I expect these to return token trees, which gives a macro access to the whole program, I'm not sure this is a good idea since it breaks locality for macros
  • checking and setting feature gates
  • marking attributes and imports as used

The most important external libraries I would like to see would provide an AST-like abstraction, parsing, and tools for building and manipulating the AST. These already exist (syntex, aster), so I am confident we can have good solutions in this space, working towards crates which are provided on crates.io but are officially blessed (similar to the goals of other libraries).

I would very much like to see quasi-quoting and pattern matching in blessed libraries. These are important tools, the former currently provided by libsyntax. I don't see any reason these must be provided by libmacro, and since quasi-quoting produces AST, they probably can't be (since they would be associated with a particular AST implementation). However, I would like to spend some time improving the current quasi-quoting system, in particular to make it work better with hygiene and expansion traces.

Alternatively, libmacro could provide quasi-quoting which produces token trees, and there is then a second step to produce AST. Since hygiene info will operate at the tokens level, this might be possible.

Pattern matching on tokens should provide functionality similar to that provided by macro_rules!, making writing procedural macros much easier. I'm convinced we need something here, but not sure of the design.

Naming and registration

See the section on modularisation below; the same things apply to procedural macros as to macro_rules macros.

A macro called baz declared in a module bar inside a crate foo could be called using ::foo::bar::baz!(...) or imported using use foo::bar::baz!; and used as baz!(...). Other than a feature flag until procedural macros are stabilised, users of macros need no other annotations. When looking at an extern crate foo statement, the compiler will work out whether we are importing macros.

I expect that functions expected to work as procedural macros would be marked with an attribute (#[macro] or some such). We would also have #[cfg(macro)] for helper functions, etc. Initially, I expect a whole crate must be #[cfg(macro)], but eventually I would like to allow mixing in a crate (just as we allow macro_rules macros in the same crate as normal code).

There would be no need to register macros with the plugin registry.

A vaguely related issue is whether interaction between macros and the compiler should be via normal function calls (to libmacro) or via IPC. The latter would allow procedural macros to be used without dynamic linking and thus permit a statically linked compiler.


Hygiene

I plan to change the hygiene algorithm we use from mtwt to sets of scopes. This allows us to use hygiene information in name resolution, thus alleviating the 'absolute path' problem in macros. We can also use this information to support hygienic checking of privacy. I'll explain the algorithm and how it will apply to Rust in another blog post. I think this algorithm will be easier for procedural macro authors to work with too.

Orthogonally, I want to make all identifiers hygienic, not just variables and labels. I would also like to support hygienic unsafety. I believe both these things are more implementation than design issues.


Modularisation

The goal here is to treat macros the same way as other items: naming via paths and allowing imports. This includes naming of attributes, which will allow paths for naming (e.g., #[foo::bar::baz]). Ordering of macros should also not be important. The mechanism to support this is moving parts of name resolution and privacy checking to macro expansion time. The details of this (and the interaction with sets-of-scopes hygiene, which essentially gives a new mechanism for name resolution) are involved.


Syntax

These things are nice to have, rather than core parts of the plan. New syntax for procedural macros is covered above.

I would like to fix the expansion issues with arguments and nested macros, see blog post.

I propose that new macros should use macro! rather than macro_rules!.

I would like a syntactic form for macro_rules macros which only matches a single pattern and is more lightweight than the current syntax. The current syntax would still be used where there are multiple patterns. Something like,

macro! foo(...) => { ... }

Perhaps we drop the => too.

We need to allow privacy annotations for macros; I'm not sure of the best way to do this: pub macro! foo { ... } or macro! pub foo { ... } or something else.

Backwards compatibility

Procedural macros are currently unstable; there will be a lot of breaking changes, but the reward is a path to stability.

macro_rules! is a stable part of the language. It will not break (modulo the usual caveat about bug fixes). The plan is to introduce a whole new macro system around macro!; if you have macros currently called macro!, I guess we break them (we will run a warning cycle for this and try to help anyone who is affected). We will deprecate macro_rules! once macro! is stable, and we will track usage with the intention of removing macro_rules at 2.0 or 3.0 or whatever. All macros in the standard libraries will be converted to using macro!; this is a breaking change, which we will mitigate by continuing to support the old but deprecated versions of the macros. Hopefully, modularisation will support this (needs more thought to be sure). The only change for users of macros will be how the macro is named, not how it is used (modulo new applications of hygiene).

Most existing macro_rules! macros should be valid macro! macros. The only difference will be using macro! instead of macro_rules! and the new scoping/naming rules may lead to name clashes that didn't exist before (note this is not in itself a breaking change, it is a side effect of using the new system). Macros converted in this way should only break where they take advantage of holes in the current hygiene system. I hope that this is a low enough bar that adoption of macro! by macro_rules! authors will be quick.


Hygiene

There are two backwards compatibility hazards with hygiene, both affecting only macro_rules! macros: we must emulate the mtwt algorithm with the sets of scopes algorithm, and we must ensure unhygienic name resolution for items which are currently not treated hygienically. In the second case, I think we can simulate unhygienic expansion for types, etc., by using the set of scopes for the macro use-site, rather than the proper set. Since only local variables are currently treated hygienically, I believe this means the first case will Just Work. More details on this in a future blog post.

Categorieën: Mozilla-nl planet

Air Mozilla: Privacy for Normal People

Mozilla planet - ti, 24/11/2015 - 19:00

Privacy for Normal People Mozilla cares deeply about user control. But designing products that protect users is not always obvious. Sometimes products give the illusion of control and security...

Categorieën: Mozilla-nl planet

Armen Zambrano: Welcome F3real, xenny and MikeLing!

Mozilla planet - ti, 24/11/2015 - 18:58
As described by jmaher, this week we started the first week of mozci's quarter of contribution.

I want to personally welcome Stefan, Vaibhav and Mike to mozci. We hope you learn a lot, and we thank you for helping Mozilla move forward in this corner of our automation systems.

I also want to give thanks to Alice for committing to mentoring. This would not be possible without her help.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Categorieën: Mozilla-nl planet

Armen Zambrano: Mozilla CI tools meet up

Mozilla planet - ti, 24/11/2015 - 18:52
In order to help the contributors of mozci's quarter of contribution, we have set up a Mozci meet up this Friday.

If you're interested in learning about Mozilla's CI, how to contribute, or how to build your own scheduling with mozci, come and join us!

9am ET -> other time zones
Vidyo room:

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
Categorieën: Mozilla-nl planet