Mozilla Nederland (the Dutch Mozilla community)

Planet Mozilla - http://planet.mozilla.org/

Ian Bicking: A Product Journal: Conception

Thu, 15/01/2015 - 07:00

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services.

When Labs closed and I entered management I decided not to do any programming for a while. I had a lot to learn about management, and that’s what I needed to focus on. Whether I learned what I needed to, I don’t know, but I have been getting a bit tired.

We went through a fairly extensive planning process towards the end of 2014. I thought it was a good process. We didn’t end up where we started, which is a good sign – often planning processes are just documenting the conventional wisdom and status quo of a group or project, but in a critically engaged process you are open to considering and reconsidering your goals and commitments.

Mozilla is undergoing some stress right now. We have a new search deal, which is good, but we’ve been seeing declining market share, which is bad. And when you consider that desktop browsers are themselves a decreasing share of the market, it looks worse.

The first round of planning around this has focused on decreasing attrition among our existing users. Longer term, much of the focus has been on increasing the quality of our product. A noble goal of course, but does it lead to growth? I suspect it can only address attrition; the people who don’t use Firefox but could won’t have an opportunity to see what we are making. If you have other growth techniques, then focusing on attrition can be sufficient. Chrome, for instance, does significant advertising and has deals to side-load Chrome onto people’s computers. Mozilla doesn’t have the same resources for that kind of growth.

When we finished up the planning process I realized, damn, all our plans were about product quality. And I liked our plan! But something was missing.

This perplexed me for a while, but I didn’t really know what to make of it. Talking with a friend about it, he asked: then what do you want to make? – a seemingly obvious question that no one had asked me, and somehow hearing the question coming at me was important.

Talking through ideas, I reluctantly kept coming back to sharing. It’s the most incredibly obvious growth-oriented product area, since every use of a product is a way to implore non-users to switch. But sharing is so competitive. When I first started with Mozilla we would obsess over the problem of Facebook and Twitter and silos, and then think about it until we threw our hands up in despair.

But I’ve had this trick up my sleeve that I pull out for one project after another because I think it’s a really good trick: make a static copy of the live DOM. Mostly you just iterate over the elements, get rid of scripts and stuff, do a few other clever things, use <base href> and you are done! It’s like a screenshot, but it’s also still a webpage. I’ve been trying to do something with this for a long time. This time let’s use it for sharing…?
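
To make the trick concrete, here is a minimal sketch of the idea in plain DOM JavaScript. This is my own illustration, not the actual implementation; the function name and the details of what gets stripped are assumptions.

// Minimal sketch of the "freeze the live DOM" trick (illustrative only).
function freezePage(doc) {
  // Clone the document element so the live page is left untouched.
  var clone = doc.documentElement.cloneNode(true);

  // Strip anything executable; the frozen copy should be inert.
  var scripts = clone.querySelectorAll('script, noscript');
  for (var i = 0; i < scripts.length; i++) {
    scripts[i].parentNode.removeChild(scripts[i]);
  }

  // A <base href> keeps relative URLs resolving against the original
  // location once the copy is hosted somewhere else.
  var base = doc.createElement('base');
  base.href = doc.location.href;
  var head = clone.querySelector('head');
  if (head) {
    head.insertBefore(base, head.firstChild);
  }

  // The result is a static, screenshot-like, but still-HTML page.
  return '<!DOCTYPE html>\n' + clone.outerHTML;
}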

So, the first attempt at a concept: freeze the page as though it’s a fancy screenshot, upload it somewhere with a URL, maybe add some fun features because now it’s disassociated from its original location. The resulting page won’t 404, you can save personalized or dynamic content, we could add highlighting or other features.

The big difference with past ideas I’ve encountered is that here we’re not trying to compete with how anyone shares things; this is a tool to improve what you share. That’s compatible with Facebook and Twitter and SMS and anything.

If you think pulling a technology out of your back pocket and building a product around it is like putting the cart in front of the horse, well maybe… but you have to start somewhere.

[I’ll add a link here to the next post once it is written]


Benjamin Kerensa: Call for Help: Mentors Wanted!

Thu, 15/01/2015 - 05:11

This is very last minute, as I have not been able to find enough people interested by directly approaching folks, but I have a great mentoring opportunity for Mozillians. One of my friends is a professor at Western Oregon University and tries to expose her students to a different Open Source project each term, and up to bat this term is the Mozilla Project.

So I am looking for mentors from across the project who would be willing to correspond a couple times a week and answer questions from students who are learning about Firefox for Android or Firefox for Desktop.

It is ok not to be an expert on all the questions coming your way, but if you do not know an answer, you would help find the right person and get the students the answers they need so they do not hit a roadblock.

This opportunity is open to both staff and contributors and the time commitment should not exceed an hour or two a week but realistically could be as little as twenty minutes or so a week to exchange emails.

Not only does this opportunity help expose these students to Open Source, it also introduces them to contributing to our project. In the past, I have mentored students from WOU, and the end result was that many from the class continued on as contributors.

Interested? Get in touch!


Michael Verdi: Refresh from web in Firefox 35

Thu, 15/01/2015 - 01:32

[Image: refresh]
Back in July, I mentioned working on making download pages offer a reset (now named “Refresh”) when you are trying to download the same exact version of Firefox that you already have. Well, this is now live with Firefox 35 (released yesterday) and it works on our main download page (pictured above) and on the product support page. In addition, our support documentation can now include refresh buttons. This should make the refresh feature easier to discover and use and let people recover from problems quickly.


James Long: Presenting The Most Over-Engineered Blog Ever

Thu, 15/01/2015 - 01:00

Several months ago I posted about plans to rebuild this blog. After a few false starts, I finally finished and launched the new version two weeks ago. The new version uses React and is way better (and I open-sourced it).

Notably, using React my app is split into components that can all be rendered on the client or the server. I have full power to control what gets rendered on each side.

And it feels weird.

It's what people call an "isomorphic" app, which is a fancy way of saying that generally I don't have to think about the server or the client when writing code; it just works in both places. When we finally got JavaScript on the server, this is what everyone dreamed about, but until React there hasn't been a great way to realize this.

I really enjoyed this exercise. I was so embedded with the notion that the server and client are completely separate that it was awkward and weird for a while. It took me a while to figure out how to even structure my project. Eventually, I learned something new that will greatly impact all of my future projects (which is the best kind of learning!).

If you want to see what it's like logged in, I set up a demo site, test.jlongster.com, which has admin access. You can test things like my simple markdown editor.

Yes, this is just a blog. Yes, this is absolutely over-engineering. But it's fun, and I learned. If we can't even over-engineer our own side projects, well, I just don't want to live in that world.

This is a quick post-mortem of my experience and some explanation of how it works. The code is up on GitHub, but beware: it is still quite messy, as I did all of this in a small amount of time.

One thing I should note is that I use js-csp (soon to be renamed) channels for all my async work. I find this to be the best way to do anything asynchronous, and you can read my article about it if interested.

The Server & Client Dance

You might be wondering why this is so exciting, since we've been rendering complex pages statically from the server and hooking them up on the client-side for ages. The problem is that you used to have to write code completely separately, one file for the server and one for the client, even though you're describing the same components/behaviors/what have you. That turns out to be a disaster for complex apps (hence the push for fully client-side apps that pull data from APIs).

Unfortunately, full client-side apps (or "single page apps") suffer from slow startup time and lack of discoverability from search engines.

We really want to write components that aren't bound to either the server or the client. And React lets us do that:

let dom = React.DOM;
let Toolbar = React.createClass({
  load: function() {
    // loading functionality...
  },
  render: function() {
    return dom.div(
      { className: 'toolbar' },
      dom.button({ onClick: this.load }, 'Load')
    );
  }
});

This looks like a front-end component, but it's super simple to render on the back-end: React.renderToString(Toolbar()), which would return something like <div class="toolbar"><button>Load</button></div>. The coolest part is that when the browser loads the rendered HTML, you can just do React.render(Toolbar(), element), and React won't touch the DOM except to simply hook up your event handlers (like the onClick). element would be the DOM element wherever the toolbar was prerendered.
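
As a rough sketch of that round trip (the express route, element id, and file paths below are my own illustrative assumptions, not code from this blog):

// --- Server side (e.g. an express handler): render to static markup ---
var express = require('express');
var React = require('react');
var Toolbar = require('../src/components/toolbar');

var app = express();
app.get('/', function (req, res) {
  var html = React.renderToString(Toolbar());
  res.send(
    '<div id="toolbar">' + html + '</div>' +
    '<script src="/js/bundle.js"></script>'
  );
});
app.listen(8000);

// --- Client side (in the bundle): React recognises the prerendered markup
// and only attaches event handlers instead of rebuilding the DOM ---
React.render(Toolbar(), document.getElementById('toolbar'));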

It's not that hard to build a workflow on top of this that can fully prerender a complex app so that it loads instantly on the client, but additionally all the event handlers get hooked up appropriately. To do this, you do need to figure out how to specify data dependencies so that the server can pull in everything it needs to render (see later sections), but there are libraries to help with this. I'm never doing $('.date-picker').datePicker() again, but I'm also not bound to a fully client-side technology like Web Components or Angular (Ember is finally working on server-side rendering).

Full prerendering is nice, but you probably don't need quite all of that. Most likely, you want to prerender some of the basic structure, but let the client-side pull in the rest. The beauty of React's component approach is that it's easy (once you have server-side rendering going with routes & data dependencies) to fine-tune precisely what gets rendered where. Each component can configure itself to be server-renderable or not, and the client basically picks up wherever the server left off. It depends on how you set it up, so I won't go into detail about it, but I certainly felt empowered with control to fine-tune everything.

Not to mention that anything server renderable is easily testable!

A Quick Glance at Code

React provides a great infrastructure for server-rendering, but you need a lot more. You need to be able to run the same routes server-side and figure out which data your components need. This is where react-router comes in. This is the critical piece for complex React apps.

It's a great router for the client-side, but it also provides the pieces for server-rendering. For my blog, I specify the routes in routes.js, and the router is run in the bootstrap file. The server and client call this run function. The router tells me the components that are required for the specific URL.
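
Something like the following captures the shape of that shared bootstrap (this assumes the react-router 0.x Router.run API that was current at the time; the names are illustrative, not the blog's actual code):

// Shared bootstrap: both server and client hand a location to the router
// and get back the matched handler for that URL.
var Router = require('react-router');
var routes = require('./routes');

function run(location, render) {
  // On the server, `location` is the request path (e.g. req.path);
  // on the client it is Router.HistoryLocation for pushState transitions.
  Router.run(routes, location, function (Handler, state) {
    // state.routes lists the matched route handlers for this URL; the
    // data-fetching step described below walks over exactly this list.
    render(Handler, state);
  });
}

module.exports = run;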

For data handling, I copied an approach from the react-router async data example. Each component can define a fetchData static method, and you can also see in the bootstrap file a method that runs through all the required components and gathers the data from these methods. It attaches the fetched data as a property to each component.
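
A sketch of that convention, with illustrative names (adapted loosely from the react-router async-data example rather than copied from this blog's code):

var React = require('react');

// A route handler advertises its data needs through a static method.
var Post = React.createClass({
  statics: {
    // `api` is whichever impl/api.js got loaded (see the folder-structure
    // section below); `params` come from the matched route.
    fetchData: function (api, params) {
      return api.getPost(params.id);
    }
  },
  render: function () {
    var post = this.props.data.post;
    return React.DOM.div(null, post.title);
  }
});

// In the bootstrap: walk the matched routes and collect everything.
function fetchAllData(matchedRoutes, api, params) {
  var data = {};
  matchedRoutes.forEach(function (route) {
    if (route.handler.fetchData) {
      data[route.name] = route.handler.fetchData(api, params);
    }
  });
  return data; // later attached to the components as a `data` prop
}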

This is simplistic. More complex apps use an architecture like Flux. I'm not entirely happy with the fetchData approach, but it works alright for small apps like a blog. The point here is that you have the infrastructure to do this without a whole lot of work.

Ditching Client-Side Page Transitions

With this setup, instead of refreshing the entire page whenever you click a link, it can just fetch any new data it needs and only update parts of the page that need to be changed. react-router especially helps with this, as it takes care of all of the pushState work to make it feel like the page actually changed. This makes the site pretty snappy.

Although it feels a little weird to do that for a blog, I had it working at one point. The page never refreshed; it only fetched data over XHR and updated the page contents. In fact, I enabled that mode on the demo site, test.jlongster.com, so you can play with it there.

I ended up disabling it though. The main reason is that many of my demos mutate the DOM directly, so you couldn't reliably enter and leave a post page, as there would be side effects. In general, I realized that it was just too much work for a simple blog. I'm really glad I learned how to set this up, but rendering everything on the server is nice and simple.

It turns out that writing React server apps is completely awesome. I didn't expect to end up here, but think about it: I'm writing in React, but my whole site acts as if it were a site from the 90s where a request is made, data is fetched, and HTML is rendered. Rendering transitions on the client without refreshing the page is just an optimization.

There is still a React piece on the client which "renders" each page, but all it is doing is hooking up all the event handlers.

Implementation Notes

Here are a few more details about how everything works.

Folder Structure

The src folder is the core of the app and everything in there can be rendered on the server or the client. The server folder holds the express server and the API implementation, and the static/js folder holds the client-side bootstrapping code.

Both sides pull in the src directory with relative imports, like require('../src/routes'). The components within src each fetch the data they need, but this needs to work on the client and the server. My blog runs everything only on the server now, but I'm discussing apps that support client-side rendering too.

The problem is that components in src need to pull in different modules if they are on the server or the client. If they are on the server, they can call API methods directly, but on the client they need to use XHR. I solve this by creating an implementation folder impl on the server and the client, with the same modules that implement the same APIs. Components can require impl/api.js and they will load the right API implementation, as seen here.

In node, this require works because I symlink server/impl as impl in my node_modules folder. On the client, I configure webpack to resolve the impl folder to the client-side implementation. All of the database methods are implemented in the server-side api.js, and the same API is implemented on the client-side api.js but it calls the back-end API over XHR.
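
Roughly, the two halves look like this (paths and file names are my own illustrative assumptions):

// Node side: a symlink makes require('impl/api') resolve to the server code,
// e.g.  ln -s ../server/impl node_modules/impl
//
// Client side: webpack aliases the same name to the XHR-backed modules.
// webpack.config.js (sketch):
var path = require('path');

module.exports = {
  entry: './static/js/bootstrap.js',
  output: {
    path: path.join(__dirname, 'static/js'),
    filename: 'bundle.js'
  },
  resolve: {
    alias: {
      // any require('impl/...') inside src/ gets the client implementation
      impl: path.join(__dirname, 'static/js/impl')
    }
  }
};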

I tried to munge NODE_PATH at first, but I found the above setup rather elegant.

Large Static HTML Chunks

There are a couple of places on my blog where the content is simply a large static chunk of HTML, like the projects section. I don't use JSX, and I didn't really feel like wrapping them up in components anyway. I simply dump this content in the static folder and create server- and client-side implementations of a statics.js module that loads in this content. To render it, I just tell React to load it as raw HTML.
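
In React, that "raw HTML" step boils down to dangerouslySetInnerHTML; here is a sketch (component and module names are illustrative, not the blog's actual code):

var React = require('react');
// statics.js has a server implementation (reads the file from disk) and a
// client implementation; both just expose the prebuilt HTML string.
var statics = require('impl/statics');

var Projects = React.createClass({
  render: function () {
    // Hand React the prebuilt chunk as-is instead of describing it
    // element by element.
    return React.DOM.div({
      dangerouslySetInnerHTML: { __html: statics.projects }
    });
  }
});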

Gulp & Webpack

I use 6to5 to write ES6 code and compile it to ES5. I set up a gulp workflow to build everything on the server-side, run the app and restart it on changes. For the client, I use webpack to bundle everything together into a single js file (mostly, I use code splitting to separate out a few modules into other files). Both run 6to5 on all the code.
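
The server-side half of such a workflow might look roughly like this (assuming the gulp-6to5 plugin that later became gulp-babel; globs and task names are illustrative):

var gulp = require('gulp');
var to5 = require('gulp-6to5');

// Compile the shared src/ code and the server code from ES6 to ES5.
gulp.task('build-server', function () {
  return gulp.src(['src/**/*.js', 'server/**/*.js'], { base: '.' })
    .pipe(to5())
    .pipe(gulp.dest('build'));
});

// Rebuild on changes; the real setup also restarts the running app.
gulp.task('watch', ['build-server'], function () {
  gulp.watch(['src/**/*.js', 'server/**/*.js'], ['build-server']);
});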

I like this setup, but it does feel like there is duplicate work going on. It'd be nice to somehow use webpack for node modules too, and only have a single build process.

Ansible/Docker

In addition to all of this, I completely rebuilt my server and now use ansible and docker. Both are amazing; I can use ansible to bootstrap a new machine and then docker to run any number of apps on it. This deserves its own post.

I told you I over-engineered this right?!

Todo

My new blog was an exercise in how to write React apps that blend the server/client distinction. As it's my first app of this type, it's quite terrible in some ways. There are a lot of things I could clean up, so don't focus on the details.

I think the overall structure is pretty sound, however. A few things I want to improve:

  • Testing. Right now I only test the server-side API. I'd like to learn slimerjs and how to integrate it with mocha.
  • Data dependencies. The fetchData method on components was a good starting point, but I think it's a little awkward and it would probably be good to have very basic Flux-style stores instead.
  • Async. I also used this as an excuse to try js-csp on a real project, and it was quite wonderful. But I also saw some glaring sore spots and I'm going to fix them.
  • Cleanup. Many of the utility functions and a few other things are still from my old code, and are pretty ugly.

I hope you learned something. I know I had fun.


Alex Gibson: How to help find a regression range in Firefox Nightly

Thu, 15/01/2015 - 01:00

I recently spotted a visual glitch in a CSS animation that was only happening in Firefox Nightly. I was pretty sure the animation played fine just a couple of weeks ago, so after some debugging and ruling out any obvious wrongdoing in the code, I was pretty confident that a recent change in Firefox must have somehow caused a regression. Not knowing quite what else to do, I decided to file a bug to see if anyone else could figure out what was going wrong.

After some initial discussion it turned out the animation was only broken in Firefox on OSX, so definitely a bug! It could have been caused by any number of code changes in the previous few weeks and could not be reproduced on other platforms. So how could I go about helping to find the cause of the regression?

It was then that someone pointed me to a tool I hadn't heard of before, called mozregression. It's an interactive regression range finder for Mozilla nightly and inbound builds. Once installed, all you need to do is pass in a last known "good" date together with a known "bad" date and a URL to test. The tool then automates downloading and running different nightly builds against the affected URL.

mozregression --good=2014-10-01 --bad=2014-10-02 -a "https://example.com"

After each run, mozregression asks you if the build is "good" or "bad" and then continues to narrow down the regression range until it finds when the bug was introduced. The process takes a while to run, but in the end it then spits out a pushlog like this.

This helped to narrow down the cause of the regression considerably, and together with a reduced test case we were then able to work out which commit was the cause.

The resulting patch also turned out to fix another bug that was affecting Leaflet.js maps in Firefox. Result!


Niko Matsakis: Little Orphan Impls

Wed, 14/01/2015 - 20:03

We’ve recently been doing a lot of work on Rust’s orphan rules, which are an important part of our system for guaranteeing trait coherence. The idea of trait coherence is that, given a trait and some set of types for its type parameters, there should be exactly one impl that applies. So if we think of the trait Show, we want to guarantee that if we have a trait reference like MyType : Show, we can uniquely identify a particular impl. (The alternative to coherence is to have some way for users to identify which impls are in scope at any time. It has its own complications; if you’re curious for more background on why we use coherence, you might find this rust-dev thread from a while back to be interesting reading.)

The role of the orphan rules in particular is basically to prevent you from implementing external traits for external types. So continuing our simple example of Show, if you are defining your own library, you could not implement Show for Vec<T>, because both Show and Vec are defined in the standard library. But you can implement Show for MyType, because you defined MyType. However, if you define your own trait MyTrait, then you can implement MyTrait for any type you like, including external types like Vec<T>. To this end, the orphan rule intuitively says “either the trait must be local or the self-type must be local”.

More precisely, the orphan rules are targeting the case of two “cousin” crates. By cousins I mean that the crates share a common ancestor (i.e., they link to a common library crate). This would be libstd, if nothing else. That ancestor defines some trait. Both of the crates are implementing this common trait using their own local types (and possibly types from ancestor crates, which may or may not be in common). But neither crate is an ancestor of the other: if they were, the problem is much easier, because the descendant crate can see the impls from the ancestor crate.

When we extended the trait system to support multidispatch, I confess that I originally didn’t give the orphan rules much thought. It seemed like it would be straightforward to adapt them. Boy was I wrong! (And, I think, our original rules were kind of unsound to begin with.)

The purpose of this post is to lay out the current state of my thinking on these rules. It sketches out a number of variations and possible rules and tries to elaborate on the limitations of each one. It is intended to serve as the seed for a discussion in the Rust discussion forums.

The first, totally wrong, attempt

The first attempt at the orphan rules was just to say that an impl is legal if a local type appears somewhere. So, for example, suppose that I define a type MyBigInt and I want to make it addable to integers:

impl Add<i32> for MyBigInt { ... }
impl Add<MyBigInt> for i32 { ... }

Under these rules, these two impls are perfectly legal, because MyBigInt is local to the current crate. However, the rules also permit an impl like this one:

impl<T> Add<T> for MyBigInt { ... }

Now the problems arise because those same rules also permit an impl like this one (in another crate):

impl<T> Add<YourBigInt> for T { ... }

Now we have a problem because both impls are applicable to Add<YourBigInt> for MyBigInt.

In fact, we don’t need multidispatch to have this problem. The same situation can arise with Show and tuples:

impl<T> Show for (T, MyBigInt) { ... }     // Crate A
impl<T> Show for (YourBigInt, T) { ... }   // Crate B

(In fact, multidispatch is really nothing more than a compiler-supported version of implementing a trait for a tuple.)

The root of the problem here lies in our definition of “local”, which completely ignored type parameters. Because type parameters can be instantiated to arbitrary types, they are obviously special, and must be considered carefully.

The ordered rule

This problem was first brought to our attention by arielb1, who filed Issue 19470. To resolve it, he proposed a rule that I will call the ordered rule. The ordered rule goes like this:

  1. Write out all the type parameters to the trait, starting with Self.
  2. The name of some local struct or enum must appear on that line before the first type parameter.
    • More formally: When visiting the types in pre-order, a local type must be visited before any type parameter.

In terms of the examples I gave above, this rule permits the following impls:

impl Add<i32> for MyBigInt { ... }
impl Add<MyBigInt> for i32 { ... }
impl<T> Add<T> for MyBigInt { ... }

However, it avoids the quandry we saw before because it rejects this impl:

impl<T> Add<YourBigInt> for T { ... }

This is because, if we wrote out the type parameters in a list, we would get:

T, YourBigInt

and, as you can see, T comes first.

This rule is actually pretty good. It meets most of the requirements I’m going to unearth. But it has some problems. The first is that it feels strange; it feels like you should be able to reorder the type parameters on a trait without breaking everything (we will see that this is not, in fact, obviously true, but it was certainly my first reaction).

Another problem is that the rule is kind of fragile. It can easily reject impls that don’t seem particularly different from impls that it accepts. For example, consider the case of the Modifier trait that is used in hyper and iron. As you can see in this issue, iron wants to be able to define a Modifier impl like the following:

struct Response;
...
impl Modifier<Response> for Vec<u8> { .. }

This impl is accepted by the ordered rule (there are no type parameters at all, in fact). However, the following impl, which seems very similar and equally likely (in the abstract), would not be accepted:

struct Response;
...
impl<T> Modifier<Response> for Vec<T> { .. }

This is because the type parameter T appears before the local type (Response). Hmm. It doesn’t really matter if T appears in the local type, either; the following would also be rejected:

struct MyHeader<T> { .. }
...
impl<T> Modifier<MyHeader<T>> for Vec<T> { .. }

Another trait that couldn’t be handled properly is the BorrowFrom trait in the standard library. There are a number of impls like this one:

impl<T> BorrowFrom<Rc<T>> for T

This impl fails the ordered check because T comes first. We can make it pass by switching the order of the parameters, so that the BorrowFrom trait becomes Borrow.

A final “near-miss” occurred in the standard library with the Cow type. Here is an impl from libcollections of FromIterator for a copy-on-write vector:

impl<'a, T> FromIterator<T> for Cow<'a, Vec<T>, [T]>

Note that Vec is a local type here. This impl obeys the ordered rule, but somewhat by accident. If the type parameters of the Cow type were in a different order, it would not, because then [T] would precede Vec<T>.

The covered rule

In response to these shortcomings, I proposed an alternative rule that I’ll call the covered rule. The idea of the covered rule was to say that (1) the impl must have a local type somewhere and (2) a type parameter can only appear in the impl if the type parameter is covered by a local type. Covered means that it appears “inside” the type: so T is covered by MyVec in the type MyVec<T> or MyBox<Box<T>>, but not in (T, MyVec<int>). This rule has the advantage of having nothing to do with ordering and it has a certain intuition to it; any type parameters that appear in your impls have to be tied to something local.

This rule turns out to give us the required orphan rule guarantees. To see why, consider this example:

impl<T> Foo<T> for A<T>    // Crate A
impl<U> Foo<B<U>> for U    // Crate B

If you tried to make these two impls apply to the same type, you wind up with infinite types. After all, T = B<U>, but U=A<T>, and hence you get T = B<A<T>>.

Unlike the previous rule, this rule happily accepts the BorrowFrom trait impls:

impl<T> BorrowFrom<Rc<T>> for T

The reason is that the type parameter T here is covered by the (local) type Rc.

However, after implementing this rule, we found out that it actually prohibits a lot of other useful patterns. The most important of them is the so-called auxiliary pattern, in which a trait takes a type parameter that is a kind of “configuration” and is basically orthogonal to the types that the trait is implemented for. An example is the Hash trait:

impl<H> Hash<H> for MyStruct

The type H here represents the hashing function that is being used. As you can imagine, for most types, they will work with any hashing function. Sadly, this impl is rejected, because H is not covered by any local type. You could make it work by adding a parameter H to MyStruct:

impl<H> Hash<H> for MyStruct<H>

But that is very weird, because now when we create our struct we are also deciding which hash functions can be used with it. You can also make it work by moving the hash function parameter H to the hash method itself, but then that is limiting. It makes the Hash trait not object safe, for one thing, and it also prohibits us from writing types that are specialized to particular hash functions.

Another similar example is indexing. Many people want to make types indexable by any integer-like thing, for example:

impl<I:Int, T> Index<I> for Vec<T> {
    type Output = T;
}

Here the type parameter I is also uncovered.

Ordered vs Covered

By now I’ve probably lost you in the ins and outs, so let’s see a summary. Here’s a table of all the examples I’ve covered so far. I’ve tweaked the names so that, in all cases, any type that begins with My is considered local to the current crate:

+----------------------------------------------------------+---+---+
| Impl Header                                              | O | C |
+----------------------------------------------------------+---+---+
| impl Add<i32> for MyBigInt                               | X | X |
| impl Add<MyBigInt> for i32                               | X | X |
| impl<T> Add<T> for MyBigInt                              | X |   |
| impl<U> Add<MyBigInt> for U                              |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                     | X | X |
| impl<T> Modifier<MyType> for Vec<T>                      |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]>   | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>>   |   | X |
| impl<T> BorrowFrom<Rc<T>> for T                          |   | X |
| impl<T> Borrow<T> for Rc<T>                              | X | X |
| impl<H> Hash<H> for MyStruct                             | X |   |
| impl<I:Int,T> Index<I> for MyVec<T>                      | X |   |
+----------------------------------------------------------+---+---+

As you can see, both of these have their advantages. However, the ordered rule comes out somewhat ahead. In particular, the places where it fails can often be worked around by reordering parameters, but there is no answer that permits the covered rule to handle the Hash example (and there are a number of other traits that fit that pattern in the standard library).

Hybrid approach #1: Covered self

You might be wondering – if neither rule is perfect, is there a way to combine them? In fact, the rule that is currently implemented is such a hybrid. It imposes the covered rule, but only on the Self parameter. That means that there must be a local type somewhere in Self, and any type parameters appearing in Self must be covered by a local type. Let’s call this hybrid CS, for “covered applied to Self”.

+----------------------------------------------------------+---+---+---+
| Impl Header                                              | O | C | S |
+----------------------------------------------------------+---+---+---+
| impl Add<i32> for MyBigInt                               | X | X | X |
| impl Add<MyBigInt> for i32                               | X | X |   |
| impl<T> Add<T> for MyBigInt                              | X |   | X |
| impl<U> Add<MyBigInt> for U                              |   |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                     | X | X |   |
| impl<T> Modifier<MyType> for Vec<T>                      |   |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]>   | X | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>>   |   | X | X |
| impl<T> BorrowFrom<Rc<T>> for T                          |   | X |   |
| impl<T> Borrow<T> for Rc<T>                              | X | X | X |
| impl<H> Hash<H> for MyStruct                             | X |   | X |
| impl<I:Int,T> Index<I> for MyVec<T>                      | X |   | X |
+----------------------------------------------------------+---+---+---+
O - Ordered / C - Covered / S - Covered Self

As you can see, the CS hybrid turns out to miss some important cases that the pure ordered rule achieves. Notably, it prohibits:

  • impl Add<MyBigInt> for i32
  • impl Modifier<MyType> for Vec<u8>

This is not really good enough.

Hybrid approach #2: Covered First

We can improve the covered self approach by saying that some type parameter of the trait must meet the rules (some local type; impl type params covered by a local type), but not necessarily Self. Any type parameters which precede this covered parameter must consist exclusively of remote types (no impl type parameters, in particular).

+----------------------------------------------------------+---+---+---+---+
| Impl Header                                              | O | C | S | F |
+----------------------------------------------------------+---+---+---+---+
| impl Add<i32> for MyBigInt                               | X | X | X | X |
| impl Add<MyBigInt> for i32                               | X | X |   | X |
| impl<T> Add<T> for MyBigInt                              | X |   | X | X |
| impl<U> Add<MyBigInt> for U                              |   |   |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                     | X | X |   | X |
| impl<T> Modifier<MyType> for Vec<T>                      |   |   |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]>   | X | X | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>>   |   | X | X | X |
| impl<T> BorrowFrom<Rc<T>> for T                          |   | X |   |   |
| impl<T> Borrow<T> for Rc<T>                              | X | X | X | X |
| impl<H> Hash<H> for MyStruct                             | X |   | X | X |
| impl<I:Int,T> Index<I> for MyVec<T>                      | X |   | X | X |
+----------------------------------------------------------+---+---+---+---+
O - Ordered / C - Covered / S - Covered Self / F - Covered First

As you can see, this is a strict improvement over the other approaches. The only thing it can’t handle that the other rules can is the BorrowFrom impl.

An alternative approach: distinguishing “self-like” vs “auxiliary” parameters

One disappointment about the hybrid rules I presented thus far is that they are inherently ordered. It runs somewhat against my intuition, which is that the order of the trait type parameters shouldn’t matter that much. In particular it feels that, for a commutative trait like Add, the role of the left-hand-side type (Self) and right-hand-side type should be interchangeable (below, I will argue that in fact some kind of order may well be essential to the notion of coherence as a whole, but for now let’s assume we want Add to treat the left- and right-hand-side as equivalent).

However, there are definitely other traits where the parameters are not equivalent. Consider the Hash trait example we saw before. In the case of Hash, the type parameter H refers to the hashing algorithm and thus is inherently not going to be covered by the type of the value being hashed. It is in some sense completely orthogonal to the Self type. For this reason, we’d like to define impls that apply to any hasher, like this one:

impl<H> Hash<H> for MyType { ... }

The problem is, if we permit this impl, then we can’t allow another crate to define an impl with the same parameters, but in a different order:

impl<H> Hash<MyType> for H { ... }

One way to permit the first impl and not the second without invoking ordering is to classify type parameters as self-like and auxiliary.

The orphan rule would require that at least one self-like parameter references a local type and that all impl type parameters appearing in self-like types would be covered. The Self type is always self-like, but other types would be auxiliary unless declared to be self-like (or perhaps the default would be the opposite).

Here is a table showing how this new “explicit” rule would work, presuming that the type parameters on Add and Modifier were declared as self-like. The Hash and Index parameters would be declared as auxiliary.

+----------------------------------------------------------+---+---+---+---+---+
| Impl Header                                              | O | C | S | F | E |
+----------------------------------------------------------+---+---+---+---+---+
| impl Add<i32> for MyBigInt                               | X | X | X | X | X |
| impl Add<MyBigInt> for i32                               | X | X |   | X | X |
| impl<T> Add<T> for MyBigInt                              | X |   | X | X |   |
| impl<U> Add<MyBigInt> for U                              |   |   |   |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                     | X | X |   | X | X |
| impl<T> Modifier<MyType> for Vec<T>                      |   |   |   |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]>   | X | X | X | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>>   |   | X | X | X | X |
| impl<T> BorrowFrom<Rc<T>> for T                          |   | X |   |   | X |
| impl<T> Borrow<T> for Rc<T>                              | X | X | X | X | X |
| impl<H> Hash<H> for MyStruct                             | X |   | X | X | X |
| impl<I:Int,T> Index<I> for MyVec<T>                      | X |   | X | X | X |
+----------------------------------------------------------+---+---+---+---+---+
O - Ordered / C - Covered / S - Covered Self / F - Covered First
E - Explicit Declarations

You can see that it’s quite expressive, though it is very restrictive about generic impls for Add. However, it would push quite a bit of complexity onto the users, because now when you create a trait, you must classify its type parameters as self-like or auxiliary.

In defense of ordering

Whereas at first I felt that having the rules take ordering into account was unnatural, I have come to feel that ordering is, to some extent, inherent in coherence. To see what I mean, let’s consider an example of a new vector type, MyVec<T>. It might be reasonable to permit MyVec<T> to be addable to anything that can be converted into an iterator over T elements. Naturally, since we’re overloading +, we’d prefer for it to be commutative:

impl<T,I> Add<I> for MyVec<T>
    where I : IntoIterator<Output=T>
{
    type Output = MyVec<T>;
    ...
}

impl<T,I> Add<MyVec<T>> for I
    where I : IntoIterator<Output=T>
{
    type Output = MyVec<T>;
    ...
}

Now, given that MyVec<T> is a vector, it should be iterable as well:

impl<T> IntoIterator for MyVec<T> {
    type Output = T;
    ...
}

The problem is that these three impls are inherently overlapping. After all, if I try to add two MyVec instances, which impl do I get?

Now, this isn’t a problem for any of the rules I proposed in this thread, because all of them reject that pair of impls. In fact, both the “Covered” and “Explicit Declarations” rules go farther: they reject both impls. This is because the type parameter I is uncovered; since the rules don’t consider ordering, they can’t allow an uncovered iterator I on either the left- or the right-hand-side.

The other variations (“Ordered”, “Covered Self”, and “Covered First”), on the other hand, allow only one of those impls: the one where MyVec<T> appears on the left. This seems pretty reasonable. After all, if we allow you to define an overloaded + that applies to an open-ended set of types (those that are iterable), there is the possibility that others will do the same. And if I try to add a MyVec<int> and a YourVec<int>, both of which are iterable, who wins? The ordered rules give a clear answer: the left-hand-side wins.

There are other blanket cases that also get prohibited which might on their face seem to be reasonable. For example, if I have a BigInt type, the ordered rules allow me to write impls that permit BigInt to be added to any concrete int type, no matter which side that concrete type appears on:

impl Add<BigInt> for i8  { type Output = BigInt; ... }
impl Add<i8> for BigInt  { type Output = BigInt; ... }
...
impl Add<BigInt> for i64 { type Output = BigInt; ... }
impl Add<i64> for BigInt { type Output = BigInt; ... }

It might be nice if I could just write the following two impls:

impl<R:Int> Add<BigInt> for R { type Output = BigInt; ... }
impl<L:Int> Add<L> for BigInt { type Output = BigInt; ... }

Now, this makes some measure of sense because Int is a trait that is only intended to be implemented for the primitive integers. In principle all bigints could use these same rules without conflict, so long as none of them implement Int. But in fact, nothing prevents them from implementing Int. Moreover, it’s not hard to imagine other crates creating comparable impls that would overlap with the ones above:

struct PrintedInt(i32);
impl Int for PrintedInt;
impl<R:Show> Add<PrintedInt> for R { type Output = BigInt; ... }
impl<L:Show> Add<L> for PrintedInt { type Output = BigInt; ... }

Assuming that BigInt implements Show, we now have a problem!

In the future, it may be interesting to provide a way to use traits to create “strata” so that we can say things like “it’s ok to use an Int-bounded type parameter on the LHS so long as the RHS is bounded by Foo, which is incompatible with Int”, but it’s a subtle and tricky issue (as the Show example demonstrates).

So ordering basically means that when you define your traits, you should put the “principal” type as Self, and then order the other type parameters such that those which define the more “principal” behavior come afterwards in order.

The problem with ordering

Currently I lean towards the “Covered First” rule, but it bothers me that it allows something like

impl Modifier<MyType> for Vec<u8>

but not

impl<T> Modifier<MyType> for Vec<T>

However, this limitation seems to be pretty inherent to any rules that do not explicitly identify “auxiliary” type parameters. The reason is that the ordering variations all use the first occurrence of a local type as a “signal” that auxiliary type parameters should be permitted afterwards. This implies that another crate will be able to do something like:

impl<U> Modifier<U> for Vec<YourType>

In that case, both impls apply to Modifier<MyType> for Vec<YourType>.

Conclusion

This is a long post, and it covers a lot of ground. As I wrote in the introduction, the orphan rules turn out to be hiding quite a lot of complexity. Much more than I imagined at first. My goal here is mostly to lay out all the things that aturon and I have been talking about in a comprehensive way.

I feel like this all comes down to a key question: how do we identify the “auxiliary” input type parameters? Ordering-based rules identify this for each impl based on where the first “local” type appears. Coverage-based rules seem to require some sort of explicit declaration on the trait.

I am deeply concerned about asking people to understand this “auxiliary” vs “self-like” distinction when declaring a trait. On the other hand, there is no silver bullet: under ordering-based rules, they will be required to sometimes reorder their type parameters just to pacify the seemingly random ordering rule. (But I have the feeling that people intuitively put the most “primary” type first, as Self, and the auxiliary type parameters later.)


Marco Zehe: Quickly check your website for common accessibility problems with tenon.io

Wed, 14/01/2015 - 19:12

Tenon.io is a new tool to test web sites against some of the Web Content Accessibility Guidelines criteria. While this does not guarantee the usability of a web site, it gives you an idea of where you may have some problems. Due to its API, it can be integrated into workflows for test automation and other building steps for web projects.

However, sometimes you’ll just quickly want to check your web site and get an overview if something you did has the desired effect.

The Tenon team released a first version of a Chrome extension in December. But because there was no equivalent for Firefox, my ambition was piqued, and I set out to build my first ever Firefox extension.

And guess what? It does even a bit more than the Chrome one! In addition to a toolbar button, it gives Firefox users a context menu item for every page type, so keyboard users and those using screen readers have equal access to the functionality. The extension grabs the URL of the currently open tab and submits that to Tenon. It opens a new tab where the Tenon page will display the results.

For the technically interested: I used the Node.js implementation of the Firefox Add-On SDK, called JPM, to build the extension. I was heavily inspired by this blog post published in December about building Firefox extensions the painless way. As I moved along, I wanted to try out io.js, but ran into issues in two modules, so while working on the extension, I contributed bug reports to both JPM and jszip. Did I ever mention that I love working in open source? ;)
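
For a flavour of what the Add-on SDK side of something like this looks like, here is a rough sketch (the Tenon URL, button details and names are placeholders I made up, not the actual extension code):

// Rough Add-on SDK sketch of "send the active tab's URL to Tenon".
var tabs = require("sdk/tabs");
var { ActionButton } = require("sdk/ui/button/action");

function checkCurrentTab() {
  // Grab the URL of the active tab and hand it to Tenon in a new tab.
  // The exact Tenon endpoint here is a placeholder.
  var url = encodeURIComponent(tabs.activeTab.url);
  tabs.open("https://tenon.io/?url=" + url);
}

ActionButton({
  id: "tenon-check",
  label: "Check page with Tenon",
  icon: "./icon-64.png",
  onClick: checkCurrentTab
});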

So without further ado: Here’s the Firefox extension! And if you like it, a positive review is certainly appreciated!

Have fun!


Doug Belshaw: How we're building v1.5 of Mozilla's Web Literacy Map in Q1 2015

Wed, 14/01/2015 - 17:24

The Web Literacy Map constitutes the skills and competencies required to read, write and participate on the web. It currently stands at version 1.1 and you can see a more graphical overview of the competency layer in the Webmaker resources section.

[Image: Minecraft building]

In Q1 2015 (January-March) we’ll be working with the community to update the Web Literacy Map to version 1.5. This is the result of a consultation process that initially aimed at a v2.0 but was re-scoped following community input. Find out more about the interviews, survey and calls that were part of that arc on the Mozilla wiki or in this tumblr post.

Some of what we’ll be discussing and working on has already been scoped out, while some will be emergent. We’ll definitely be focusing on the following:

  • Reviewing the existing skills and competencies (i.e. names/descriptors)
  • Linking to the Mozilla manifesto (where appropriate)
  • Deciding whether we want to include ‘levels’ in the map (e.g. Beginner / Intermediate / Advanced)
  • Exploring ways to iterate on the visual design of the competency layer

After asking the community when the best time for a call would be, we scheduled the first one for tomorrow (Thursday 15th January 2015, 4pm UTC). Join us! Details of the other calls can be found here.

In addition to these calls, we’ll almost certainly have 'half-hour hack’ sessions where we get stuff done. This might include re-writing skills/competencies and work on other things that need doing - rather than discussing. These will likely be Mondays at the same time.

Questions? Comments? Tweet me or email me


Soledad Penades: Introduction to Web Components

Wed, 14/01/2015 - 17:13

I had the pleasure and honour to be the opening speaker for the first ever London Web Components meetup! Yay!

There was no video recording, but I remembered to record a screencast! It’s a bit messy and noisy, but if you couldn’t attend, this is better than nothing.

It also includes all the Q&A!

Some of the things people are worried about, which I think are interesting if you’re working on Web Components in any way:

  • How can I use them in production reliably?
  • What’s the best way to get started, i.e. where do I start? Do you migrate the whole thing with a new framework, or do you start little by little?
  • How would they affect SEO and accessibility? The best option is probably to extend existing elements where possible using the is="" idiom, so you add to the existing functionality.
  • How do Web Components interact with other libraries, e.g. jQuery or React? Why would one use Web Components instead of Angular directives, for example?
  • And if we use jQuery with components, aren’t we back to “square one”?
  • What are examples of web components in production we can look at? E.g. the famous GitHub time element custom element.
  • Putting the whole app in just one tag, yes or no? Verging towards the NO; it makes people uneasy.
  • How does the hyphen thing work? It’s there to prevent people from registering existing elements, and it also provides casual namespacing. It’s not perfect and won’t avoid clashes; one idea is to allow delaying registration until the name of the element is provided, so you can register it in the same way that you can require() something in node without caring what the internal name of that module is. (See the registration sketch after this list.)
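
To make the hyphen and is="" points above more concrete, here is a minimal sketch of the two registration styles, using the v0 API (document.registerElement) that was current at the time. This is my own illustration, not code from the talk.

// 1. A brand-new element: the registered name must contain a hyphen.
var proto = Object.create(HTMLElement.prototype);
proto.createdCallback = function () {
  this.textContent = "Hello from <fancy-greeting>!";
};
document.registerElement("fancy-greeting", { prototype: proto });

// 2. Extending an existing element, used as <button is="mega-button">.
//    Because the built-in <button> semantics are kept, SEO and
//    accessibility tooling keep working as before.
var buttonProto = Object.create(HTMLButtonElement.prototype);
buttonProto.createdCallback = function () {
  this.addEventListener("click", function () {
    console.log("mega-button clicked");
  });
};
document.registerElement("mega-button", {
  prototype: buttonProto,
  extends: "button"
});

In markup these would then be used as <fancy-greeting></fancy-greeting> and <button is="mega-button">Go</button>.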

Only one person in the audience was using Web Components in production (that would be Wilson with Firefox OS, tee hee!) and about 10 or so were using them to play around and experiment, consistently with Polymer… except Firefox OS, which uses just vanilla JS.

Slides are here and here’s the source code.

I’m really glad that I convinced my awesome colleague Wilson Page to join us too, as he has loads of experience implementing Web Components in Firefox OS and so he could provide lots of interesting commentary. Hopefully he will speak at a future event!

Join the meet-up so you can be informed when there’s a new one happening!

flattr this!

Categorieën: Mozilla-nl planet

Pete Moore: Weekly review 2015-01-14

wo, 14/01/2015 - 16:30

I am still alive.

Or, as the great Mark Twain once said: "The reports of my death have been greatly exaggerated."

Highlights from this week

This week I have been learning Go! And it has been a joy. Mostly.

My code doodles: https://github.com/petemoore/go_tutorial/

This article got me curious about Erlang: http://blog.erlware.org/some-thoughts-on-go-and-erlang/

Other than that I have been playing with docker, installed on my Mac, and have set up a VMware environment and acquired Windows Server 2008 x64 for running Go on Windows.

Plans for next week

Start work on porting the taskcluster-client library to Go. See:

Other matters

  • VCS Sync issues this week for l10n gecko
  • Found an interesting Go conference to attend this year
Categorieën: Mozilla-nl planet

Daniel Stenberg: My talks at FOSDEM 2015

wo, 14/01/2015 - 15:48

fosdem

Saturday 13:30, embedded room (Lameere)

Title: Internet all the things – using curl in your device

Embedded devices are very often network connected these days. Network connected embedded devices often need to transfer data to and from them as clients, using one or more of the popular internet protocols.

libcurl is the world’s most used and most popular internet transfer library, already used in every imaginable sort of embedded device out there. How did this happen and how do you use libcurl to transfer data to or from your device?

Sunday, 09:00 Mozilla room (UD2.218A)

Title: HTTP/2 right now

HTTP/2 is the new version of the web’s most important and most used protocol. Version 2 is due to be out very soon after FOSDEM, and I want to inform the audience about what’s going on with the protocol, why it matters to most web developers and users, and not least what its status is at the time of FOSDEM.

Categorieën: Mozilla-nl planet

Henrik Skupin: Firefox Automation report – week 47/48 2014

wo, 14/01/2015 - 14:57

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 47 and 48.

Highlights

Most of the work I did during those two weeks was related to getting Jenkins (http://jenkins-ci.org/) upgraded on our Mozmill CI systems to the most recent LTS version, 1.580.1. This was a somewhat critical task, given the huge number of issues mentioned in my last Firefox Automation report. On November 17th we were finally able to get all the code changes landed on our production machine, after testing them for a couple of days on staging.

The upgrade was not that easy, given that lots of code had to be touched, and the new LTS release still showed some weird behavior when connecting slave nodes via JNLP. As a result we had to stop using this connection method in favor of the plain java command. This change was actually not that bad, because the plain command is easier to automate and doesn’t bring up the connection warning dialog.

Surprisingly, the huge HTTP session usage reported by the Monitoring plugin was a problem introduced by that plugin itself. A simple upgrade to the latest plugin version solved it, so we no longer accumulate an additional, never-released HTTP connection every time a slave node connects. At one point that had even caused a total freeze of the machine.

Another helpful improvement in Jenkins was the fix for a JUnit plugin bug which caused concurrent builds to hang until the previous build in the queue had finished. This added a large pile of waiting time to our Mozmill test jobs, which was very annoying for QA’s release testing work – especially for the update tests. Since the upgrade the problem is gone and we can process builds a lot faster.

Besides the upgrade work, I also noticed that one of the Jenkins plugins in use, the XShell plugin, failed to correctly kill the running application on the slave machine when a job is aborted. As a result, subsequent tests fail on that machine until the un-killed job has finished. I filed a Jenkins bug and did a temporary backout of the offending change in that plugin.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 47 and week 48.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 47 and week 48.

Categorieën: Mozilla-nl planet

Matjaž Horvat: Pontoon report 2014: Make your translations better

wo, 14/01/2015 - 11:03

This post is part of a series of blog posts outlining Pontoon development in 2014. I’ll mostly focus on new features targeting translators. If you’re more interested in developer oriented updates, please have a look at the release notes.

Part 1. User interface
Part 2. Backend
Part 3. Meet our top contributors
Part 4. Make your translations better (you are here)
Part 5. Demo project

Some new features have been added to Pontoon, some older tools have been improved, all helping translators be more efficient and make translations more consistent, more accurate and simply better.

History
The History tab displays previously suggested translations, including submissions from other users. Privileged translators can pick the translation to approve or delete the ones they find inappropriate.

Machinery
The next tab provides automated suggestions from several sources: Pontoon translation memory, Transvision (Mozilla), amagama (open source projects), Microsoft Terminology and machine translation by Bing Translator. Using machinery will make your translations more consistent.

Quality checks
Pontoon reviews every submitted translation by running Translate Toolkit pofilter tests, which check for several issues that can affect the quality of your translations. Those checks are locale-specific and can be turned off by the translator.

Placeables
Some pieces of a string are not supposed to be translated: think HTML markup or variables, for example. Pontoon colorizes those pieces (called placeables) and allows you to easily insert them into your translation by clicking on them.
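
To illustrate the idea, here is a toy sketch of my own (not Pontoon's actual code; the element IDs and the regular expression are invented for the example, and real placeable detection also covers HTML markup, escape sequences and more):

// Toy example: highlight printf-style and {{ }} variables in the source
// string and append the clicked one to the translation box.
var PLACEABLE = /(%[sd]|\{\{\s*\w+\s*\}\})/g;               // simplified pattern

var source = document.querySelector("#source");            // assumed markup:
var translation = document.querySelector("#translation");  // a div + textarea

// Wrap each placeable in a clickable, styled <span>.
source.innerHTML = source.textContent.replace(
  PLACEABLE, '<span class="placeable" title="Click to insert">$1</span>');

source.addEventListener("click", function (event) {
  if (event.target.classList.contains("placeable")) {
    translation.value += event.target.textContent;          // insert verbatim
    translation.focus();
  }
});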

Get involved
Are you a developer, interested in Pontoon? Learn how to get your hands dirty.

Categorieën: Mozilla-nl planet

Julian Seward: Valgrind support for MacOS X 10.10 (Yosemite)

wo, 14/01/2015 - 10:13

Valgrind support on Yosemite has improved significantly in the past few months. It is now feasible to run Firefox on Valgrind on Yosemite. Support for 10.9 (Mavericks) has also improved, but 64-bit Yosemite remains the primary focus of support work.

For various reasons, MacOS is a difficult target for Valgrind, so you’ll find it a little slower, and possibly more flaky, than running on Linux. Occasionally, the MacOS kernel will panic, for unknown reasons. But it does work.

Full details of running Firefox on Valgrind on Yosemite are at https://developer.mozilla.org/en-US/docs/Mozilla/Testing/Valgrind.  For serious use, you need to read that page.  In the meantime, here is the minimal getting-started recipe.

First, build Valgrind. You’ll need to use the trunk, since Yosemite support didn’t make it in time for the current stable (3.10.x) line.

svn co svn://svn.valgrind.org/valgrind/trunk valgrind-trunk
cd valgrind-trunk
./autogen.sh && ./configure --prefix=`pwd`/Inst
make -j8 && make -j8 install

Make the headers available for building Firefox:

cd /usr/include
sudo ln -s /path/to/valgrind-trunk/Inst/include/valgrind .

Now build your Firefox tree. I recommend a mozconfig containing these:

ac_add_options --disable-debug
ac_add_options --enable-optimize="-g -O2"
ac_add_options --disable-jemalloc
ac_add_options --enable-valgrind

And run, being sure to run the real binary (firefox-bin):

/path/to/valgrind-trunk/Inst/bin/valgrind --smc-check=all-non-file \
  --vex-iropt-register-updates=allregs-at-mem-access --dsymutil=yes \
  ./objdir/dist/Nightly.app/Contents/MacOS/firefox-bin

 

Categorieën: Mozilla-nl planet

Andy McKay: Mozilla uses what

wo, 14/01/2015 - 09:00

Today there was a blog post about Angular. One of the main things in there seems to be that:

Google has a good name in web technology, so if they push a JavaScript library it must be good ... right?

I've seen this happen multiple times, especially in a development agency staffed by non-technical managers. Managers ask developers to pick a framework and justify it. They can write a long list, but in the end they'll get to a bullet point like "Google uses it" or "Google built it".

The problem with that: any large organisation uses a large number of differing and competing technologies. Mozilla uses a huge number of open source projects and libraries. I'm not even going to list the things we use in just our web applications because the list is ridiculously long. And that's just one very small part of Mozilla.

More than that, not only do we use a lot of projects - we write a lot of open source projects. Have a look at the Mozilla github repo and again, that's one very small part of Mozilla.

It's easy to note that Google makes Angular and associate it with Gmail and so many other great things. But Google doesn't use Angular for everything. Just like Mozilla doesn't use node.js for everything.

Choose an open source technology for the right reasons, but avoid linking a technology to a brand and clouding your judgement.

Categorieën: Mozilla-nl planet

Chris McDonald: Beeminding ZenIRCBot 3.0

wo, 14/01/2015 - 08:02

ZenIRCBot is a topic I’ve written on a few times. Once upon a time it was my primary project outside of work. These days it has mostly become an unmaintained heap of code. I’ve grown significantly as a developer since I wrote it. It’s time for a revival and a rewrite.

I have a project I started about 7-8 months ago called clean-zenircbot, which is a reimplementation of the bot from scratch using a custom IRC library I hope to spin out as a separate project. The goals of the project include being able to test the bot itself, making it possible for service developers to reasonably test their services, and documenting the whole thing so I’m no longer ashamed when I suggest someone write a service for an instance of the bot.

Going along with having written about ZenIRCBot in the past, I’ve also promised 3.0 a few times. So what makes this time different? The primary change is that I’ve started using beeminder (click to see how I’m doing), a habit-tracking app with some nice features. My goal will be to fix at least 2 issues a week; doing more grants me a bit of a buffer. This will keep me pushing forward on getting a new version out the door.

I’m not sure how many issues there will be or how long it will take, but constant progress will be made. I currently have two issues open on the repo. The first is to assess the status of the code base. The second is to go through every issue on all of the ZenIRCBot repos and pull in those that make sense for the new bot. This includes going through closed issues to remind myself why the old bot had some interesting design decisions.

Below is a graph of this so far. I’m starting with a flat line for the first week to give myself time to build up a buffer and work out a way to fit it into my schedule. Hopefully if you are viewing this in February or March (or even later!) the data will be going up and to the right much like the yellow road I should be staying on.


Categorieën: Mozilla-nl planet

Jorge Villalobos: Webmakering in Belize

wo, 14/01/2015 - 05:49

Just a few months ago, I was approached by fellow Mozillian Christopher Arnold about a very interesting and fairly ambitious idea. He wanted to organize a Webmaker-like event for kids in a somewhat remote area of Belize. He had made some friends in the area who encouraged him to embark on this journey, and he was on board, so it was up to me to decide if I wanted to join in.

As a Costa Rican, I’ve always been very keen on helping out in anything that involves Latin America, and I’m especially motivated when it comes to the easy-to-forget region of Central America. Just last October I participated in ECSL, a regional free software congress, where I helped bootstrap the Mozilla communities in a couple of Central American countries. I was hoping I could do the same in Belize, so I accepted without much hesitation. However, even I had to do some reading on Belize, since it’s a country we hear so little about. Its status as a former British colony, English being its official language, and its diminutive size even by our standards contribute to its relative isolation in the region. This made the event even more challenging and appealing.

After forming an initial team, Christopher took on the task to crowdfund the event. Indiegogo is a great platform for this kind of thing and we all contributed some personal videos and did our fair share of promoting the event. We didn’t quite reach our goal, but raised significant funds to cover our travel. If it isn’t clear by now, I’ll point out that this wasn’t an official Mozilla event, so this all came together thanks to Christopher. He did a fantastic job setting everything up and getting the final team together: Shane Caraveo, Matthew Ruttley and I. A community member from Mexico also meant to attend, but had to cancel shortly before.

Traveling to our venue was a bit unusual. Getting to Belize takes two flights even from Costa Rica, and then it took two more internal flights to make it to Corozal, due to its remoteness. Then it took about an hour drive, including a ride on a hand-cranked ferry, to reach our venue (which was also our hotel during our stay). Years of constant travel and some previous experience on propeller planes made this all much easier for me.

Belize from a tiny plane
I made it to Corozal, as planned, on December 28th. I could only stay for a couple of days because I wanted to make it back home for New Year’s, so we planned accordingly. I would be taking care of the first sessions, with Shane helping out and dealing with some introductory portions, and then Matthew would arrive later and he and Shane would lead the course for the rest of the week.

Part of our logistics involved handing out Firefox OS phones and setting up some laptops for the kids to use. It didn’t take long before things got… interesting.

Charging 6 Firefox OS phones
Having only a couple of power strips and power outlets made juggling all of the hardware a bit tricky, and since our location was a very eco-friendly, self-sustaining lodge, we couldn’t leave stuff plugged in overnight. But this isn’t really the “interesting” part, it just added to it. What really got to us, and kept Shane working furiously in the background during the sessions, was that the phones had different versions of Firefox OS, none of them very recent, and half of them were crashing constantly. We managed to get the non-crashy ones updated, but by the time I left we had yet to find a solution for the rest. Flashing these “old” ZTE phones isn’t a trivial task, it seems.

Then Monday came and the course began. We got about 30 very enthusiastic and smart kids, so putting things in motion was a breeze.

First day of class
A critical factor that made things easy for me was attending a Teach The Web event held in Costa Rica just a couple of weeks earlier. This event was led by Melissa Romaine, and to say that I took some ideas from it would not be doing it justice. I essentially copied everything she did, and it’s because of this that I think my sessions were successful. So, thanks, Melissa!

So, here’s how Melissa’s (I mean, my) sessions went. I showed the kids what a decision tree is and asked them to draw a simple tree of their own, in groups, for a topic I gave each group. After that I showed them a simple app created in Appmaker that implements a decision tree as a fun quiz app. They were then asked to remix (hack on) the example application and adapt it to their own tree. Then they were asked to share their apps and play around with them. This all worked out great and it was a surprisingly easy way to get people acquainted with logic flows and error cases.

Kids hacking away

The next day we got back to pen and paper, this time to design an app from scratch. We asked everyone to come up with their own ideas, and then grouped them again to create wireframes for the different screens and states their apps would have. I was very happy to see very diverse ideas, from alternatives to social networking, to shopping, and more local solutions like sea trip planning. Once they had their mockups ready, it was back to their laptops and Appmaker, to come up with a prototype of their app.

Unfortunately, my time was up and I wasn’t able to see their finished apps. I did catch a glance of what they were working on, and it was excellent. The great thing about kids is that they put up no resistance when it comes to learning new things. Different tools, completely new topics… no problem! It was too bad I had to leave early. Matthew arrived just as I was leaving, so I got to talk to him all of 30 seconds.

But my trip wasn’t over. Due to the logistics of the 4 flights (!) it takes to go back home, I couldn’t make it back in one go, so I chose to spend the night of December 30th in Belize City, in the hopes of finding people to meet who were interested in forming a Mozilla community. I did some poking around in the weeks leading up to the event, but couldn’t find any contacts. However, word of our event got around and we were approached by a government official to talk about what we were doing. So, I got to have a nice conversation with Judene Tingling, Science and Technology Coordinator for the Government of Belize. She was very interested in Webmaker and the event we did, and very keen on repeating it on a larger scale. I hope that we can work with her and her country to get some more Webmaker events going over there.

On my last day I finally got some rest and managed to squeeze in a quick visit to Altun Ha, which is fairly small but still very impressive.

Mayan ruins!

I’ll wrap up this post with some important lessons I learned, more as a note to self if I go back:

  • While the official language is English, a significant amount of people living outside the city centers speak Spanish in their homes (school is taught in English, though). In the cities Spanish is a secondary language at best.
  • When in doubt, bring your own post-its, markers, etc.
  • Tools that require accounts pose a significant hurdle when it comes to children. It’s not a good idea to have them set up accounts or email addresses without parental consent, so be prepared with your own accounts. I ended up creating a bunch of fake email addresses with spamgourmet so I could have enough Webmaker accounts for all computers (Persona, why so many clicks??).
  • If you’re ever in a remote jungle area, wear socks and ideally pants. Being hot is truly insignificant next to being eaten alive by giant bugs that are impervious to repellents. Two weeks later I still have a constellation of bug bites that prove it.

Many thanks to Christopher for setting all of this up, our hosts Bill and Jen at the Cerros Beach Resort, Shane and Matthew for all the hard work, Mike Poessy for setting up the laptops we used, and everyone else who helped out with this event, assistant teachers and students.

Categorieën: Mozilla-nl planet

Michelle Thorne: Diving into PADI’s learning model

wo, 14/01/2015 - 01:23

padi 1

For the last few years, Joi Ito has been blogging about learning to dive with PADI. It wasn’t until I became certified as a diver myself that I really understood how much we can learn from PADI’s educational model.

Here’s a summary of how PADI works, including ideas that we could apply to Webmaker.

With Webmaker at the moment, we’re testing how to empower and train local learning centers to teach the web on an ongoing basis. This is why I’m quite interested in how other certification and learning & engagement models work.

padi 2

PADI’s purpose

The Professional Association of Diving Instructors (PADI) has been around since the late 1960s. It has trained over 130,000 diving instructors to issue millions of learning certifications to divers around the world. Many instructors run their own local businesses, whose main service is to rent out gear and run tours for certified divers, or to certify people learning how to dive.

Through its certification service, PADI became the diving community’s de facto standard-bearer and educational hub. Nearly all diving equipment, training and best practices align with PADI.

No doubt, PADI is a moneymaking machine. Every rung of their engagement ladder comes with a hefty price tag. Diving is not an access-for-all sport. For example, part of the PADI training is about learning how to make informed consumer choices about the dive equipment, which they will later sell to you.

Nevertheless, I do think there is lots to learn from their economic and engagement model.

Blended learning with PADI

PADI uses blended learning to certify its divers.

They mix a multi-hour online theoretical part (regrettably, it’s just memorization) with several in-person skills trainings in the pool and open water. Divers pay a fee ($200-500) to access the learning materials and to work with an instructor. They also send you a physical kit with stickers, pamphlets and a logbook you can use on future dives.

Dive instructors teach new divers in very small groups (mine was 1:1, with a maximum of 1:3). It’s very hands-on and tailored to the learner’s pace. Nevertheless, it has a pretty tight script. The instructor has a checklist of things to teach in order to certify the learner, and you work through those quite methodically. The online theory complements the lessons in the water, although for my course they could’ve cut about 3 hours of video nerding out on dive equipment.

There is room for instructor discretion and lots of local adaptation. For example, you are taught to understand local dive practices and conditions, like currents and visibility, which inform how you adapt the PADI international diving standard to your local dives. This gives the instructor some agency and adaptability.

Having a point of view

PADI makes its point of view very clear. Their best practices are so explicit, and so oft-repeated, that as a learner you really internalize their perspective. In the water, you immediately flag any deviation from The PADI Way.

Mainly, these mantras are for your own safety: breathe deeply and regularly, always dive with a buddy, etc. But by distilling their best practices so simply and embedding them deeply and regularly in the training, as a learner you become an advocate for these practices as well.

Learning with a buddy

The buddy system is particularly interesting. It automatically builds in peer learning and also responsibility for yourself and your buddy. You’re taught to rely on each other, not the dive instructor. You solve each others problems, and this helps you become empowered in the water.

Pathways!

Furthermore, PADI makes its learning pathways very explicit and achievable. After doing one of the entry-level certifications, Open Water Diving, I feel intrigued to take on the next level and try out some of the specializations, like cave diving and night diving.

Throughout the course, you see glimpses of what is possible with further training. You can see more advanced skills and environments becoming unlocked as you gather more experience. The PADI system revolves around tiers of certifications unlocking gear and new kinds of dives, which they do a good job of making visible and appealing.

You can teach, too.

What’s even more impressive is that the combination of the buddy/peer learning model and the clear pathways makes becoming an instructor seem achievable and aspirational, even when you have just started learning.

As a beginner diver, I already felt excited by the possibility of teaching others to dive. Becoming a PADI instructor seems cool and rewarding. And it feels very accessible within the educational offering: you share skills with your buddy; with time and experience, you can teach more skills and people.

Training the trainers

padi engagement ladder

Speaking of instructors, PADI trains them in an interesting way as well. Like new divers, instructors are on a gamification path: you earn points for every diver you certify and for doing various activities in the community. With enough points, you qualify for select in-person instructor trainings or various gear promotions.

Instructors are trained in the same model that they teach: it’s blended, with emphasis on in-person training with a small group of people. You observe a skill, then do it yourself, and then teach it. PADI flies about 100 instructors-to-be to a good dive destination and teaches them in-person for a week or so. Instructors pay for the flights and the training.

At some point, you can earn enough points and training as an instructor that you can certify other instructors. This is the pinnacle of the PADI engagement ladder. We’re doing something similar with Webmaker: the top of the engagement ladder is a Webmaker Super Mentor. That’s someone who trains other mentors. It’s meta, and only appeals to a small subset of people, but it’s a very impactful group.

What’s the role of PADI staff? This is a question we often ask ourselves in the Webmaker context. Mainly, PADI staff are administrators. Some will visit local dive centers to conduct quality control or write up new training modules. They are generally responsible for coordinating instructors and modeling PADI practices.

Local learning, global community

The local dive centers and certified instructors are PADI’s distribution model.

Divers go to a local shop to buy gear, take tours and trainings. The local shop is a source of economic revenue for the instructors and for PADI. As divers level up within the PADI system, they can access more gear and dive tours from these shops.

Stewardship

Lastly, PADI instills in its learners a sense of stewardship of the ocean. It empowers you in a new ecosystem and then teaches you to be an ambassador for it. You feel responsibility and care for the ocean once you’ve experienced it in this new way.

Importantly, this empowerment relies on experiential learning. You don’t feel it just by reading about the ocean. It’s qualitatively different to have seen the coral and sea turtles and schools of fish yourself.

The theory and practice dives in the pool ready you for the stewardship. But you have to do a full dive, in the full glory of the open water, to really get it.

I think this is hugely relevant for Webmaker as well: it’s all good to read about the value of the open web, but it’s not until you’re in the midst of exploring and making on the open web that you realize how important that ecosystem is. Real experience begets responsibility.

Giving back

PADI encourages several ways for you to give back and put your stewardship to use: pick up litter, do aquatic life surveys, teach others about the waters, etc.

They show you that there is a community of divers that you are now a part of. It strikes a good balance between unlocking experiences for you personally and then showing you how you can act upon them to benefit a larger effort.

Going clubbing

As mentioned, there are many shortcomings to the PADI system. It’s always pay-to-play, its educational materials are closed and ridiculously non-remixable, and it’s not accessible in many parts of the world due to (understandable) environmental limitations. Advocacy for the ocean is a by-product of their offering, not its mission.

Still, aspects of their economic and learning model are worth considering for other social enterprises. How can instructors make revenue so they can teach full-time and as a career? How can gear be taught and sold so that divers get quality equipment they know how to use? How can experiential learning be packaged so that you know the value of what you’re getting and gain skills along the way?

I’m pretty inspired by having experienced the PADI Open Water Diving certification process. In the coming months, I’d like to test and apply some of these practices to our local learning center model, the Webmaker Clubs.

If you have more insights on how to do this, or other models worth looking at, share them here!

Categorieën: Mozilla-nl planet

Benoit Girard: CallGraph Added to the Gecko Profiler

wo, 14/01/2015 - 00:40

In the profiler you’ll now find a new tab called ‘CallGraph’. This constructs a call graph from the sample data. It’s the same information that you can extract from the tree view and the timeline, just formatted so that it can be scanned more easily. Keep in mind that this is only a call graph reconstructed from sample points and not a fully instrumented call graph dump: it has a lower collection overhead, but misses anything that occurs between sample points. You’ll still want to use the Tree view to get aggregate costs. You can interact with the view using your mouse or with the W/A/S/D-equivalent keys of your keyboard layout.

Profiler CallGraph


Big thanks to Victor Porof for writing the initial widget. This visualization will be coming to the devtools profiler shortly.


Categorieën: Mozilla-nl planet
