
Gervase Markham: Consumer Security Advice

Mozilla planet - Thu, 15/01/2015 - 13:21

Here’s an attempt at consumer security advice that I saw at a railway station recently. Apparently, secure sites are denoted by “https//” (sic). And it conflates a secure connection with trustworthiness. It’s good that people are trying, but we have a way to go…

Categories: Mozilla-nl planet

Chris Double: Decentralized Websites with ZeroNet

Mozilla planet - Thu, 15/01/2015 - 09:00

ZeroNet is a new project that aims to deliver a decentralized web. It uses a combination of BitTorrent, a custom file server and a web-based user interface to do this, and manages to provide a pretty usable experience.

Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm used for Bitcoin addresses. A request for a website key results in the node looking in the BitTorrent network for peers that are seeding the site. Once peers are selected, ZeroNet connects to them directly via a custom file server protocol that it implements and uses that connection to download the files required for the site. BitTorrent is only used for finding peers, not for transferring the site contents.

Once a site is retrieved, the node starts acting as a peer, serving the site's content to other users. The more users browsing your site, the more peers become available to provide the data. If the original host goes down, the remaining peers can still serve the content.

Site updates are done by the owner making changes and then signing these changes with the private key for the site address. The signed update then starts getting distributed to the peers that are seeding the site.

Browsing is done through a standard web browser. The interface uses WebSockets to communicate with the local node and receive real-time information about site updates. The interface uses a sandboxed iframe to display websites.

Running

ZeroNet is open source and hosted on GitHub. Everything is done through the single zeronet.py command. To run a node:

$ python zeronet.py ...output...

This will start the node and the file server. A check is made to see if the file server is reachable for connections from outside. If this fails it displays a warning, but the system still works; you won't seed sites or get real-time notification of site updates, however. The fix is to open port 15441 in your firewall. ZeroNet can use UPnP to do this automatically, but it requires a MiniUPnP binary for this to work. See the --upnpc command line switch for details.

The node can be accessed from a web browser locally using port 43110. Providing a site address as the path will access a particular ZeroNet site. For example, 1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr is the main ‘hello’ site that is first displayed. To access it you’d use the URL http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr.

Creating a site

To create a site you first need to shut down your running node (using ctrl+c will do it) then run the siteCreate command:

$ python zeronet.py siteCreate
...
- Site private key: ...private key...
- Site address: ...site address...
...
- Site created!

You should record the private key and address as you will need them when updating the site. The command results in a data/address directory being created, where ‘address’ is the site address that siteCreate produced. Inside that are a couple of default files. One of these, content.json, contains JSON data listing the files contained within the site along with signing information. This gets updated automatically when you sign your site after making changes. If you edit the title key in this file you can give your site a title that appears in the user interface instead of the address.
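
As a rough illustration, here is a minimal Node.js sketch of editing that title key. The file path is a placeholder and only the title and files keys come from the description above, so treat this as a sketch rather than ZeroNet's documented format.

// Sketch: give a ZeroNet site a title by editing its content.json.
// The path is a placeholder; substitute your own site address.
var fs = require('fs');
var path = 'data/...site address.../content.json';
var content = JSON.parse(fs.readFileSync(path, 'utf8'));
content.title = 'My ZeroNet Site'; // shown in the UI instead of the address
fs.writeFileSync(path, JSON.stringify(content, null, 2));
// Re-sign the site afterwards (python zeronet.py siteSign <address>) so the
// change is distributed to peers.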

Another file that gets modified during the site creation process is the sites.json file in the data directory. It contains the list of all the sites along with some metadata about them.

If you visit http://127.0.0.1:43110/siteaddress in your browser, where siteaddress is the address created with siteCreate, then you’ll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.

Updating a site

To change a site you first store your files in the data/siteaddress directory. Any HTML, CSS, JavaScript, etc. can be put here; it's like a standard website root directory. Just don't delete the content.json file that's there. Once you've added, modified or removed files you run the siteSign command. First shut down your node, then (replacing siteaddress with the actual address):

$ python zeronet.py siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):

Now you enter the private key that was displayed (and that you hopefully saved) when you ran siteCreate. The site gets signed, the information is stored in content.json, and the update is eventually published to any peers that are currently serving the site.

Deleting a site

You can pause seeding a site from the user interface but you can't delete it there. To do that you must shut down the node and delete the site's data/siteaddress directory manually. You will also need to remove its entry from data/sites.json. When you restart the node the site will no longer appear.
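
For completeness, a small Node.js sketch of those manual steps, assuming the directory layout described above and that sites.json is keyed by site address (run it only while the node is shut down):

// Sketch: remove a site's data directory and its entry in data/sites.json.
var fs = require('fs');
var address = '...site address...'; // the site you want to remove
fs.rmSync('data/' + address, { recursive: true, force: true });
var sites = JSON.parse(fs.readFileSync('data/sites.json', 'utf8'));
delete sites[address]; // assumes entries are keyed by address
fs.writeFileSync('data/sites.json', JSON.stringify(sites, null, 2));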

Site tips

Because the website is displayed in a sandboxed iframe there are some restrictions on what it can do. The most obvious is that only relative URLs work in anchor elements; clicking an absolute URL does nothing. The sandboxed iframe has the allow-top-navigation option, which means you can link to external pages or other ZeroNet sites if you set the target attribute of the anchor element to _top. So this will work:

<a href="http://bluishcoder.co.nz/" target="_top">click me</a>

But this will not:

<a href="http://bluishcoder.co.nz/">click me</a>

Dynamic websites are supported, but they require help from centralized services. The ZeroNet node includes an example of a dynamic website called ‘ZeroBoard’. This site allows users to enter a message in a form, and the message is published to a list that all peering nodes will see. It does this by posting the message to an external web application that the author runs on the regular internet. This web app updates a file inside the site's ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the WebSocket interface.
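
To make that flow concrete, here is a hypothetical sketch of the client side of such a pattern; the endpoint URL and field names are invented for illustration and are not ZeroBoard's actual API.

// Hypothetical: post a message to a centralized web app, which then writes it
// into the site's data, re-signs the site, and pushes the update to peers.
var xhr = new XMLHttpRequest();
xhr.open('POST', 'http://example.com/zeroboard/add');
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function () {
  // The new content arrives back through the local node's WebSocket interface.
  console.log('message submitted:', xhr.status);
};
xhr.send(JSON.stringify({ message: 'Hello from a ZeroNet peer' }));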

Although this works, it's unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using Bitmessage or some other system. Something involving peer-to-peer WebRTC would be interesting.

Conclusion

ZeroNet seems most similar to Tor, I2P or Freenet. Compared to these it lacks the anonymity and encryption aspects, but it decentralizes the site content, which Tor and I2P don't. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript, but this comes with the usual security and tracking concerns.

Site addresses are in the same format as Bitcoin addresses. It should be possible to import the private key into a Bitcoin wallet, so that bitcoins sent to the public address of a site could be accessed by the site owner. I haven't tested this, but I don't see why it couldn't be made to work. Maybe this could be leveraged somehow to enable a web payment method.

ZeroNet’s lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it could be accidentally run or viewed, or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content, the site author could push out an update that contains problematic material. That’s a bit scary.

It’s early days for the project and hopefully some of these issues can be addressed. As it is, though, it works well, is very usable, and is an interesting experiment in decentralizing websites. The project's GitHub page is a good place to look for more information.

Categories: Mozilla-nl planet

Ian Bicking: A Product Journal: Conception

Mozilla planet - Thu, 15/01/2015 - 07:00

I’m going to try to journal the process of a new product that I’m developing in Mozilla Cloud Services.

When Labs closed and I entered management I decided not to do any programming for a while. I had a lot to learn about management, and that’s what I needed to focus on. Whether I learned what I needed to, I don’t know, but I have been getting a bit tired.

We went through a fairly extensive planning process towards the end of 2014. I thought it was a good process. We didn’t end up where we started, which is a good sign – often planning processes are just documenting the conventional wisdom and status quo of a group or project, but in a critically engaged process you are open to considering and reconsidering your goals and commitments.

Mozilla is undergoing some stress right now. We have a new search deal, which is good, but we’ve been seeing declining market share, which is bad. And when you consider that desktop browsers are themselves a decreasing share of the market, it looks worse.

The first planning around this has been to decrease attrition among our existing users. Longer term, much of the focus has been on increasing the quality of our product. A noble goal of course, but does it lead to growth? I suspect it can only address attrition: the people who don’t use Firefox but could will never get an opportunity to see what we are making. If you have other growth techniques then focusing on attrition can be sufficient. Chrome, for instance, does significant advertising and has deals to side-load Chrome onto people’s computers. Mozilla doesn’t have the same resources for that kind of growth.

When we finished up the planning process I realized: damn, all our plans were about product quality. And I liked our plan! But something was missing.

This perplexed me for a while, but I didn’t really know what to make of it. Talking with a friend about it, he asked: then what do you want to make? – a seemingly obvious question that no one had asked me, and somehow hearing the question come at me was important.

Talking through ideas, I reluctantly kept coming back to sharing. It’s the most incredibly obvious growth-oriented product area, since every use of a product is a way to implore non-users to switch. But sharing is so competitive. When I first started with Mozilla we would obsess over the problem of Facebook and Twitter and silos, and then think about it until we threw our hands up in despair.

But I’ve had this trick up my sleeve that I pull out for one project after another because I think it’s a really good trick: make a static copy of the live DOM. Mostly you just iterate over the elements, get rid of scripts and stuff, do a few other clever things, use <base href> and you are done! It’s like a screenshot, but it’s also still a webpage. I’ve been trying to do something with this for a long time. This time let’s use it for sharing…?
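
A minimal sketch of the idea – not the exact code – looks something like this: clone the document, strip scripts, and add a <base href> so relative URLs still resolve.

// Rough illustration: "freeze" the live DOM into a static HTML string.
function freezePage(doc) {
  var clone = doc.documentElement.cloneNode(true);
  // Strip scripts so the copy is inert.
  var scripts = clone.querySelectorAll('script');
  for (var i = 0; i < scripts.length; i++) {
    scripts[i].parentNode.removeChild(scripts[i]);
  }
  // Make relative URLs resolve against the original location.
  var base = doc.createElement('base');
  base.href = doc.location.href;
  var head = clone.querySelector('head');
  if (head) head.insertBefore(base, head.firstChild);
  return '<!DOCTYPE html>' + clone.outerHTML;
}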

So, the first attempt at a concept: freeze the page as though it’s a fancy screenshot, upload it somewhere with a URL, maybe add some fun features because now it’s disassociated from its original location. The resulting page won’t 404, you can save personalized or dynamic content, we could add highlighting or other features.

The big difference with past ideas I’ve encountered is that here we’re not trying to compete with how anyone shares things, this is a tool to improve what you share. That’s compatible with Facebook and Twitter and SMS and anything.

If you think pulling a technology out of your back pocket and building a product around it is putting the cart before the horse, well, maybe… but you have to start somewhere.

[I’ll add a link here to the next post once it is written]

Categories: Mozilla-nl planet

Benjamin Kerensa: Call for Help: Mentors Wanted!

Mozilla planet - Thu, 15/01/2015 - 05:11

This is very last minute as I have not been able to find enough interested people by directly approaching folks, but I have a great mentoring opportunity for Mozillians. One of my friends is a professor at Western Oregon University who tries to expose her students to a different Open Source project each term, and up to bat this term is the Mozilla Project.

So I am looking for mentors from across the project who would be willing to correspond a couple times a week and answer questions from students who are learning about Firefox for Android or Firefox for Desktop.

It is OK not to be an expert on all the questions coming your way; if you do not know an answer, you would help find the right person and get the students the answers they need so they do not hit a roadblock.

This opportunity is open to both staff and contributors. The time commitment should not exceed an hour or two a week, and realistically it could be as little as twenty minutes or so a week to exchange emails.

Not only does this opportunity expose these students to Open Source, it also introduces them to contributing to our project. In the past, I have mentored students from WOU and the end result was that many from the class continued on as contributors.

Interested? Get in touch!

Categories: Mozilla-nl planet

Michael Verdi: Refresh from web in Firefox 35

Mozilla planet - Thu, 15/01/2015 - 01:32

[Screenshot: the Refresh Firefox option offered on the download page]
Back in July, I mentioned working on making download pages offer a reset (now named “Refresh”) when you are trying to download the exact same version of Firefox that you already have. Well, this is now live with Firefox 35 (released yesterday) and it works on our main download page (pictured above) and on the product support page. In addition, our support documentation can now include refresh buttons. This should make the refresh feature easier to discover and use, and let people recover from problems quickly.

Categories: Mozilla-nl planet

James Long: Presenting The Most Over-Engineered Blog Ever

Mozilla planet - Thu, 15/01/2015 - 01:00

Several months ago I posted about plans to rebuild this blog. After a few false starts, I finally finished and launched the new version two weeks ago. The new version uses React and is way better (and I open-sourced it).

Notably, using React my app is split into components that can all be rendered on the client or the server. I have full power to control what gets rendered on each side.

And it feels weird.

It's what people call an "isomorphic" app, which is a fancy way of saying that generally I don't have to think about the server or the client when writing code; it just works in both places. When we finally got JavaScript on the server, this is what everyone dreamed about, but until React there hasn't been a great way to realize this.

I really enjoyed this exercise. I was so embedded in the notion that the server and client are completely separate that it was awkward and weird for a while. It took me a while to figure out how to even structure my project. Eventually, I learned something new that will greatly impact all of my future projects (which is the best kind of learning!).

If you want to see what it's like logged in, I set up a demo site, test.jlongster.com, which has admin access. You can test things like my simple markdown editor.

Yes, this is just a blog. Yes, this is absolutely over-engineering. But it's fun, and I learned. If we can't even over-engineer our own side projects, well, I just don't want to live in that world.

This is a quick post-mortem of my experience and some explanation of how it works. The code is up on GitHub, but beware that it is still quite messy as I did all of this in a small amount of time.

One thing I should note is that I use js-csp (soon to be renamed) channels for all my async work. I find this to be the best way to do anything asynchronous, and you can read my article about it if interested.

The Server & Client Dance

You might be wondering why this is so exciting, since we've been rendering complex pages statically from the server and hooking them up on the client side for ages. The problem is that you used to have to write the code completely separately, one file for the server and one for the client, even though you're describing the same components/behaviors/what have you. That turns out to be a disaster for complex apps (hence the push for fully client-side apps that pull data from APIs).

Unfortunately, full client-side apps (or "single page apps") suffer from slow startup time and lack of discoverability from search engines.

We really want to write components that aren't bound to either the server or the client. And React lets us do that:

let dom = React.DOM;
let Toolbar = React.createClass({
  load: function() {
    // loading functionality...
  },
  render: function() {
    return dom.div(
      { className: 'toolbar' },
      dom.button({ onClick: this.load }, 'Load')
    );
  }
});

This looks like a front-end component, but it's super simple to render on the back-end: React.renderToString(Toolbar()), which would return something like <div class="toolbar"><button>Load</button></div>. The coolest part is when the browser loads the rendered HTML, you can just do React.render(Toolbar(), element), and React won't touch the DOM except to simply hook up your event handlers (like the onClick). element would be the DOM element wherever the toolbar was prerendered.
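
For context, here is a minimal sketch of the server side, assuming an Express app and the Toolbar component above; it mirrors the React 0.12-era API style used in this post rather than the blog's actual code.

var express = require('express');
var React = require('react');
var app = express();

app.get('/', function(req, res) {
  // Render the component to markup on the server; the client bundle later
  // calls React.render(Toolbar(), element) to attach the event handlers.
  var html = React.renderToString(Toolbar());
  res.send('<div id="toolbar">' + html + '</div>' +
           '<script src="/bundle.js"></script>');
});

app.listen(3000);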

It's not that hard to build a workflow on top of this that can fully prerender a complex app so that it loads instantly on the client, but additionally all the event handlers get hooked up appropriately. To do this, you do need to figure out how to specify data dependencies so that the server can pull in everything it needs to render (see later sections), but there are libraries to help with this. I'm never doing $('.date-picker').datePicker() again, but I'm also not bound to a fully client-side technology like Web Components or Angular (Ember is finally working on server-side rendering).

Full prerendering is nice, but you probably don't need quite all of that. Most likely, you want to prerender some of the basic structure, but let the client-side pull in the rest. The beauty of React's component approach is that it's easy (once you have server-side rendering going with routes & data dependencies) to fine-tune precisely what gets rendered where. Each component can configure itself to be server-renderable or not, and the client basically picks up wherever the server left off. It depends on how you set it up, so I won't go into detail about it, but I certainly felt empowered with control to fine-tune everything.

Not to mention that anything server renderable is easily testable!

A Quick Glance at Code

React provides a great infrastructure for server-rendering, but you need a lot more. You need to be able to run the same routes server-side and figure out which data your components need. This is where react-router comes in. This is the critical piece for complex React apps.

It's a great router for the client-side, but it also provides the pieces for server-rendering. For my blog, I specify the routes in routes.js, and the router is run in the bootstrap file. The server and client call this run function. The router tells me the components that are required for the specific URL.

For data handling, I copied an approach from the react-router async data example. Each component can define a fetchData static method, and you can also see in the bootstrap file a method that runs through all the required components and gathers the data from these methods. It attaches the fetched data as a property to each component.
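
A hedged sketch of that pattern, with the component and API names invented for illustration (dom is React.DOM as in the earlier example):

// Each component declares its data needs via a static fetchData method.
var Post = React.createClass({
  statics: {
    fetchData: function(params) {
      return api.getPost(params.id); // hypothetical API call returning a promise
    }
  },
  render: function() {
    return dom.div(null, this.props.data.title);
  }
});

// The bootstrap code gathers data for every matched component before rendering.
function fetchAllData(components, params) {
  return Promise.all(
    components
      .filter(function(c) { return c.fetchData; })
      .map(function(c) { return c.fetchData(params); })
  );
}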

This is simplistic. More complex apps use an architecture like Flux. I'm not entirely happy with the fetchData approach, but it works alright for small apps like a blog. The point here is that you have the infrastructure to do this without a whole lot of work.

Ditching Client-Side Page Transitions

With this setup, instead of refreshing the entire page whenever you click a link, it can just fetch any new data it needs and only update parts of the page that need to be changed. react-router especially helps with this, as it takes care of all of the pushState work to make it feel like the page actually changed. This makes the site pretty snappy.

Although it feels a little weird to do that for a blog, I had it working at one point. The page never refreshed; it only fetched data over XHR and updated the page contents. In fact, I enabled that mode on the demo site, test.jlongster.com, so you can play with it there.

I ended up disabling it though. The main reason is that many of my demos mutate the DOM directly, so you couldn't reliably enter and leave a post page without side effects. In general, I realized that it was just too much work for a simple blog. I'm really glad I learned how to set this up, but rendering everything on the server is nice and simple.

It turns out that writing React server apps is completely awesome. I didn't expect to end up here, but think about it, I'm writing in React but my whole site acts as if it were a site from the 90s where a request is made, data is fetched, and HTML is rendered. Rendering transitions on the client without refreshing the page is just an optimization.

There is still a React piece on the client which "renders" each page, but all it is doing is hooking up the event handlers.

Implementation Notes

Here's a few more details about how everything works.

Folder Structure

The src folder is the core of the app and everything in there can be rendered on the server or the client. The server folder holds the express server and the API implementation, and the static/js folder holds the client-side bootstrapping code.

Both sides pull in the src directory with relative imports, like require('../src/routes'). The components within src each fetch the data they need, but this needs to work on the client and the server. My blog runs everything only on the server now, but I'm discussing apps that support client-side rendering too.

The problem is that components in src need to pull in different modules if they are on the server or the client. If they are on the server, they can call API methods directly, but on the client they need to use XHR. I solve this by creating an implementation folder impl on the server and the client, with the same modules that implement the same APIs. Components can require impl/api.js and they will load the right API implementation, as seen here.

In Node, this require works because I symlink server/impl as impl in my node_modules folder. On the client, I configure webpack to resolve the impl folder to the client-side implementation. All of the database methods are implemented in the server-side api.js, and the same API is implemented in the client-side api.js, which calls the back-end API over XHR.
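
The webpack half of that setup could look roughly like this; the entry and output paths are assumptions, and the resolve alias is the relevant part.

// webpack.config.js (sketch): make require('impl/...') resolve to the
// client-side implementation folder.
module.exports = {
  entry: './static/js/main.js',
  output: { path: __dirname + '/static/js', filename: 'bundle.js' },
  resolve: {
    alias: {
      impl: __dirname + '/static/js/impl'
    }
  }
};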

I tried to munge NODE_PATH at first, but I found the above setup rather elegant.

Large Static HTML Chunks

There are a couple of places on my blog where the content is simply a large static chunk of HTML, like the projects section. I don't use JSX, and I didn't really feel like wrapping them up in components anyway. I simply dump this content in the static folder and created server- and client-side implementations of a statics.js module that loads in this content. To render it, I just tell React to load it as raw HTML.
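
In React, that "raw HTML" step is typically done with dangerouslySetInnerHTML; here is a sketch, with the statics.js API invented for illustration.

// Render a pre-built HTML chunk without React managing its contents.
var statics = require('impl/statics'); // hypothetical statics module

var Projects = React.createClass({
  render: function() {
    return dom.div({
      dangerouslySetInnerHTML: { __html: statics.get('projects.html') }
    });
  }
});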

Gulp & Webpack

I use 6to5 to write ES6 code and compile it to ES5. I set up a gulp workflow to build everything on the server-side, run the app and restart it on changes. For the client, I use webpack to bundle everything together into a single js file (mostly, I use code splitting to separate out a few modules into other files). Both run 6to5 on all the code.
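
A rough sketch of what such a gulpfile could look like; the task names and paths are assumptions (gulp-6to5 was the 6to5 plugin of the time).

// gulpfile.js (sketch): compile src/ with 6to5 and rebuild on changes.
var gulp = require('gulp');
var to5 = require('gulp-6to5');

gulp.task('build-server', function() {
  return gulp.src('src/**/*.js')
    .pipe(to5())
    .pipe(gulp.dest('build'));
});

gulp.task('watch', ['build-server'], function() {
  gulp.watch('src/**/*.js', ['build-server']);
});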

I like this setup, but it does feel like there is duplicate work going on. It'd be nice to somehow use webpack for node modules too, and only have a single build process.

Ansible/Docker

In addition to all of this, I completely rebuilt my server and now use ansible and docker. Both are amazing; I can use ansible to bootstrap a new machine and then docker to run any number of apps on it. This deserves its own post.

I told you I over-engineered this right?!

Todo

My new blog was an exercise in how to write React apps that blur the server/client distinction. As it's my first app of this type, it's quite terrible in some ways. There are a lot of things I could clean up, so don't focus on the details.

I think the overall structure is pretty sound, however. A few things I want to improve:

  • Testing. Right now I only test the server-side API. I'd like to learn slimerjs and how to integrate it with mocha.
  • Data dependencies. The fetchData method on components was a good starting point, but I think it's a little awkward and it would probably be good to have very basic Flux-style stores instead.
  • Async. I also used this as an excuse to try js-csp on a real project, and it was quite wonderful. But I also saw some glaring sore spots and I'm going to fix them.
  • Cleanup. Many of the utility functions and a few other things are still from my old code, and are pretty ugly.

I hope you learned something. I know I had fun.

Categories: Mozilla-nl planet

Alex Gibson: How to help find a regression range in Firefox Nightly

Mozilla planet - Thu, 15/01/2015 - 01:00

I recently spotted a visual glitch in a CSS animation that was only happening in Firefox Nightly. I was pretty confident the animation had played fine just a couple of weeks earlier, so after some debugging and ruling out any obvious wrongdoing in my own code, I suspected that a recent change in Firefox must have caused a regression. Not knowing quite what else to do, I decided to file a bug to see if anyone else could figure out what was going wrong.

After some initial discussion it turned out the animation was only broken in Firefox on OS X, so definitely a bug! It could have been caused by any number of code changes in the previous few weeks and could not be reproduced on other platforms. So how could I go about helping to find the cause of the regression?

It was then that someone pointed me to a tool I hadn't heard of before, called mozregression. It's an interactive regression range finder for Mozilla nightly and inbound builds. Once installed, all you need to do is pass in a last known "good" date together with a known "bad" date and a URL to test. The tool then automates downloading and running different nightly builds against the affected URL.

mozregression --good=2014-10-01 --bad=2014-10-02 -a "https://example.com"

After each run, mozregression asks you if the build is "good" or "bad" and then continues to narrow down the regression range until it finds when the bug was introduced. The process takes a while to run, but in the end it then spits out a pushlog like this.

This helped to narrow down the cause of the regression considerably, and together with a reduced test case we were then able to work out which commit was the cause.

The resulting patch also turned out to fix another bug that was affecting Leaflet.js maps in Firefox. Result!

Categories: Mozilla-nl planet
