
Mozilla Nederland
De Nederlandse Mozilla-gemeenschap (the Dutch Mozilla community)

Mozilla Firefox 35 Now Available with Firefox Hello Conversations, Better ... - Tech Times

Nieuws verzameld via Google - do, 15/01/2015 - 10:27

Tech Times

Mozilla Firefox 35 Now Available with Firefox Hello Conversations, Better ...
Tech Times
Mozilla has just released Firefox version 35 along with a slew of performance improvements, fixes and an enhanced version of Firefox Hello, the company's free cross-browser video chat app. The new version also shares Wi-Fi and cellular signals of ...

Categorieën: Mozilla-nl planet

Mozilla launches Firefox 35 with WebRTC aboard - Inquirer

Nieuws verzameld via Google - do, 15/01/2015 - 09:23

Mozilla launches Firefox 35 with WebRTC aboard
Inquirer
MOZILLA HAS RELEASED Firefox 35 after a successful beta period. The new edition of Firefox is the first to offer the WebRTC service Firefox Hello, which allows users of compatible browsers to natter to each other in real time without add-ons or separate programs.

Categorieën: Mozilla-nl planet

Chris Double: Decentralized Websites with ZeroNet

Mozilla planet - do, 15/01/2015 - 09:00

ZeroNet is a new project that aims to deliver a decentralized web. It uses a combination of bittorrent, a custom file server and a web based user interface to do this and manages to provide a pretty useable experience.

Users run a ZeroNet node and do their web browsing via the local proxy it provides. Website addresses are public keys, generated using the same algorithm as used for bitcoin addresses. A request for a website key results in the node looking in the bittorrent network for peers that are seeding the site. Peers are selected and ZeroNet connects to them directly via the custom file server it implements, which is used to download the files required for the site. Bittorrent is only used for selecting peers, not for the site contents.

Once a site is retrieved the node then starts acting as a peer, serving the site’s content to users. The more users browsing your site, the more peers become available to provide the data. If the original site goes down the remaining peers can still serve the content.

Site updates are done by the owner making changes and then signing these changes with the private key for the site address. The update then starts getting distributed to the peers that are seeding the site.

Browsing is done through a standard web browser. The interface uses Websockets to communicate with the local node and receive real time information about site updates. The interface uses a sandboxed iframe to display websites.

Running

ZeroNet is open source and hosted on github. Everything is done through the one zeronet.py command. To run a node:

$ python zeronet.py
...output...

This will start the node and the file server. A check is made to see if the file server is available for connections externally. If this fails it displays a warning but the system still works. You won’t seed sites or get real time notification of site updates however. The fix for this is to open port 15441 in your firewall. ZeroNet can use UPNP to do this automatically but it requires a MiniUPNP binary for this to work. See the --upnpc command line switch for details.

The node can be accessed from a web browser locally using port 43110. Providing a site address as the path will access a particular ZeroNet site. For example, 1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr is the main ‘hello’ site that is first displayed. To access it you’d use the URL http://127.0.0.1:43110/1EU1tbG9oC1A8jz2ouVwGZyQ5asrNsE4Vr.

Creating a site

To create a site you first need to shut down your running node (using ctrl+c will do it) then run the siteCreate command:

$ python zeronet.py siteCreate
...
- Site private key: ...private key...
- Site address: ...site address...
...
- Site created!

You should record the private key and address as you will need them when updating the site. The command results in a data/address directory being created, where ‘address’ is the site address that siteCreate produced. Inside that are a couple of default files. One of these, content.json, contains JSON data listing the files contained within the site and signing information. This gets updated automatically when you sign your site after doing updates. If you edit the title key in this file you can give your site a title that appears in the user interface instead of the address.

Another file that gets modified during this site creation process is the sites.json file in the data directory. It contains the list of all the sites and some metadata about them.

If you visit http://127.0.0.1:43110/siteaddress in your browser, where siteaddress is the address created with siteCreate, then you’ll see the default website that is created. If your node is peering successfully and you access this address from another node it will download the site, display it, and start seeding it. This is how the site data spreads through the network.

Updating a site

To change a site you must first store your files in the data/siteaddress directory. Any HTML, CSS, JavaScript, etc. can be put here. It’s like a standard website root directory. Just don’t delete the content.json file that’s there. Once you’ve added, modified or removed files you run the siteSign command. First shut down your node, then run (replacing siteaddress with the actual address):

$ python zeronet.py siteSign siteaddress
- Signing site: siteaddress...
Private key (input hidden):

Now you enter the private key that was displayed (and hopefully you saved) when you ran siteCreate. The site gets signed, the information is stored in content.json, and the update is eventually published to any peers that are currently serving it.

Deleting a site

You can pause seeding a site from the user interface but you can’t delete it. To do that you must shut down the node and delete the site’s data/siteaddress directory manually. You will also need to remove its entry from data/sites.json. When you restart the node it will no longer appear.

Site tips

Because the website is displayed in a sandboxed iframe there are some restrictions on what it can do. The most obvious is that only relative URLs work in anchor elements. If you click on an absolute URL it does nothing. The sandboxed iframe has the allow-top-navigation option, which means you can link to external pages or other ZeroNet sites if you set the target attribute of the anchor element to _top. So this will work:

<a href="http://bluishcoder.co.nz/" target="_top">click me</a>

But this will not:

<a href="http://bluishcoder.co.nz/">click me</a>

Dynamic websites are supported, but they require help from centralized services. The ZeroNet node includes an example of a dynamic website called ‘ZeroBoard’. This site allows users to enter a message in a form and have it published to a list of messages which all peering nodes will see. It does this by posting the message to an external web application that the author runs on the standard internet. This web app updates a file inside the site’s ZeroNet directory and then signs it. The result is published to all peers and they automatically get the update through the Websocket interface.

Although this works, it’s unfortunate that it relies on a centralized web application. The ZeroNet author has posted that they are looking at decentralized ways of doing this, maybe using bitmessage or some other system. Something involving peer to peer WebRTC would be interesting.

Conclusion

ZeroNet seems to be most similar to Tor, I2P or Freenet. Compared to these it lacks the anonymity and encryption aspects. But it decentralizes the site content, which Tor and I2P don’t. Freenet provides decentralization too but does not allow JavaScript in sites. ZeroNet does allow JavaScript but this has the usual security and tracking concerns.

Site addresses are in the same format as bitcoin addresses. It should be possible to import the private key into bitcoin and then bitcoins sent to the public address of a site would be accessed by the site owner. I haven’t tested this but I don’t see why it couldn’t be made to work. Maybe this could be leveraged somehow to enable a web payment method.

ZeroNet’s lack of encryption or obfuscation of the site contents could be a problem. A peer holds the entire site in a local directory. If this contains malicious or illegal content it can be accidentally run or viewed. Or it could be picked up in automated scans and the user held responsible. Even if the site originally had harmless content the site author could push an update out that contains problematic material. That’s a bit scary.

It’s early days for the project and hopefully some of these issues can be addressed. As it is though it works well, is very useable, and is an interesting experiment in decentralizing websites. Some links for more information:

Categorieën: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-01-14 Calendar builds

Thunderbird - do, 15/01/2015 - 08:52

Common (excluding Website bugs)-specific: (16)

  • Fixed: 432675 – Revise layout of Alarms option pane in preference dialog
  • Fixed: 639284 – Metadata of “Provider for Google Calendar” extension are not translated at AMO
  • Fixed: 909183 – calIDateTime.compare returns incorrect result with floating timezone
  • Fixed: 941425 – Yearly rule “Last day of a month” can’t be set with the UI and is wrongly displayed in the views.
  • Fixed: 958978 – Yearly recurrences with BYMONTH and more BYDAY are displayed wrongly if the last day of the month is not displayed in the view
  • Fixed: 985114 – Make use of CSS variables
  • Fixed: 1072815 – Multiple locales were missing in Lightning 3.3.1 release
  • Fixed: 1080659 – Converting email into task/event fails when localization uses regular expression special characters
  • Fixed: 1082286 – [icaljs] Date/Time Picker seems to have a timezone error
  • Fixed: 1107388 – No auth prompt is shown when subscribing to CalDAV calendars [domWin.document is null]
  • Fixed: 1112502 – Right clicking on a recurring event, and bringing up the attendance sub-menu gives unreadable titles
  • Fixed: 1114504 – Extra localization notes for bug 493389 – Provider for Google Calendar cannot sync tasks
  • Fixed: 1115965 – Provide filename and line number in cal.WARN and cal.ERROR
  • Fixed: 1117324 – Improve stack trace for calListenerBag
  • Fixed: 1117456 – Run unit tests on ical.js as well as libical
  • Fixed: 1118489 – promisifyCalendar mis-invokes Proxy constructor

Sunbird will no longer be actively developed by the Calendar team.

Windows builds: Official Windows

Linux builds: Official Linux (i686), Official Linux (x86_64)

Mac builds: Official Mac

Categorieën: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-01-14 Thunderbird comm-central builds

Thunderbird - do, 15/01/2015 - 08:51

Thunderbird-specific: (30)

  • Fixed: 486501 – edit an address after autocomplete and autocomplete reselects the first choice, even reverts to a different address (involving quoted “Display Name”)
  • Fixed: 505721 – Thunderbird theme on Linux feels cluttered (inappropriate spacing, etc.)
  • Fixed: 532067 – (Windows 7 theme) Icon for Sent folder to match qute and gnomestripe metaphor
  • Fixed: 733856 – [meta] Australis OSX tracker bug
  • Fixed: 735318 – Chatting notification only show the selected conversation not the one notifying me.
  • Fixed: 925746 – Option to Open the Preferences in a Tab
  • Fixed: 947656 – [meta] Shared Themes
  • Fixed: 1025684 – With mail.identity.default.autocompleteToMyDomain=true, edit an address after autocomplete and autocomplete reselects the first choice, even reverts to a different address (only for speedy corrections!)
  • Fixed: 1094706 – Thunderbird changes needed due to the web installer interfaces now using browsers instead of DOM windows
  • Fixed: 1095893 – opened attachments (using “Open with”) are no longer set read only
  • Fixed: 1099068 – Switch to new event constructors in Thunderbird
  • Fixed: 1105841 – Lightweight themes don’t change styling properly on OS X 10.10.
  • Fixed: 1106796 – reference to undefined property this.lastMessage.sourceFolder in resource:///modules/activity/moveCopy.js
  • Fixed: 1107844 – Recipient autocomplete: For multiple matches, select dropdown result other than first using mouse click, confirm with Tab or Enter, and TB uses 1st result instead (i.e. private msg gets easily addressed and sent to random recipients)
  • Fixed: 1110095 – Messages -> Create filter from message broken by bug 1085205
  • Fixed: 1110389 – Allow “create filter from message” also for other fields in message header
  • Fixed: 1113035 – Adapt Debugger Server startup code for changes in bug 1059001. error DebuggerServer.openListener is not a function
  • Fixed: 1113298 – Recipient autocomplete: From results dropdown, select any but 1st entry via mouse click, then Ctrl+Enter to send message immediately: sudden change to another recipient (1st result)
  • Fixed: 1113610 – New version of other actions button breaks CompactHeader addon
  • Fixed: 1115018 – On startup, error “Windows cannot find … uninstall\helper.exe”
  • Fixed: 1115034 – Recipient type selectors (To, CC, etc.) design nits on WinXP theme: showing two dropdown arrows, inconsistent background color and hover behaviour
  • Fixed: 1115189 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/mozmill/content-tabs/test-content-tab.js | test-content-tab.js::test_content_tab_context_menu
  • Fixed: 1115990 – Add an option to Prefs/Advanced to enable/disable the hardware acceleration
  • Fixed: 1116958 – Win7+: treelines on selected treechildren should have the text color
  • Fixed: 1117089 – preference.value is null opening the font settings.
  • Fixed: 1118703 – TEST-UNEXPECTED-FAIL | toolkit/forgetaboutsite/test/unit/test_removeDataFromDomain.js | xpcshell return code: 0
  • Fixed: 1119468 – Port Bug 1118032 to TB [the word “Automatic” does not convey any information on what the choice actually does]
  • Fixed: 1119512 – mozilla/mach fails to configure without python in path
  • Fixed: 1119911 – autosync.js, line 75: ReferenceError: reference to undefined property this.autoSyncManager
  • Fixed: 1119959 – TEST-UNEXPECTED-FAIL | toolkit/components/telemetry/tests/unit/test_TelemetryPing.js | xpcshell return code: 0

MailNews Core-specific: (14)

  • Fixed: 11039 – Filter outgoing/Sent messages (perhaps to use a different Sent/FCC folder)
  • Fixed: 570711 – When going online, send outbox first
  • Fixed: 695671 – Filtering stops — Only 1 filter executes for “Run Filters on Folder” (after execution of action=”Delete”, further/different actions on other mails is not executed and filter log is not written to filterlog.html).
  • Fixed: 741340 – Port |Bug 739188 – Allow crosscompiling for Windows without NSIS| to comm-central
  • Fixed: 872357 – Perma-orange on Windows: TEST-UNEXPECTED-FAIL | ../../../resources/mailTestUtils.js:444 | Error: CreateFile failed for c:\users\cltbld\appdata\local\temp\tmpmddsgp\mailtest\Mail\Local Folders\Inbox, error 32
  • Fixed: 998191 – Introduce the structured header concept to nsIMsgCompFields
  • Fixed: 1070525 – applying the ‘delete’ operator to an unqualified name is deprecated in Feed reader
  • Fixed: 1114328 – Remove some useless variables
  • Fixed: 1115113 – fix signed/unsigned comparison warnings in mailnews/local/src/nsLocalUndoTxn.cpp
  • Fixed: 1115145 – Convert some occurences of ns*Array.IndexOf(elem) != kNotFound to ns*Array.Contains(elem)
  • Fixed: 1116561 – filter after the fact should return an error if any filter failed
  • Fixed: 1116959 – Removing search terms blanks out results list in address book quick search and contacts side bar search
  • Fixed: 1116982 – TEST-UNEXPECTED-FAIL | mailnews/compose/test/unit/test_messageHeaders.js | xpcshell return code: -11
  • Fixed: 1120093 – mailnews/imap/test/unit/test_imapSearch.js, line 260: SyntaxError: test for equality (==) mistyped

Windows builds: Official Windows, Official Windows installer

Linux builds: Official Linux (i686), Official Linux (x86_64)

Mac builds: Official Mac

Categorieën: Mozilla-nl planet

Firefox 35 zum Download: Neuer Mozilla-Browser will chatten - T-Online

Nieuws verzameld via Google - do, 15/01/2015 - 08:26

T-Online

Firefox 35 zum Download: Neuer Mozilla-Browser will chatten
T-Online
Firefox 35 has seen hardly any changes to its interface; instead it has mainly been improved under the hood: the new Mozilla browser has been further sped up and hardened. A total of nine security holes are closed. For the ...
Mozilla stellt Firefox 35 vor (ZDNet.de)
Mozilla veröffentlicht Firefox 35 (soeren-hentzschel.at)
Firefox 35: Neuer Mozilla-Browser steht zum Download bereit (STERN)
DIE WELT - PC Games
Categorieën: Mozilla-nl planet

Ian Bicking: A Product Journal: Conception

Mozilla planet - do, 15/01/2015 - 07:00

I'm going to try to journal the process of a new product that I'm developing in Mozilla Cloud Services.

When Labs closed and I entered management I decided not to do any programming for a while. I had a lot to learn about management, and that’s what I needed to focus on. Whether I learned what I need to I don’t know, but I have been getting a bit tired.

We went through a fairly extensive planning process towards the end of 2014. I thought it was a good process. We didn’t end up where we started, which is a good sign – often planning processes are just documenting the conventional wisdom and status quo of a group or project, but in a critically engaged process you are open to considering and reconsidering your goals and commitments.

Mozilla is undergoing some stress right now. We have a new search deal, which is good, but we’ve been seeing declining marketshare which is bad. And then when you consider that desktop browsers are themselves a decreasing share of the market it looks worse.

The first planning around this has been to decrease attrition among our existing users. Longer term much of the focus has been in increasing the quality of our product. A noble goal of course, but does it lead to growth? I suspect it can only address attrition, the people who don’t use Firefox but could won’t have an opportunity to see what we are making. If you have other growth techniques then focusing on attrition can be sufficient. Chrome for instance does significant advertising and has deals to side-load Chrome onto people’s computers. Mozilla doesn’t have the same resources for that kind of growth.

When we finished up the planning process I realized, damn, all our plans were about product quality. And I liked our plan! But something was missing.

This perplexed me for a while, but I didn't really know what to make of it. Talking with a friend about it, he asked: then what do you want to make? – a seemingly obvious question that no one had asked me, and somehow hearing the question coming at me was important.

Talking through ideas, I reluctantly kept coming back to sharing. It’s the most incredibly obvious growth-oriented product area, since every use of a product is a way to implore non-users to switch. But sharing is so competitive. When I first started with Mozilla we would obsess over the problem of Facebook and Twitter and silos, and then think about it until we threw our hands up in despair.

But I’ve had this trick up my sleeve that I pull out for one project after another because I think it’s a really good trick: make a static copy of the live DOM. Mostly you just iterate over the elements, get rid of scripts and stuff, do a few other clever things, use <base href> and you are done! It’s like a screenshot, but it’s also still a webpage. I’ve been trying to do something with this for a long time. This time let’s use it for sharing…?
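(As a concrete illustration of that trick, here is a minimal sketch of freezing the live DOM. This is not the actual implementation; the freezePage helper is a hypothetical name, and it only shows the basic moves: clone the document, strip scripts, add a <base href>, and serialize the result.)

// Hypothetical sketch of the "static copy of the live DOM" trick, not the real code.
function freezePage(doc) {
  // Work on a deep clone so the live page is left untouched.
  var clone = doc.documentElement.cloneNode(true);

  // Get rid of scripts (part of the "few other clever things" is stripping
  // anything that would re-run when the copy is viewed later).
  var scripts = clone.querySelectorAll('script, noscript');
  Array.prototype.forEach.call(scripts, function (node) {
    node.parentNode.removeChild(node);
  });

  // <base href> keeps relative URLs (images, stylesheets) pointing at the
  // original location even though the copy will live somewhere else.
  var base = doc.createElement('base');
  base.href = doc.location.href;
  var head = clone.querySelector('head');
  if (head) {
    head.insertBefore(base, head.firstChild);
  }

  // It's like a screenshot, but it's also still a webpage.
  return '<!DOCTYPE html>\n' + clone.outerHTML;
}

// Usage: freezePage(document) gives an HTML string that can be uploaded
// somewhere and given its own URL.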

So, the first attempt at a concept: freeze the page as though it’s a fancy screenshot, upload it somewhere with a URL, maybe add some fun features because now it’s disassociated from its original location. The resulting page won’t 404, you can save personalized or dynamic content, we could add highlighting or other features.

The big difference with past ideas I’ve encountered is that here we’re not trying to compete with how anyone shares things, this is a tool to improve what you share. That’s compatible with Facebook and Twitter and SMS and anything.

If you think pulling a technology out of your back pocket and building a product around it is like putting the cart before the horse, well maybe… but you have to start somewhere.

[I’ll add a link here to the next post once it is written]

Categorieën: Mozilla-nl planet

Benjamin Kerensa: Call for Help: Mentors Wanted!

Mozilla planet - do, 15/01/2015 - 05:11

This is very last minute, as I have not been able to find enough people interested by directly approaching folks, but I have a great mentoring opportunity for Mozillians. One of my friends is a professor at Western Oregon University who tries to expose her students to a different Open Source project each term, and up to bat this term is the Mozilla Project.

So I am looking for mentors from across the project who would be willing to correspond a couple times a week and answer questions from students who are learning about Firefox for Android or Firefox for Desktop.

It is ok not to be an expert on all the questions coming your way, but if you do not know an answer you would help find the right person and get the students the answers they need so they do not hit a roadblock.

This opportunity is open to both staff and contributors and the time commitment should not exceed an hour or two a week but realistically could be as little as twenty minutes or so a week to exchange emails.

Not only does this opportunity help expose these students to Open Source but also to contributing to our project. In the past, I have mentored students from WOU and the end result was many from the class continued on as contributors.

Interested? Get in touch!

Categorieën: Mozilla-nl planet

Michael Verdi: Refresh from web in Firefox 35

Mozilla planet - do, 15/01/2015 - 01:32

[Image: the Refresh Firefox option on the download page]
Back in July, I mentioned working on making download pages offer a reset (now named “Refresh”) when you are trying to download the same exact version of Firefox that you already have. Well, this is now live with Firefox 35 (released yesterday) and it works on our main download page (pictured above) and on the product support page. In addition, our support documentation can now include refresh buttons. This should make the refresh feature easier to discover and use and let people recover from problems quickly.

Categorieën: Mozilla-nl planet

James Long: Presenting The Most Over-Engineered Blog Ever

Mozilla planet - do, 15/01/2015 - 01:00

Several months ago I posted about plans to rebuild this blog. After a few false starts, I finally finished and launched the new version two weeks ago. The new version uses React and is way better (and I open-sourced it).

Notably, using React my app is split into components that can all be rendered on the client or the server. I have full power to control what gets rendered on each side.

And it feels weird.

It's what people call an "isomorphic" app, which is a fancy way of saying that generally I don't have to think about the server or the client when writing code; it just works in both places. When we finally got JavaScript on the server, this is what everyone dreamed about, but until React there hasn't been a great way to realize this.

I really enjoyed this exercise. I was so embedded with the notion that the server and client are completely separate that it was awkward and weird for a while. It took me a while to figure out how to even structure my project. Eventually, I learned something new that will greatly impact all of my future projects (which is the best kind of learning!).

If you want to see what it's like logged in, I set up a demo site, test.jlongster.com, which has admin access. You can test things like my simple markdown editor.

Yes, this is just a blog. Yes, this is absolutely over-engineering. But it's fun, and I learned. If we can't even over-engineer our own side projects, well, I just don't want to live in that world.

This is a quick post-mortem of my experience and some explanation of how it works. The code is up on github, but beware it is still quite messy as I did all of this in a small amount of time.

One thing I should note is that I use js-csp (soon to be renamed) channels for all my async work. I find this to be the best way to do anything asynchronous, and you can read my article about it if interested.

The Server & Client Dance

You might be wondering why this is so exciting, since we've been rendering complex pages statically from the server and hooking them up on the client-side for ages. The problem is that you used to have to write code completely separately, one file for the server and one for the client, even though you're describing the same components/behaviors/what have you. That turns out to be a disaster for complex apps (hence the push for fully client-side apps that pull data from APIs).

Unfortunately, full client-side apps (or "single page apps") suffer from slow startup time and lack of discoverability from search engines.

We really want to write components that aren't bound to either the server or the client. And React lets us do that:

let dom = React.DOM;
let Toolbar = React.createClass({
  load: function() {
    // loading functionality...
  },
  render: function() {
    return dom.div(
      { className: 'toolbar' },
      dom.button({ onClick: this.load }, 'Load')
    );
  }
});

This looks like a front-end component, but it's super simple to render on the back-end: React.renderToString(Toolbar()), which would return something like <div class="toolbar"><button>Load</button></div>. The coolest part is that when the browser loads the rendered HTML, you can just do React.render(Toolbar(), element), and React won't touch the DOM except to simply hook up your event handlers (like the onClick). element would be the DOM element wherever the toolbar was prerendered.
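As a minimal sketch of what that looks like on the server (my illustration, not the blog's actual server code; the route and bundle names are made up), an Express handler just sends the rendered string:

var express = require('express');
var React = require('react');
// Toolbar is the component from the snippet above.

var app = express();

app.get('/', function(req, res) {
  // Render the component to plain HTML on the server.
  var html = React.renderToString(Toolbar());
  res.send(
    '<!DOCTYPE html><html><body>' +
    '<div id="root">' + html + '</div>' +
    // The client bundle then calls React.render(Toolbar(), root) and only
    // hooks up event handlers on top of the prerendered markup.
    '<script src="/bundle.js"></script>' +
    '</body></html>'
  );
});

app.listen(3000);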

It's not that hard to build a workflow on top of this that can fully prerender a complex app so that it loads instantly on the client, but additionally all the event handlers get hooked up appropriately. To do this, you do need to figure out how to specify data dependencies so that the server can pull in everything it needs to render (see later sections), but there are libraries to help with this. I'm never doing $('.date-picker').datePicker() again, but I'm also not bound to a fully client-side technology like Web Components or Angular (Ember is finally working on server-side rendering).

Full prerendering is nice, but you probably don't need quite all of that. Most likely, you want to prerender some of the basic structure, but let the client-side pull in the rest. The beauty of React's component approach is that it's easy (once you have server-side rendering going with routes & data dependencies) to fine-tune precisely what gets rendered where. Each component can configure itself to be server-renderable or not, and the client basically picks up wherever the server left off. It depends on how you set it up, so I won't go into detail about it, but I certainly felt empowered with control to fine-tune everything.

Not to mention that anything server renderable is easily testable!

A Quick Glance at Code

React provides a great infrastructure for server-rendering, but you need a lot more. You need to be able to run the same routes server-side and figure out which data your components need. This is where react-router comes in. This is the critical piece for complex React apps.

It's a great router for the client-side, but it also provides the pieces for server-rendering. For my blog, I specify the routes in routes.js, and the router is run in the bootstrap file. The server and client call this run function. The router tells me the components that are required for the specific URL.

For data handling, I copied an approach from the react-router async data example. Each component can define a fetchData static method, and you can see also in the bootstrap file a method to run through all the required components and gather all the data from these methods. It attaches the fetched data as a property to each component.
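Roughly, the pattern looks like this (a simplified sketch: getPost is a made-up API method, and I'm using promises here where the blog actually uses js-csp channels):

let api = require('impl/api');

let Post = React.createClass({
  statics: {
    // Run by the bootstrap code for every component matched by the router.
    fetchData: function(params) {
      return api.getPost(params.id); // made-up method on the api module
    }
  },
  render: function() {
    // The gathered data ends up attached to the component as a prop.
    return dom.div(null, this.props.data ? this.props.data.title : '');
  }
});

// The gathering step: collect data from every matched component's fetchData.
function fetchAllData(components, params) {
  return Promise.all(components.map(function(c) {
    return c.fetchData ? c.fetchData(params) : null;
  }));
}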

This is simplistic. More complex apps use an architecture like Flux. I'm not entirely happy with the fetchData approach, but it works alright for small apps like a blog. The point here is that you have the infrastructure to do this without a whole lot of work.

Ditching Client-Side Page Transitions

With this setup, instead of refreshing the entire page whenever you click a link, it can just fetch any new data it needs and only update parts of the page that need to be changed. react-router especially helps with this, as it takes care of all of the pushState work to make it feel like the page actually changed. This makes the site pretty snappy.

Although it feels a little weird to do that for a blog, I had it working at one point. The page never refreshed; it only fetched data over XHR and updated the page contents. In fact, I enabled that mode on the demo site, test.jlongster.com, so you can play with it there.

I ended up disabling it though. The main reason is that many of my demos mutate the DOM directly, so you couldn't reliably enter and leave a post page, as there would be side effects. In general, I realized that it was just too much work for a simple blog. I'm really glad I learned how to set this up, but rendering everything on the server is nice and simple.

It turns out that writing React server apps is completely awesome. I didn't expect to end up here, but think about it, I'm writing in React but my whole site acts as if it were a site from the 90s where a request is made, data is fetched, and HTML is rendered. Rendering transitions on the client without refreshing the page is just an optimization.

There is still a React piece on the client which "renders" each page, but all it is doing is hooking up all the event handlers.

Implementation Notes

Here's a few more details about how everything works.

Folder Structure

The src folder is the core of the app and everything in there can be rendered on the server or the client. The server folder holds the express server and the API implementation, and the static/js folder holds the client-side bootstrapping code.

Both sides pull in the src directory with relative imports, like require('../src/routes'). The components within src each fetch the data they need, but this needs to work on the client and the server. My blog runs everything only on the server now, but I'm discussing apps that support client-side rendering too.

The problem is that components in src need to pull in different modules if they are on the server or the client. If they are on the server, they can call API methods directly, but on the client they need to use XHR. I solve this by creating an implementation folder impl on the server and the client, with the same modules that implement the same APIs. Components can require impl/api.js and they will load the right API implementation, as seen here.

In node, this require works because I symlink server/impl as impl in my node_modules folder. On the client, I configure webpack to resolve the impl folder to the client-side implementation. All of the database methods are implemented in the server-side api.js, and the same API is implemented on the client-side api.js but it calls the back-end API over XHR.
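The webpack half of that is just a resolve alias; a rough sketch (the paths here are assumptions, not the repo's exact config):

// webpack.config.js (sketch)
var path = require('path');

module.exports = {
  entry: './static/js/bootstrap.js',
  output: {
    path: path.join(__dirname, 'static/js'),
    filename: 'bundle.js'
  },
  resolve: {
    alias: {
      // require('impl/api') resolves here on the client; on the server the
      // node_modules symlink points the same path at server/impl instead.
      impl: path.join(__dirname, 'static/js/impl')
    }
  }
};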

I tried to munge NODE_PATH at first, but I found the above setup rather elegant.

Large Static HTML Chunks

There are a couple places on my blog where the content is simply a large static chunk of HTML like the projects section. I don't use JSX, and I didn't really feel like wrapping them up in components anyway. I simply dump this content in the static folder and created server and client-side implementations of a statics.js module that loads in this content. To render it, I just tell React to load it as raw HTML.
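Concretely, that is React's dangerouslySetInnerHTML; something like the following (the statics module matches the description above, but the projectsHtml property name is made up):

let statics = require('impl/statics'); // server reads the file, client fetches it

let Projects = React.createClass({
  render: function() {
    // Drop the big static chunk in as raw HTML instead of React elements.
    return dom.div({
      dangerouslySetInnerHTML: { __html: statics.projectsHtml }
    });
  }
});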

Gulp & Webpack

I use 6to5 to write ES6 code and compile it to ES5. I set up a gulp workflow to build everything on the server-side, run the app and restart it on changes. For the client, I use webpack to bundle everything together into a single js file (mostly, I use code splitting to separate out a few modules into other files). Both run 6to5 on all the code.
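A stripped-down version of that gulpfile might look like this (plugin and path names are assumptions; gulp-6to5 was the 6to5 plugin of the time, later renamed gulp-babel):

var gulp = require('gulp');
var to5 = require('gulp-6to5'); // assumed plugin name

// Compile the shared src/ and server/ code to ES5 for node.
gulp.task('build', function() {
  return gulp.src(['src/**/*.js', 'server/**/*.js'], { base: '.' })
    .pipe(to5())
    .pipe(gulp.dest('build'));
});

// Rebuild on changes; restarting the app is left out of this sketch.
gulp.task('watch', ['build'], function() {
  gulp.watch(['src/**/*.js', 'server/**/*.js'], ['build']);
});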

I like this setup, but it does feel like there is duplicate work going on. It'd be nice to somehow use webpack for node modules too, and only have a single build process.

Ansible/Docker

In addition to all of this, I completely rebuilt my server and now use ansible and docker. Both are amazing; I can use ansible to bootstrap a new machine and then docker to run any number of apps on it. This deserves its own post.

I told you I over-engineered this right?!

Todo

My new blog was an exercise in how to write React apps that blend the server/client distinction. As it's my first app of this type, it's quite terrible in some ways. There are a lot of things I could clean up, so don't focus on the details.

I think the overall structure is pretty sound, however. A few things I want to improve:

  • Testing. Right now I only test the server-side API. I'd like to learn slimerjs and how to integrate it with mocha.
  • Data dependencies. The fetchData method on components was a good starting point, but I think it's a little awkward and it would probably be good to have very basic Flux-style stores instead.
  • Async. I also used this as an excuse to try js-csp on a real project, and it was quite wonderful. But I also saw some glaring sore spots and I'm going to fix them.
  • Cleanup. Many of the utility functions and a few other things are still from my old code, and are pretty ugly.

I hope you learned something. I know I had fun.

Categorieën: Mozilla-nl planet

Alex Gibson: How to help find a regression range in Firefox Nightly

Mozilla planet - do, 15/01/2015 - 01:00

I recently spotted a visual glitch in a CSS animation that was only happening in Firefox Nightly. I was pretty confident the animation played fine just a couple of weeks ago, so after some debugging and ruling out any obvious wrong-doing in the code, I was pretty confident that a recent change in Firefox must have somehow caused a regression. Not knowing quite what else to do, I decided to file a bug to see if anyone else could figure out what was going wrong.

After some initial discussion it turned out the animation was only broken in Firefox on OSX, so definitely a bug! It could have been caused by any number of code changes in the previous few weeks and could not be reproduced on other platforms. So how could I go about helping to find the cause of the regression?

It was then that someone pointed me to a tool I hadn't heard of before, called mozregression. It's an interactive regression range finder for Mozilla nightly and inbound builds. Once installed, all you need to do is pass in a last known "good date" together with a known "bad date" and a URL to test. The tool then automates downloading and running different nightly builds against the affected URL.

mozregression --good=2014-10-01 --bad=2014-10-02 -a "https://example.com"

After each run, mozregression asks you if the build is "good" or "bad" and then continues to narrow down the regression range until it finds when the bug was introduced. The process takes a while to run, but in the end it then spits out a pushlog like this.

This helped to narrow down the cause of the regression considerably, and together with a reduced test case we were then able to work out which commit was the cause.

The resulting patch also turned out to fix another bug that was affecting Leaflet.js maps in Firefox. Result!

Categorieën: Mozilla-nl planet

Niko Matsakis: Little Orphan Impls

Mozilla planet - wo, 14/01/2015 - 20:03

We’ve recently been doing a lot of work on Rust’s orphan rules, which are an important part of our system for guaranteeing trait coherence. The idea of trait coherence is that, given a trait and some set of types for its type parameters, there should be exactly one impl that applies. So if we think of the trait Show, we want to guarantee that if we have a trait reference like MyType : Show, we can uniquely identify a particular impl. (The alternative to coherence is to have some way for users to identify which impls are in scope at any time. It has its own complications; if you’re curious for more background on why we use coherence, you might find this rust-dev thread from a while back to be interesting reading.)

The role of the orphan rules in particular is basically to prevent you from implementing external traits for external types. So continuing our simple example of Show, if you are defining your own library, you could not implement Show for Vec<T>, because both Show and Vec are defined in the standard library. But you can implement Show for MyType, because you defined MyType. However, if you define your own trait MyTrait, then you can implement MyTrait for any type you like, including external types like Vec<T>. To this end, the orphan rule intuitively says “either the trait must be local or the self-type must be local”.

More precisely, the orphan rules are targeting the case of two “cousin” crates. By cousins I mean that the crates share a common ancestor (i.e., they link to a common library crate). This would be libstd, if nothing else. That ancestor defines some trait. Both of the crates are implementing this common trait using their own local types (and possibly types from ancestor crates, which may or may not be in common). But neither crate is an ancestor of the other: if they were, the problem is much easier, because the descendant crate can see the impls from the ancestor crate.

When we extended the trait system to support multidispatch, I confess that I originally didn’t give the orphan rules much thought. It seemed like it would be straightforward to adapt them. Boy was I wrong! (And, I think, our original rules were kind of unsound to begin with.)

The purpose of this post is to lay out the current state of my thinking on these rules. It sketches out a number of variations and possible rules and tries to elaborate on the limitations of each one. It is intended to serve as the seed for a discussion in the Rust discussion forums.

The first, totally wrong, attempt

The first attempt at the orphan rules was just to say that an impl is legal if a local type appears somewhere. So, for example, suppose that I define a type MyBigInt and I want to make it addable to integers:

impl Add<i32> for MyBigInt { ... }
impl Add<MyBigInt> for i32 { ... }

Under these rules, these two impls are perfectly legal, because MyBigInt is local to the current crate. However, the rules also permit an impl like this one:

impl<T> Add<T> for MyBigInt { ... }

Now the problems arise because those same rules also permit an impl like this one (in another crate):

impl<T> Add<YourBigInt> for T { ... }

Now we have a problem because both impls are applicable to Add<YourBigInt> for MyBigInt.

In fact, we don’t need multidispatch to have this problem. The same situation can arise with Show and tuples:

impl<T> Show for (T, MyBigInt) { ... }    // Crate A
impl<T> Show for (YourBigInt, T) { ... }  // Crate B

(In fact, multidispatch is really nothing more than a compiler-supported version of implementing a trait for a tuple.)

The root of the problem here lies in our definition of “local”, which completely ignored type parameters. Because type parameters can be instantiated to arbitrary types, they are obviously special, and must be considered carefully.

The ordered rule

This problem was first brought to our attention by arielb1, who filed Issue 19470. To resolve it, he proposed a rule that I will call the ordered rule. The ordered rule goes like this:

  1. Write out all the type parameters to the trait, starting with Self.
  2. The name of some local struct or enum must appear on that line before the first type parameter.
    • More formally: When visiting the types in pre-order, a local type must be visited before any type parameter.

In terms of the examples I gave above, this rule permits the following impls:

impl Add<i32> for MyBigInt { ... }
impl Add<MyBigInt> for i32 { ... }
impl<T> Add<T> for MyBigInt { ... }

However, it avoids the quandry we saw before because it rejects this impl:

impl<T> Add<YourBigInt> for T { ... }

This is because, if we wrote out the type parameters in a list, we would get:

T, YourBigInt

and, as you can see, T comes first.

This rule is actually pretty good. It meets most of the requirements I’m going to unearth. But it has some problems. The first is that it feels strange; it feels like you should be able to reorder the type parameters on a trait without breaking everything (we will see that this is not, in fact, obviously true, but it was certainly my first reaction).

Another problem is that the rule is kind of fragile. It can easily reject impls that don’t seem particularly different from impls that it accepts. For example, consider the case of the Modifier trait that is used in hyper and iron. As you can see in this issue, iron wants to be able to define a Modifier impl like the following:

struct Response;
...
impl Modifier<Response> for Vec<u8> { .. }

This impl is accepted by the ordered rule (there are no type parameters at all, in fact). However, the following impl, which seems very similar and equally likely (in the abstract), would not be accepted:

struct Response;
...
impl<T> Modifier<Response> for Vec<T> { .. }

This is because the type parameter T appears before the local type (Response). Hmm. It doesn’t really matter if T appears in the local type, either; the following would also be rejected:

struct MyHeader<T> { .. }
...
impl<T> Modifier<MyHeader<T>> for Vec<T> { .. }

Another trait that couldn’t be handled properly is the BorrowFrom trait in the standard library. There are a number of impls like this one:

impl<T> BorrowFrom<Rc<T>> for T

This impl fails the ordered check because T comes first. We can make it pass by switching the order of the parameters, so that the BorrowFrom trait becomes Borrow.

A final “near-miss” occurred in the standard library with the Cow type. Here is an impl from libcollections of FromIterator for a copy-on-write vector:

impl<'a, T> FromIterator<T> for Cow<'a, Vec<T>, [T]>

Note that Vec is a local type here. This impl obeys the ordered rule, but somewhat by accident. If the type parameters of the Cow trait were in a different order, it would not, because then [T] would precede Vec<T>.

The covered rule

In response to these shortcomings, I proposed an alternative rule that I’ll call the covered rule. The idea of the covered rule was to say that (1) the impl must have a local type somewhere and (2) a type parameter can only appear in the impl if the type parameter is covered by a local type. Covered means that it appears “inside” the type: so T is covered by MyVec in the type MyVec<T> or MyBox<Box<T>>, but not in (T, MyVec<int>). This rule has the advantage of having nothing to do with ordering and it has a certain intuition to it; any type parameters that appear in your impls have to be tied to something local.

This rule turns out to give us the required orphan rule guarantees. To see why, consider this example:

impl<T> Foo<T> for A<T>    // Crate A
impl<U> Foo<B<U>> for U    // Crate B

If you tried to make these two impls apply to the same type, you would wind up with infinite types. After all, T = B<U>, but U = A<T>, and hence you get T = B<A<T>>.

Unlike the previous rule, this rule happily accepts the BorrowFrom trait impls:

impl<T> BorrowFrom<Rc<T>> for T

The reason is that the type parameter T here is covered by the (local) type Rc.

However, after implementing this rule, we found out that it actually prohibits a lot of other useful patterns. The most important of them is the so-called auxiliary pattern, in which a trait takes a type parameter that is a kind of “configuration” and is basically orthogonal to the types that the trait is implemented for. An example is the Hash trait:

impl<H> Hash<H> for MyStruct

The type H here represents the hashing function that is being used. As you can imagine, for most types, they will work with any hashing function. Sadly, this impl is rejected, because H is not covered by any local type. You could make it work by adding a parameter H to MyStruct:

impl<H> Hash<H> for MyStruct<H>

But that is very weird, because now when we create our struct we are also deciding which hash functions can be used with it. You can also make it work by moving the hash function parameter H to the hash method itself, but then that is limiting. It makes the Hash trait not object safe, for one thing, and it also prohibits us from writing types that are specialized to particular hash functions.

Another similar example is indexing. Many people want to make types indexable by any integer-like thing, for example:

impl<I: Int, T> Index<I> for Vec<T> {
    type Output = T;
}

Here the type parameter I is also uncovered.

Ordered vs Covered

By now I’ve probably lost you in the ins and outs, so let’s see a summary. Here’s a table of all the examples I’ve covered so far. I’ve tweaked the names so that, in all cases, any type that begins with My is considered local to the current crate:

+----------------------------------------------------------+---+---+
| Impl Header                                              | O | C |
+----------------------------------------------------------+---+---+
| impl Add<i32> for MyBigInt                               | X | X |
| impl Add<MyBigInt> for i32                               | X | X |
| impl<T> Add<T> for MyBigInt                              | X |   |
| impl<U> Add<MyBigInt> for U                              |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                     | X | X |
| impl<T> Modifier<MyType> for Vec<T>                      |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]>   | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>>   |   | X |
| impl<T> BorrowFrom<Rc<T>> for T                          |   | X |
| impl<T> Borrow<T> for Rc<T>                              | X | X |
| impl<H> Hash<H> for MyStruct                             | X |   |
| impl<I:Int,T> Index<I> for MyVec<T>                      | X |   |
+----------------------------------------------------------+---+---+

As you can see, both rules have their advantages. However, the ordered rule comes out somewhat ahead. In particular, the places where it fails can often be worked around by reordering parameters, but there is no workaround that lets the covered rule handle the Hash example (and a number of other traits in the standard library fit that pattern).

Hybrid approach #1: Covered self

You might be wondering – if neither rule is perfect, is there a way to combine them? In fact, the rule that is currently implemented is such a hybrid. It imposes the covered rule, but only on the Self parameter. That means that there must be a local type somewhere in Self, and any type parameters appearing in Self must be covered by a local type. Let’s call this hybrid CS, for “covered applied to Self”.

+--------------------------------------------------------+---+---+---+
| Impl Header                                            | O | C | S |
+--------------------------------------------------------+---+---+---+
| impl Add<i32> for MyBigInt                             | X | X | X |
| impl Add<MyBigInt> for i32                             | X | X |   |
| impl<T> Add<T> for MyBigInt                            | X |   | X |
| impl<U> Add<MyBigInt> for U                            |   |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                   | X | X |   |
| impl<T> Modifier<MyType> for Vec<T>                    |   |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]> | X | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>> |   | X | X |
| impl<T> BorrowFrom<Rc<T>> for T                        |   | X |   |
| impl<T> Borrow<T> for Rc<T>                            | X | X | X |
| impl<H> Hash<H> for MyStruct                           | X |   | X |
| impl<I:Int,T> Index<I> for MyVec<T>                    | X |   | X |
+--------------------------------------------------------+---+---+---+
O - Ordered / C - Covered / S - Covered Self

As you can see, the CS hybrid turns out to miss some important cases that the pure ordered rule achieves. Notably, it prohibits:

  • impl Add<MyBigInt> for i32
  • impl Modifier<MyType> for Vec<u8>

This is not really good enough.

Hybrid approach #2: Covered First

We can improve on the covered-self approach by saying that some type parameter of the trait must meet the covering rules (it contains a local type, and any impl type parameters within it are covered by a local type), but that parameter need not be Self. Any type parameters which precede this covered parameter must consist exclusively of remote types (in particular, no impl type parameters).
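
For instance, taking two headers that also appear in the table below:

// Accepted by "Covered First": everything before MyBigInt (just Self, i32)
// consists only of remote types, and MyBigInt is a covered local type.
impl Add<MyBigInt> for i32 { ... }

// Rejected: the impl type parameter U precedes the first covered local
// parameter, so nothing local constrains it.
impl<U> Add<MyBigInt> for U { ... }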

+--------------------------------------------------------+---+---+---+---+
| Impl Header                                            | O | C | S | F |
+--------------------------------------------------------+---+---+---+---+
| impl Add<i32> for MyBigInt                             | X | X | X | X |
| impl Add<MyBigInt> for i32                             | X | X |   | X |
| impl<T> Add<T> for MyBigInt                            | X |   | X | X |
| impl<U> Add<MyBigInt> for U                            |   |   |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                   | X | X |   | X |
| impl<T> Modifier<MyType> for Vec<T>                    |   |   |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]> | X | X | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>> |   | X | X | X |
| impl<T> BorrowFrom<Rc<T>> for T                        |   | X |   |   |
| impl<T> Borrow<T> for Rc<T>                            | X | X | X | X |
| impl<H> Hash<H> for MyStruct                           | X |   | X | X |
| impl<I:Int,T> Index<I> for MyVec<T>                    | X |   | X | X |
+--------------------------------------------------------+---+---+---+---+
O - Ordered / C - Covered / S - Covered Self / F - Covered First

As you can see, this is a strict improvement over the other approaches. The only thing it can’t handle that the other rules can is the BorrowFrom impl.

An alternative approach: distinguishing “self-like” vs “auxiliary” parameters

One disappointment about the hybrid rules I have presented thus far is that they are inherently ordered. That runs somewhat against my intuition, which is that the order of the trait type parameters shouldn’t matter that much. In particular it feels that, for a commutative trait like Add, the roles of the left-hand-side type (Self) and the right-hand-side type should be interchangeable (below, I will argue that in fact some kind of order may well be essential to the notion of coherence as a whole, but for now let’s assume we want Add to treat the left- and right-hand sides as equivalent).

However, there are definitely other traits where the parameters are not equivalent. Consider the Hash trait example we saw before. In the case of Hash, the type parameter H refers to the hashing algorithm and thus is inherently not going to be covered by the type of the value being hashed. It is in some sense completely orthogonal to the Self type. For this reason, we’d like to define impls that apply to any hasher, like this one:

impl<H> Hash<H> for MyType { ... }

The problem is, if we permit this impl, then we can’t allow another crate to define an impl with the same parameters, but in a different order:

impl<H> Hash<MyType> for H { ... }

One way to permit the first impl and not the second without invoking ordering is to classify type parameters as self-like and auxiliary.

The orphan rule would then require that at least one self-like parameter references a local type and that all impl type parameters appearing in self-like parameters are covered. The Self type is always self-like, but other parameters would be auxiliary unless declared to be self-like (or perhaps the default would be the opposite).

Here is a table showing how this new “explicit” rule would work, presuming that the type parameters on Add and Modifier were declared as self-like. The Hash and Index parameters would be declared as auxiliary.

+--------------------------------------------------------+---+---+---+---+---+
| Impl Header                                            | O | C | S | F | E |
+--------------------------------------------------------+---+---+---+---+---+
| impl Add<i32> for MyBigInt                             | X | X | X | X | X |
| impl Add<MyBigInt> for i32                             | X | X |   | X | X |
| impl<T> Add<T> for MyBigInt                            | X |   | X | X |   |
| impl<U> Add<MyBigInt> for U                            |   |   |   |   |   |
| impl<T> Modifier<MyType> for Vec<u8>                   | X | X |   | X | X |
| impl<T> Modifier<MyType> for Vec<T>                    |   |   |   |   |   |
| impl<'a, T> FromIterator<T> for Cow<'a, MyVec<T>, [T]> | X | X | X | X | X |
| impl<'a, T> FromIterator<T> for Cow<'a, [T], MyVec<T>> |   | X | X | X | X |
| impl<T> BorrowFrom<Rc<T>> for T                        |   | X |   |   | X |
| impl<T> Borrow<T> for Rc<T>                            | X | X | X | X | X |
| impl<H> Hash<H> for MyStruct                           | X |   | X | X | X |
| impl<I:Int,T> Index<I> for MyVec<T>                    | X |   | X | X | X |
+--------------------------------------------------------+---+---+---+---+---+
O - Ordered / C - Covered / S - Covered Self / F - Covered First
E - Explicit Declarations

You can see that it’s quite expressive, though it is very restrictive about generic impls for Add. However, it would push quite a bit of complexity onto users, because when you create a trait you must now classify each of its type parameters as self-like or auxiliary.
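
To give a feel for what that might look like, here is a purely hypothetical sketch; no such syntax exists, and the attribute name is invented just to illustrate the idea:

// Hypothetical syntax: the trait author marks H as auxiliary up front.
trait Hash<#[auxiliary] H> {
    fn hash(&self, state: &mut H);
}

// With H declared auxiliary, this impl would be accepted even though H is
// not covered by a local type:
impl<H> Hash<H> for MyStruct {
    fn hash(&self, state: &mut H) { ... }
}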

In defense of ordering

Whereas at first I felt that having the rules take ordering into account was unnatural, I have come to feel that ordering is, to some extent, inherent in coherence. To see what I mean, let’s consider an example of a new vector type, MyVec<T>. It might be reasonable to permit MyVec<T> to be added to anything that can be converted into an iterator over T elements. Naturally, since we’re overloading +, we’d prefer for it to be commutative:

impl<T,I> Add<I> for MyVec<T>
    where I : IntoIterator<Output=T> {
    type Output = MyVec<T>; ...
}

impl<T,I> Add<MyVec<T>> for I
    where I : IntoIterator<Output=T> {
    type Output = MyVec<T>; ...
}

Now, given that MyVec<T> is a vector, it should be iterable as well:

impl<T> IntoIterator for MyVec<T> {
    type Output = T;
    ...
}

The problem is that these three impls are inherently overlapping. After all, if I try to add two MyVec instances, which impl do I get?

Now, this isn’t a problem for any of the rules I proposed in this post, because all of them reject that pair of impls. In fact, both the “Covered” and “Explicit Declarations” rules go further: they reject both Add impls. This is because the type parameter I is uncovered; since those rules don’t consider ordering, they can’t allow an uncovered iterator type I on either the left- or the right-hand side.

The other variations (“Ordered”, “Covered Self”, and “Covered First”), on the other hand, allow only one of those impls: the one where MyVec<T> appears on the left. This seems pretty reasonable. After all, if we allow you to define an overloaded + that applies to an open-ended set of types (those that are iterable), there is the possibility that others will do the same. And if I try to add a MyVec<int> and a YourVec<int>, both of which are iterable, who wins? The ordered rules give a clear answer: the left-hand-side wins.
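
Concretely, here is a sketch, assuming YourVec is another crate’s iterable vector type with the mirror-image blanket impl:

// Crate A: impl<T, I: IntoIterator<Output=T>> Add<I> for MyVec<T>
// Crate B: impl<T, I: IntoIterator<Output=T>> Add<I> for YourVec<T>

let mine: MyVec<i32> = ...;
let yours: YourVec<i32> = ...;

let a = mine + yours;   // MyVec is on the left, so crate A's impl applies
let b = yours + mine;   // YourVec is on the left, so crate B's impl applies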

There are other blanket cases that also get prohibited which might on their face seem to be reasonable. For example, if I have a BigInt type, the ordered rules allow me to write impls that permit BigInt to be added to any concrete int type, no matter which side that concrete type appears on:

impl Add<BigInt> for i8  { type Output = BigInt; ... }
impl Add<i8> for BigInt  { type Output = BigInt; ... }
...
impl Add<BigInt> for i64 { type Output = BigInt; ... }
impl Add<i64> for BigInt { type Output = BigInt; ... }

It might be nicer if I could just write the following two impls:

impl<R:Int> Add<BigInt> for R { type Output = BigInt; ... }
impl<L:Int> Add<L> for BigInt { type Output = BigInt; ... }

Now, this makes some measure of sense, because Int is a trait that is only intended to be implemented for the primitive integer types. In principle all bigint types could use these same impls without conflict, so long as none of them implements Int. But in fact, nothing prevents them from implementing Int. Moreover, it’s not hard to imagine other crates creating comparable impls that would overlap with the ones above:

struct PrintedInt(i32);
impl Int for PrintedInt { ... }

impl<R:Show> Add<PrintedInt> for R { type Output = BigInt; ... }
impl<L:Show> Add<L> for PrintedInt { type Output = BigInt; ... }

Assuming that BigInt implements Show, we now have a problem!

In the future, it may be interesting to provide a way to use traits to create “strata” so that we can say things like “it’s ok to use an Int-bounded type parameter on the LHS so long as the RHS is bounded by Foo, which is incompatible with Int”, but it’s a subtle and tricky issue (as the Show example demonstrates).

So ordering basically means that when you define your traits, you should put the “principal” type as Self, and then order the other type parameters so that the more “principal” ones come earlier and the auxiliary, “configuration-like” ones come later.

The problem with ordering

Currently I lean towards the “Covered First” rule, but it bothers me that it allows something like

impl Modifier<MyType> for Vec<u8>

but not

impl<T> Modifier<MyType> for Vec<T>

However, this limitation seems to be pretty inherent to any rules that do not explicitly identify “auxiliary” type parameters. The reason is that the ordering variations all use the first occurrence of a local type as a “signal” that auxiliary type parameters should be permitted afterwards. This implies that another crate will be able to do something like:

impl<U> Modifier<U> for Vec<YourType>

In that case, both my rejected impl and this new one would apply to Modifier<MyType> for Vec<YourType>, so the two crates’ impls overlap.

Conclusion

This is a long post, and it covers a lot of ground. As I wrote in the introduction, the orphan rules turn out to be hiding quite a lot of complexity. Much more than I imagined at first. My goal here is mostly to lay out all the things that aturon and I have been talking about in a comprehensive way.

I feel like this all comes down to a key question: how do we identify the “auxiliary” input type parameters? Ordering-based rules identify this for each impl based on where the first “local” type appears. Coverage-based rules seem to require some sort of explicit declaration on the trait.

I am deeply concerned about asking people to understand this “auxiliary” vs “self-like” distinction when declaring a trait. On the other hand, there is no silver bullet: under ordering-based rules, they will sometimes be required to reorder their type parameters just to pacify the seemingly random ordering rule. (But I have the feeling that people intuitively put the most “primary” type first, as Self, and the auxiliary type parameters later.)

Categorieën: Mozilla-nl planet

Marco Zehe: Quickly check your website for common accessibility problems with tenon.io

Mozilla planet - wo, 14/01/2015 - 19:12

Tenon.io is a new tool to test web sites against some of the Web Content Accessibility Guidelines criteria. While this does not guarantee the usability of a web site, it gives you an idea of where you may have some problems. Due to its API, it can be integrated into workflows for test automation and other building steps for web projects.

However, sometimes you’ll just quickly want to check your web site and get an overview if something you did has the desired effect.

The Tenon team released a first version of a Chrome extension in December. But because there was no equivalent for Firefox, my ambition was piqued, and I set out to build my first ever Firefox extension.

And guess what? It does even a bit more than the Chrome one! In addition to a toolbar button, it gives Firefox users a context menu item on every page, so keyboard users and those using screen readers have equal access to the functionality. The extension grabs the URL of the currently open tab and submits it to Tenon. It opens a new tab where the Tenon page displays the results.

For the technically interested: I used the Node.js implementation of the Firefox Add-On SDK, called JPM, to build the extension. I was heavily inspired by this blog post published in December about building Firefox extensions the painless way. As I moved along, I wanted to try out io.js, but ran into issues in two modules, so while working on the extension, I contributed bug reports to both JPM and jszip. Did I ever mention that I love working in open source? ;)

So without further ado: here’s the Firefox extension! And if you like it, a positive review is certainly appreciated!

Have fun!

Categorieën: Mozilla-nl planet

Doug Belshaw: How we're building v1.5 of Mozilla's Web Literacy Map in Q1 2015

Mozilla planet - wo, 14/01/2015 - 17:24

The Web Literacy Map constitutes the skills and competencies required to read, write and participate on the web. It currently stands at version 1.1 and you can see a more graphical overview of the competency layer in the Webmaker resources section.

[Image: Minecraft building]

In Q1 2015 (January-March) we’ll be working with the community to update the Web Literacy Map to version 1.5. This is the result of a consultation process that initially aimed at a v2.0 but was re-scoped following community input. Find out more about the interviews, survey and calls that were part of that arc on the Mozilla wiki or in this tumblr post.

Some of what we’ll be discussing and working on has already been scoped out, while some will be emergent. We’ll definitely be focusing on the following:

  • Reviewing the existing skills and competencies (i.e. names/descriptors)
  • Linking to the Mozilla manifesto (where appropriate)
  • Deciding whether we want to include ‘levels’ in the map (e.g. Beginner / Intermediate / Advanced)
  • Exploring ways to iterate on the visual design of the competency layer

After asking the community when the best time for a call would be, we scheduled the first one for tomorrow (Thursday 15th January 2015, 4pm UTC). Join us! Details of the other calls can be found here.

In addition to these calls, we’ll almost certainly have 'half-hour hack’ sessions where we get stuff done. These might include re-writing skills/competencies and working on other things that need doing, rather than discussing. They will likely be on Mondays at the same time.

Questions? Comments? Tweet me or email me

Categorieën: Mozilla-nl planet

Soledad Penades: Introduction to Web Components

Mozilla planet - wo, 14/01/2015 - 17:13

I had the pleasure and honour to be the opening speaker for the first ever London Web Components meetup! Yay!

There was no video recording, but I remembered to record a screencast! It’s a bit messy and noisy, but if you couldn’t attend, this is better than nothing.

It also includes all the Q&A!

Some of the things people are worried about, which I think are interesting if you’re working on Web Components in any way:

  • How can I use them in production reliably?
  • What’s the best way to get started i.e. where do I start? do you migrate the whole thing with a new framework? or do you start little by little?
  • How would they affect SEO and accessibility? The best option is probably to extend existing elements where possible using the is="" idiom, so you can add to the existing functionality
  • How do Web Components interact with other libraries? e.g. jQuery or React. Why would one use Web Components instead of Angular directives for example?
  • And if we use jQuery with components aren’t we back to “square one”?
  • What are examples of web components in production we can look at? e.g. the famous GitHub time element custom element.
  • Putting the whole app in just one tag yes/no: verging towards the NO, makes people uneasy
  • How does the hyphen thing work? It’s there to prevent people from registering existing elements, and it also provides casual namespacing. It’s not perfect and won’t avoid clashes; one idea is to allow the registration to be delayed until the name of the element is provided, so you can register it the same way you can require() something in node without caring what the internal name of that module is.

Only one person in the audience was using Web Components in production (that would be Wilson with Firefox OS, tee hee!) and about 10 or so were using them to play around and experiment, and consistently using Polymer… except Firefox OS, which uses just vanilla JS.

Slides are here and here’s the source code.

I’m really glad that I convinced my awesome colleague Wilson Page to join us too, as he has loads of experience implementing Web Components in Firefox OS and so he could provide lots of interesting commentary. Hopefully he will speak at a future event!

Join the meet-up so you can be informed when there’s a new one happening!

flattr this!

Categorieën: Mozilla-nl planet

Pete Moore: Weekly review 2015-01-14

Mozilla planet - wo, 14/01/2015 - 16:30

I am still alive.

Or, as the great Mark Twain once said: "The reports of my death have been greatly exaggerated."

Highlights from this week

This week I have been learning Go! And it has been a joy. Mostly.

My code doodles: https://github.com/petemoore/go_tutorial/

This article got me curious about Erlang: http://blog.erlware.org/some-thoughts-on-go-and-erlang/

Other than that I have been playing with docker, installed on my Mac, and have set up a VMware environment and acquired Windows Server 2008 x64 for running Go on Windows.

Plans for next week

Start work on porting the taskcluster-client library to Go. See:

Other matters

  • VCS Sync issues this week for l10n gecko
  • Found an interesting Go conference to attend this year
Categorieën: Mozilla-nl planet

Daniel Stenberg: My talks at FOSDEM 2015

Mozilla planet - wo, 14/01/2015 - 15:48

fosdem

Saturday 13:30, embedded room (Lameere)

Title: Internet all the things – using curl in your device

Embedded devices are very often network connected these days. Network connected embedded devices often need to transfer data to and from them as clients, using one or more of the popular internet protocols.

libcurl is the world’s most used and most popular internet transfer library, already used in every imaginable sort of embedded device out there. How did this happen and how do you use libcurl to transfer data to or from your device?

Sunday, 09:00 Mozilla room (UD2.218A)

Title: HTTP/2 right now

HTTP/2 is the new version of the web’s most important and most used protocol. Version 2 is due to be out very soon after FOSDEM, and I want to inform the audience about what’s going on with the protocol, why it matters to most web developers and users, and not least what its status is at the time of FOSDEM.

Categorieën: Mozilla-nl planet

Henrik Skupin: Firefox Automation report – week 47/48 2014

Mozilla planet - wo, 14/01/2015 - 14:57

In this post you can find an overview about the work happened in the Firefox Automation team during week 47 and 48.

Highlights

Most of the work I did during those two weeks was related to getting Jenkins (http://jenkins-ci.org/) upgraded on our Mozmill CI systems to the most recent LTS version 1.580.1. This was a somewhat critical task given the huge number of issues mentioned in my last Firefox Automation report. On November 17th we were finally able to get all the code changes landed on our production machine after testing them for a couple of days on staging.

The upgrade was not that easy given that lots of code had to be touched, and the new LTS release still showed some weird behavior when connecting slave nodes via JNLP. As a result we had to stop using this connection method in favor of the plain java command. This change was actually not that bad, because the plain command is easier to automate and doesn’t bring up the connection warning dialog.

Surprisingly, the huge HTTP session usage reported by the Monitoring plugin was a problem introduced by that plugin itself. A simple upgrade to the latest plugin version solved the problem, so we no longer accumulate an additional HTTP session, never released, every time a slave node connects. At one point that leak had even caused a total freeze of the machine.

Another helpful improvement in Jenkins was the fix for a JUnit plugin bug which caused concurrent builds to hang until the previous build in the queue had finished. This added a large pile of waiting time to our Mozmill test jobs, which was very annoying for QA’s release testing work – especially for the update tests. Since the upgrade the problem is gone and we can process builds a lot faster.

Beside the upgrade work, I also noticed that one of the Jenkins plugins we use, the XShell plugin, failed to correctly kill the running application on the slave machine when a job gets aborted. As a result, subsequent tests fail on that machine until the process that was never killed has finished. I filed a Jenkins bug and did a temporary backout of the offending change in that plugin.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 47 and week 48.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 47 and week 48.

Categorieën: Mozilla-nl planet

Matjaž Horvat: Pontoon report 2014: Make your translations better

Mozilla planet - wo, 14/01/2015 - 11:03

This post is part of a series of blog posts outlining Pontoon development in 2014. I’ll mostly focus on new features targeting translators. If you’re more interested in developer oriented updates, please have a look at the release notes.

Part 1. User interface
Part 2. Backend
Part 3. Meet our top contributors
Part 4. Make your translations betteryou are here
Part 5. Demo project

Some new features have been added to Pontoon, some older tools have been improved, all helping translators be more efficient and make translations more consistent, more accurate and simply better.

History
The History tab displays previously suggested translations, including submissions from other users. Privileged translators can pick the approved translation or delete the ones they find inappropriate.

Machinery
The next tab provides automated suggestions from several sources: Pontoon translation memory, Transvision (Mozilla), amagama (open source projects), Microsoft Terminology and machine translation by Bing Translator. Using machinery will make your translations more consistent.

Quality checks
Pontoon reviews every submitted translation by running Translate Toolkit pofilter tests, which check for several issues that can affect the quality of your translations. These checks are locale-specific and can be turned off by the translator.

Placeables
Some pieces of strings are not supposed to be translated. Think HTML markup or variables for example. Pontoon colorizes those pieces (called placeables) and allows you to easily insert them into your translation by clicking on them.

Get involved
Are you a developer, interested in Pontoon? Learn how to get your hands dirty.

Categorieën: Mozilla-nl planet
