Planet Mozilla – the Dutch Mozilla community
Planet Mozilla - http://planet.mozilla.org/

Benoit Girard: Improving Layer Dump Visualization

8 hours 9 min ago

I’ve blogged before about adding a feature to visualize platform log dumps, including the layer tree. This week, while working on bug 1097941, I had no idea which module the bug was coming from. I used this opportunity to improve the layer visualization features, hoping it would help me identify the bug. Here are the results (working for both desktop and mobile):

Layer Tree Visualization Demo – Maximize me

This tool works by parsing the output of layers.dump and layers.dump-texture (not yet landed). I reconstruct the data as DOM nodes, which can quite trivially support the features of a layer tree, because layer trees are designed to be mapped from CSS. From there, some JavaScript or the browser devtools can be used to inspect the tree. In my case, all I had to do was locate which layer my bad texture data was coming from: 0xAC5F2C00. If you want to give it a spin, just copy this pastebin, paste it here and hit ‘Parse’. Note: I don’t intend to keep backwards compatibility with this format, so this pastebin may break after I go through the review for the new layers.dump-texture format.
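To make the mapping concrete, here is a minimal sketch of the idea in TypeScript – not the actual tool’s code; the LayerNode shape and the CSS properties chosen are assumptions for illustration:

// Hypothetical parsed form of one entry in a layers.dump tree.
interface LayerNode {
  address: string;   // e.g. "0xAC5F2C00"
  transform: string; // a CSS-compatible transform string
  visibleRegion: { x: number; y: number; width: number; height: number };
  children: LayerNode[];
}

// Build nested DOM nodes so that the element tree mirrors the layer tree
// and CSS reproduces each layer's geometry.
function layerToDom(layer: LayerNode): HTMLElement {
  const el = document.createElement("div");
  el.title = layer.address; // hover a node to identify the layer
  el.style.position = "absolute";
  el.style.left = layer.visibleRegion.x + "px";
  el.style.top = layer.visibleRegion.y + "px";
  el.style.width = layer.visibleRegion.width + "px";
  el.style.height = layer.visibleRegion.height + "px";
  el.style.transform = layer.transform;
  for (const child of layer.children) {
    el.appendChild(layerToDom(child));
  }
  return el;
}

Once the tree is DOM, the browser devtools give you inspection, search and live CSS tweaking for free, which is exactly the property this approach relies on.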



Christian Heilmann: What if everything is awesome?

Thu, 27/11/2014 - 23:54

this kid is awesome

These are the notes for my talk at Codemotion Madrid this year.
You can watch the screencast on YouTube and you can check out the slides at Slideshare.

An incredibly silly movie

The other day I watched Pacific Rim and was baffled by the awesomeness and the awesome inanity of this movie.

Let’s recap a bit:

  • There is a rift to another dimension under water that lets alien monsters escape into our world.
  • These monsters attack our cities and kill people. That is bad.
  • The most effective course of action is to build massive, erect walking robots on land to battle them.
  • These robots are controlled by pilots who walk inside them and throw punches to box these monsters.
  • These pilots all are super fit, ripped and beautiful and in general could probably take on these monsters in a fight with bare hands. The scientists helping them are helpless nerds.
  • We need to drop the robots with helicopters to where they are needed, because that looks awesome, too.

All in all, the movie is borderline insane: if we had a rift like that under water, all we’d need to do is mine it. Or have some massive ships and submarines sitting where the rift is, ready to shoot and bomb anything that comes through. Which, of course, beats trying to communicate with it.

The issue is that this solution would not make for a good blockbuster 3D movie aimed at 13 year olds. Nothing fights or breaks in a fantastic manner and you can’t show crumbling buildings. We’d be stuck with mundane tasks, like writing a coherent script, proper acting, or even manual camera work and sets instead of green screen. We can’t have that.

Tech press hype

What does all that have to do with web development? Well, I get the feeling we’ve arrived in a world where we try to be awesome for the sake of being awesome. And at the same time we seem to be missing the fact that what we have is pretty incredible as it is.

One thing I blame is the tech press. We still get weekly Cinderella stories of the lonely humble developer making it big with his first app (yes, HIS first app). We hear about companies buying each other for billions and everything being incredibly exciting.

Daily frustrations

In stark contrast to that, our daily lives as developers are full of frustrations. People don’t know what we do, for example, and thus don’t really give us feedback on it. We only appear on the scene when things break.

Our users can also be a bit of an annoyance as they do not upgrade the way we want them to and keep using things differently than we anticipated.

And even when we mess up, not much happens. We put our hearts and lots of effort into our work. When we see something obviously broken or in dire need of improvement, we want to fix it. The people above us in the hierarchy, however, are happy to see it as a glitch to fix later.

Flight into professionalism

Instead of working on the obviously broken communication between us and those who use our products, those who sell them, or even those who maintain them, I hear a louder and louder call for “professional development”. This involves many abstractions and intelligent package managers and build scripts that automate a lot of the annoying cruft of our craft. Cruft in the form of extraneous code – code that got there because of mistakes that our awesome new solutions make go away. But isn’t the real question why we still make so many mistakes in the first place?

Apps good, web bad!

One of the things we seem to be craving is to match the state of affairs of native platforms, especially the form factor of apps. Apps seem to be the new, modern form factor of software delivery. Fact is that they are a questionable success (and may even be a step back in software evolution, as I put it in a TEDx talk). If you look at who earns something with them and how long they are in use on average, it is hard to shout “hooray for apps”. On the web, there is the problem that there are so far no standards defining an app that work across platforms. If you wonder how that is coming along, the current state of mobile apps on the web describes this in meticulous detail at the W3C.

Generic code is great code?

A lot of what we crave as developers is generic. We don’t want to write code that does one job well; we want to write code that can take any input and do something intelligent with it. This is feel-good code. We are not only clever enough to be programmers; we also write solutions that prevent people from making mistakes by predicting them.

Fredrik Noren wrote a brilliant piece on this called “On Generalisation“. In it he argues that writing generic code means trying to predict the future, and that we are bad at that. He calls for simpler, more modular and documented code that people can extend, instead of catch-all solutions to simple problems.

I found myself nodding along while reading this. There seems to be a general tendency to re-invent instead of improving existing solutions. This comes naturally to developers – we want to create instead of read and understand. I also blame sites like Hacker News, which are a beauty pageant of small, quick and super intelligent technical solutions for every conceivable problem out there.

Want some proof? How about Static Site Generators listing 295 different ways to create static HTML pages? Let’s think about that: static HTML pages!

The web is obese!

We try to fix our world by stacking abstractions and creating generic solutions for small issues. The common development process, and especially the maintenance process, looks different, though.

People using Content Management Systems to upload lots of un-optimised photos are a problem. People using too many of our sleek and clever solutions also add to the fact that web performance is still a big issue. According to the HTTP Archive, the average web site is 2 MB of data delivered in 100(!) HTTP requests. And that’s years after we told people that each request is a massive cause of a slow and sluggish web experience. How can anyone explain things like the new LG G Watch site clocking in at 54 MB on the first load whilst being a responsive design?

Tools of awesome

There are no excuses. We have incredible tools that give us massive insight into our work. What we do is not a black art any longer; we don’t have to hope that browsers do good things with our code. We can peek under the hood and see the parts moving.

Webpagetest.org is incredible. It gives us detailed insight into what is going right and wrong on our web sites, right in the browser. You can test the performance of a site simulating different connection speeds and load it from servers all over the world. You get a page optimisation checklist, and graphs showing what got loaded when and when things started rendering. You even get a video of your page loading and becoming ready for users to play with.

There are many resources on how to use this tool and others that help us fix performance issues. Addy Osmani gave a great talk at CSS Conf in Berlin, re-writing the JSConf web site live on stage using many of these tools.

Browsers are incredible tools

Browsers have evolved from simple web consumption tools to full-on development environments. Almost all browsers have developer tools built in that not only allow you to see the code in the current page but also to debug it. You have step-by-step debugging of JavaScript, CSS debugging, and live previews of colours, animations, element dimensions, transforms and fonts. You have insight into what was loaded in what sequence, you can see what is in localStorage, and you can do performance analysis and see memory consumption.
The innovation in browsers’ development tools is incredible and moves at an amazing speed. You can now even debug on devices connected via USB or wirelessly, and Chrome allows you to simulate various devices and network/connectivity conditions.
Sooner or later this might mean that we won’t need any other editors any more. Any user downloading a browser could also become a developer. And that is incredible. But what about older browsers?

Polyfills as a service

A lot of bloat on the web happens because we try to give new, cool effects to old, tired browsers. We do this because of a wrong understanding of the web: it is not about giving the same functionality to everybody, but about giving a working experience to everybody.

The idea of a polyfill is genius: write a solution that lets an older environment play with new functionality, and get our UX ready for the time browsers support it. It fails to be genius when we never, ever remove the polyfills from our solutions. The Financial Times development team has now had a great idea: offer polyfills as a service. This means you include one JavaScript file.

<script src="//cdn.polyfill.io/v1/polyfill.min.js" async defer> </script>

You can define which functionality you want polyfilled, and when the browser already supports what you need, the stop-gap solution never gets included at all. How good is that?
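As a rough sketch of how requesting specific polyfills could look (the features parameter and the feature names here are assumptions based on the service’s documentation, not something stated in this post):

// Hypothetical: ask the polyfill service only for the features this page
// uses. A browser that already supports them gets back an (almost) empty file.
const polyfills = document.createElement("script");
polyfills.src = "//cdn.polyfill.io/v1/polyfill.min.js?features=Promise,fetch";
polyfills.async = true;
document.head.appendChild(polyfills);

The same query string works directly in the script tag shown above; the point is that the feature decision happens on the server, keyed off the requesting browser.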

Flexbox growing up

Another thing of awesome I saw the other day was at CSS-Tricks. Chris Coyier uses Flexbox to create a toolbar that has fixed elements and others that use up the rest of the space. It extends semantic HTML and does a great job of being responsive.

rwd-flexbox

All the CSS code needed for it is this:

*, *:before, *:after { -moz-box-sizing: inherit; box-sizing: inherit; }
html { -moz-box-sizing: border-box; box-sizing: border-box; }
body { padding: 20px; font: 100% sans-serif; }
.bar { display: -webkit-flex; display: -ms-flexbox; display: flex; -webkit-align-items: center; -ms-flex-align: center; align-items: center; width: 100%; background: #eee; padding: 20px; margin: 0 0 20px 0; }
.bar > * { margin: 0 10px; }
.icon { width: 30px; height: 30px; background: #ccc; border-radius: 50%; }
.search { -webkit-flex: 1; -ms-flex: 1; flex: 1; }
.search input { width: 100%; }
.bar-2 .username { -webkit-order: 2; -ms-flex-order: 2; order: 2; }
.bar-2 .icon-3 { -webkit-order: 3; -ms-flex-order: 3; order: 3; }
.bar-3 .search { -webkit-order: -1; -ms-flex-order: -1; order: -1; }
.bar-3 .username { -webkit-order: 1; -ms-flex-order: 1; order: 1; }
.no-flexbox .bar { display: table; border-spacing: 15px; padding: 0; }
.no-flexbox .bar > * { display: table-cell; vertical-align: middle; white-space: nowrap; }
.no-flexbox .username { width: 1px; }
@media (max-width: 650px) {
  .bar { -webkit-flex-wrap: wrap; -ms-flex-wrap: wrap; flex-wrap: wrap; }
  .icon { -webkit-order: 0 !important; -ms-flex-order: 0 !important; order: 0 !important; }
  .username { -webkit-order: 1 !important; -ms-flex-order: 1 !important; order: 1 !important; width: 100%; margin: 15px; }
  .search { -webkit-order: 2 !important; -ms-flex-order: 2 !important; order: 2 !important; width: 100%; }
}

That is pretty incredible, isn’t it?

More near-future tech of awesome

Other things that are brewing get me equally excited. WebRTC, WebGL, Web Audio and many more things point to a high-fidelity web. A web that allows for rich gaming experiences and productivity tools built right into the browser. We can video and audio chat with each other and send data in a peer-to-peer fashion without relying on, or burning up, a server between us.

Service Workers will allow us to build a real offline experience – something AppCache promised, but it too often left users with aggressively cached, outdated information instead. If you want to know more about that, watch these two amazing videos by Jake Archibald: The Service Worker: The Network layer that is yours to own and The Service worker is coming, look busy!
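For a taste of what opting in looks like, registering a Service Worker takes only a few lines (a minimal sketch using the standard API; /sw.js is a placeholder path):

// Feature-detect, then register a worker script that can intercept the
// page's network requests and answer from its own cache when offline.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js").then((registration) => {
    console.log("Service Worker registered with scope:", registration.scope);
  });
}

Browsers without support simply skip the block, which is progressive enhancement in action.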

Web Components have been the near future for quite a while now and seem to be in a bit of a “let’s build a framework instead” rut. Phil Legetter has done an incredible job collecting what that looks like. It is true: support for Shadow DOM across the board is still not quite there. But a lot of these frameworks offer incredible client-side functionality that could go into the standard.

What can you do?

I think it is time to stop chasing the awesome of “soon we will be able to use that” and instead be more fearless about using what we have now. We love to write about just how broken things are when they are in their infancy. We tend to forget to re-visit them when they’ve matured. Many things that were a fever dream a year ago are now ready for you to roll out – if you work with progressive enhancement. In general, this is a safe bet, as the web will never be in a finished state. Even native platforms are only in a fixed state between major releases. Mattias Petter Johansson of Spotify put it quite succinctly in a thread on why JavaScript is the only client-side language:

Hating JavaScript is like hating the Internet.
The Internet is a cobweb of different technologies cobbled together with duct tape, string and chewing gum. It’s not elegantly designed in any way, because it’s more of a growing organism than it is a machine constructed with intent.

The web is chaotic, that much is for sure, but it also aims to last longer than other platforms. The built-in backwards compatibility of its technologies makes it a beautiful investment. As Paul Bakaus of Google put it:

If you build a web app today, it will run in browsers 10 years from now. Good luck trying the same with your favorite mobile OS (excluding Firefox OS).

The other issue we have to overcome is the dogma associated with some of our decisions. Yes, it would be excellent if we could use open web standards to build everything. It would be great if all solutions had the same principles of distribution, readability and easy sharing. But we live in a world that has changed. In many ways, in the mobile space, we have to count our blessings. We can and should allow some closed technology to take its course before we go back to these principles. We’ve done it with Flash; we can do it with others, too. My mantra these days is the following:

If you enable people world-wide to get a good experience and solve a problem they have, I like it. The technology you use is not the important part. How much you lock them in is. Don’t lock people in.

Go share and teach

One thing is for sure: we have never had a more amazing environment to learn and share. Services like GitHub, JSFiddle, JSBin and Codepen make it easy to distribute and explain code. You can show instead of describe, and you can fix instead of telling people that they are doing it wrong. There is no better way to learn than to show, and if you set out to teach, you end up learning.

A great demo of this is together.js. Using this WebRTC-based tool (or its implementation in JSFiddle, via the collaborate button) you can code together, with several cursors, audio chat or a text chat client directly in the browser. You explain in context and collaborate live. And you help each other learn something and get better. And this is what is really awesome.
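Wiring a page up for this takes very little code; a minimal sketch (the global function and its usage follow the TogetherJS documentation of the time, but treat the details as assumptions):

// TogetherJS exposes one global function once its script has loaded;
// calling it toggles a collaboration session for the current page.
declare function TogetherJS(target?: unknown): void;

const startButton = document.querySelector("#collaborate");
startButton?.addEventListener("click", (event) => {
  TogetherJS(event.target); // starts (or stops) the shared session
});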


Allison Naaktgeboren: Applying Privacy Series: The 2nd meeting

Thu, 27/11/2014 - 06:04

The day after the first meeting…

Engineering Manager: Welcome DBA, Operations Engineer, and Privacy Officer. Did you all get a chance to look over the project wiki? What do you think?

Operations Engineer: I did.

DBA: Yup, and I have some questions.

Privacy Officer: Sounds really cool, as long as we’re careful.

Engineer: We’re always careful!

DBA: There are a lot of pages on the web. Keeping that much data is going to be expensive. I didn’t see anything on the wiki about evicting entries, and for a table that big, we’ll need to do that regularly.

Privacy Officer: Also, when will we delete the device ids? Those are like a fingerprint for someone’s phone, so keeping them around longer than absolutely necessary increases risk for both the user and the company.

Operations Engineer: The less we keep around, the less it costs to maintain.

Engineer: We know that most mobile users have only 1-3 pages open at any given time and we estimate no more than 50,000 users will be eligible for the service.

DBA: Well that does suggest a manageable load, but that doesn’t answer my question.

Engineer: Want to say if a page hasn’t been accessed in 48 hours we evict it from the server? And we can tune that knob as necessary?

Operations Engineer: As long as I can tune it in prod if something goes haywire.

Privacy Officer: And device ids?

Engineer: Apply the same rule to them?

Engineering Manager: 48 hours would be too short. Not everyone uses their mobile browser every day. I’d be more comfortable with 90 days to start.

DBA: I imagine you’d want secure destruction for the ids.

Privacy Officer: You got it!

DBA: What about the backup tapes? We back up the dbs regularly.

Privacy Officer: Are the backups online?

DBA: No, like I said, they’re on tape. Someone has to physically run ‘em through a machine. You’d need physical access to the backup storage facility.

Privacy Officer: Then it’s probably fine if we don’t delete from the tapes.

Operations Engineer: What is the current timeline?

Engineer: End of the quarter, 8 weeks or so.

Operations Engineer: We’re under water right now, so it might be tight getting the hardware in & set up. New hardware orders usually take 6 weeks to arrive. I can’t promise the hardware will be ready in time.

Engineering Manager: We understand, please do your best and if we have to, Product Manager won’t be happy, but we’ll delay the feature if we need to.

Privacy Officer: Who’s going to be responsible for the data on the stage & production servers?

Engineering Manager: Product Manager has final say.

DBA: Thanks, good to know!

Engineer: I’ll draw up a plan and send it around for feedback tomorrow.

 

Who brought up user data safety & privacy concerns in this conversation?

Privacy Officer is obvious. The DBA & Operations Engineer also raised privacy concerns.


Robert Helmer: Better Source Code Browsing With FreeBSD and Mozilla DXR

Thu, 27/11/2014 - 05:30

Lately I've been reading about the design and implementation of the FreeBSD Operating System (great book, you should read it).

However, I find browsing the source code quite painful. Using vim or emacs is fine for editing individual files, but when you are trying to understand and browse around a large codebase, dropping to a shell and grepping/finding around gets old fast. I know about ctags and similar, but I also find editors uncomfortable for browsing large codebases for an extended amount of time - web pages tend to be easier on the eyes.

There's an LXR fork called FXR available, which is way better and I am very grateful for it - however, it has all the same shortcomings of LXR that we've become very familiar with on the Mozilla LXR fork (MXR):

  • based on regex, not static analysis of the code - sometimes it gets things wrong, and it doesn't really understand the difference between a variable with the same name in different files
  • not particularly easy on the eyes (shallow and easily fixable, I know)

I've been an admirer of Mozilla's next gen code browsing tool, DXR, for a long time now. DXR uses a clang plugin to do static analysis of the code, so it produces the real call graph - this means it doesn't need to guess at the definition of types or where a variable is used, it knows.

A good example is to contrast a file on MXR with the same file on DXR. Let's say you wanted to know where this macro was first defined, that's easy in DXR - just click on the word "NS_WARNING" and select "Jump to definition".

Now try that on MXR - clicking on "NS_WARNING" instead yields a search which is not particularly helpful, since it shows every place in the codebase that the word "NS_WARNING" appears (note that DXR has the ability to do this same type of search, in case that's really what you're after).

So that's what DXR is and why it's useful. I got frustrated enough with the status quo trying to grok the FreeBSD sources that I took a few days and, with the help of folks in the #static channel on irc.mozilla.org (particularly Erik Rose), got DXR running on FreeBSD and indexed a tiny part of the source tree as a proof-of-concept (the source for "/bin/cat"):

http://freebsdxr.rhelmer.org

This is running on a FreeBSD instance in AWS.

DXR is currently undergoing major changes, the SQLite-to-ElasticSearch transition being the central one. I am tracking how to get the "es" branch of DXR going in this gist.

Currently I am able to get a LINT kernel build indexed on the DXR master branch, but I am still working through issues on the "es" branch.

Overall, I feel like I've learned way more about static analysis, how DXR works and the FreeBSD source code, produced some useful patches for Mozilla and the DXR project, and hopefully will have provided a useful resource for the FreeBSD project, all along the way. Totally worth it; I highly recommend working with all of the aforementioned :)


Brian R. Bondy: Developing and releasing the Khan Academy Firefox OS app

Thu, 27/11/2014 - 04:51

I'm happy to announce that the Khan Academy Firefox OS app is now available in the Firefox Marketplace!

Khan Academy’s mission is to provide a free world-class education for anyone anywhere. The goal of the Firefox OS app is to help with the “anyone anywhere” part of the KA mission.

Why?

There's something exciting about being able to hold a world-class education in your pocket for the cheap price of a Firefox OS phone. Firefox OS devices are mostly deployed in countries where the cost of an iPhone or Android-based smartphone is out of reach for most people.

The app enables developing countries, lower income families, and anyone else to take advantage of the Khan Academy content. A persistent internet connection is not required.

What's that.... you say you want another use case? Well OK, here goes: for a parent wanting each of their kids to have access to Khan Academy at the same time, the device costs could get very expensive. Not anymore.

Screenshots!

App features
  • Access to the full library of Khan Academy videos and articles.
  • Search for videos and articles.
  • Ability to sign into your account for:
      • Profile access.
      • Earning points for watching videos.
      • Continuing where you left off from previous partial video watches, even if that was on the live site.
      • Partial and full completion status of videos and articles.
  • Downloading videos, articles, or entire topics for later use.
  • Sharing functionality.
  • Significant effort was put in to minify topic tree sizes for minimal memory use and faster loading.
  • Scrolling transcripts for videos as you watch.
  • The UI is highly influenced by the first generation Firefox OS iPhone app.
Development statistics
  • 340 commits
  • 4 months of consecutive commits with at least 1 commit per day
  • 30 minutes - 2 hours per day max
Technologies used

Technologies used to develop the app include:

Localization

The app is fully localized for English, Portuguese, French, and Spanish, and will use those locales automatically depending on the system locale. The content (videos, articles, subtitles) that the app hosts will also automatically change.

I was lucky enough to have several amazing and kind translators for the app volunteer their time.

The translations are hosted and managed on Transifex.

Want to contribute?

The Khan Academy Firefox OS app source is hosted in one of my github repositories and periodically mirrored on the Khan Academy github page.

If you'd like to contribute, there are a lot of future tasks posted as issues on github.

Current minimum system requirements
  • Around 8 MB of space.
  • 512 MB of RAM
Low memory devices

By default, apps on the Firefox Marketplace are only served to devices with at least 500 MB of RAM. To get them onto 256 MB devices, you need to go through a low memory review.

One of the major enhancements I'd like to add next is an option to use the YouTube player instead of HTML5 video. This may use less memory, and may be a way onto 256 MB devices.

How about exercises?

They're coming in a future release.

Getting preinstalled on devices

It's possible to request to get the app pre-installed on devices, and I'll be looking into that in the near future, after getting some more initial feedback.

Projects like Matchstick also seem like a great opportunity for this app.


Hannah Kane: We are very engaging

Wed, 26/11/2014 - 23:12

Yesterday someone asked me what the engagement team is up to, and it made me sad because I realized I need to do a waaaaay better job of broadcasting my team’s work. This team is dope and you need to know about it.

As a refresher, our work encompasses these areas:

  • Grantwriting
  • Institutional partnerships
  • Marketing and communications
  • Small dollar fundraising
  • Production work (i.e. Studio Mofo)

In short, we aim to support the Webmaker product and programs and our leadership pipelines any time we need to engage individuals or institutions.

What’s currently on our plate:

Pro-tip: You can always see what we’re up to by checking out the Engagement Team Workbench.

These days we’re spending our time on the following:

  • End of Year Fundraising: With the help of a slew of kick-ass engineers, Andrea and Kelli are getting to $2M. (view the Workbench).
  • Mozilla Gear launch: Andrea and Geoffrey are obsessed with branded hoodies. To complement our fundraising efforts, they just opened a brand new site for people to purchase Mozilla Gear (view the project management spreadsheet).
  • Fall Campaign: Remember the 10K contributor goal? We do! An-Me and Paul have been working with Claw, Amira, Michelle, and Lainie, among others, to close the gap through a partner-based strategy (view the Workbench).
  • Mobile Opportunity: Ben is helping to envision and build partnerships around this work, and Paul and Studio Mofo are providing marketing, comms, and production support (the Mobile Opportunity Workbench is here, the engagement-specific work will be detailed soon).
  • Building a Webmaker Marketing Plan for 2015: The site and programs aren’t going to market themselves! Paul is drafting a comprehensive marketing calendar for 2015 that complements the product and program strategies. (plan coming soon)
  • 2015 Grants Pipeline: Ben and An-Me are always on the lookout for opportunities, and Lynn is responsible for writing grants and reports to fund our various programs and initiatives.
  • Additional Studio Mofo projects: Erika, Mavis, and Sabrina are always working on something. In addition to their work supporting most of the above, you can see a full list of projects here.
  • Salesforce for grants and partnerships: We’ve completed a custom Salesforce installation and Ben has begun the process of training staff to use it. Much more to come to make it a meaningful part of our workflow (Workbench coming soon).
  • Open Web Fellows recruitment: We’re supporting our newest fellowship with marketing support (view the Hype Plan)


Niko Matsakis: Purging proc

Wed, 26/11/2014 - 22:58

The so-called “unboxed closure” implementation in Rust has reached the point where it is time to start using it in the standard library. As a starting point, I have a pull request that removes proc from the language. I started on this because I thought it’d be easier than replacing closures, but it turns out that there are a few subtle points to this transition.

I am writing this blog post to explain what changes are in store and to give guidance on how people can port existing code to stop using proc. This post is basically targeted at Rust devs who want to adapt existing code, though it also covers the closure design in general.

To some extent, the advice in this post is a snapshot of the current Rust master. Some of it is specifically targeting temporary limitations in the compiler that we aim to lift by 1.0 or shortly thereafter. I have tried to mention when that is the case.

The new closure design in a nutshell

For those who haven’t been following, Rust is moving to a powerful new closure design (sometimes called unboxed closures). This part of the post covers the highlights of the new design. If you’re already familiar, you may wish to skip ahead to the “Transitioning away from proc” section.

The basic idea of the new design is to unify closures and traits. The first part of the design is that function calls become an overloadable operator. There are three possible traits that one can use to overload ():

trait Fn<A,R> { fn call(&self, args: A) -> R }
trait FnMut<A,R> { fn call_mut(&mut self, args: A) -> R }
trait FnOnce<A,R> { fn call_once(self, args: A) -> R }

As you can see, these traits differ only in their “self” parameter. In fact, they correspond directly to the three “modes” of Rust operation:

  • The Fn trait is analogous to a “shared reference” – it means that the closure can be aliased and called freely, but in turn the closure cannot mutate its environment.
  • The FnMut trait is analogous to a “mutable reference” – it means that the closure cannot be aliased, but in turn the closure is permitted to mutate its environment. This is how || closures work in the language today.
  • The FnOnce trait is analogous to “ownership” – it means that the closure can only be called once. This allows the closure to move out of its environment. This is how proc closures work today.
Enabling static dispatch

One downside of the older Rust closure design is that closures and procs always implied virtual dispatch. In the case of procs, there was also an implied allocation. By using traits, the newer design allows the user to choose between static and virtual dispatch. Generic types use static dispatch but require monomorphization, and object types use dynamic dispatch and hence avoid monomorphization and grant somewhat more flexibility.

As an example, whereas before I might write a function that takes a closure argument as follows:

fn foo(hashfn: |&String| -> uint) {
    let x = format!("Foo");
    let hash = hashfn(&x);
    ...
}

I can now choose to write that function in one of two ways. I can use a generic type parameter to avoid virtual dispatch:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{
    let x = format!("Foo");
    let hash = hashfn(&x);
    ...
}

Note that we write the type parameters to FnMut using parentheses syntax (FnMut(&String) -> uint). This is a convenient syntactic sugar that winds up mapping to a traditional trait reference (currently, for<'a> FnMut<(&'a String,), uint>). At the moment, though, you are required to use the parentheses form, because we wish to retain the liberty to change precisely how the Fn trait type parameters work.

A caller of foo() might write:

let some_salt: String = ...;
foo(|str| myhashfn(str.as_slice(), &some_salt))

You can see that the || expression still denotes a closure. In fact, the best way to think of it is that a || expression generates a fresh structure that has one field for each of the variables it touches. It is as if the user wrote:

let some_salt: String = ...;
let closure = ClosureEnvironment { some_salt: &some_salt };
foo(closure);

where ClosureEnvironment is a struct like the following:

struct ClosureEnvironment<'env> {
    some_salt: &'env String
}

impl<'env,'arg> FnMut(&'arg String) -> uint for ClosureEnvironment<'env> {
    fn call_mut(&mut self, (str,): (&'arg String,)) -> uint {
        myhashfn(str.as_slice(), &self.some_salt)
    }
}

Obviously the || form is quite a bit shorter.

Using object types to get virtual dispatch

The downside of using generic type parameters for closures is that you will get a distinct copy of the fn being called for every callsite. This is a great boon to inlining (at least sometimes), but it can also lead to a lot of code bloat. It’s also often just not practical: many times we want to combine different kinds of closures together into a single vector. None of these concerns are specific to closures. The same things arise when using traits in general. The nice thing about the new closure design is that it lets us use the same tool – object types – in both cases.

If I wanted to write my foo() function to avoid monomorphization, I might change it from:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{...}

to:

fn foo(hashfn: &mut FnMut(&String) -> uint)
{...}

Note that the argument is now a &mut FnMut(&String) -> uint, rather than being of some type F where F : FnMut(&String) -> uint.

One downside of changing the signature of foo() as I showed is that the caller has to change as well. Instead of writing:

foo(|str| ...)

the caller must now write:

foo(&mut |str| ...)

Therefore, what I expect to be a very common pattern is to have a “wrapper” that is generic which calls into a non-generic inner function:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{
    foo_obj(&mut hashfn)
}

fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
{...}

This way, the caller does not have to change, and only this outer wrapper is monomorphized. It will likely be inlined away, and the “guts” of the function remain using virtual dispatch.

In the future, I’d like to make it possible to pass object types (and other “unsized” types) by value, so that one could write a function that just takes a FnMut() and not a &mut FnMut():

fn foo(hashfn: FnMut(&String) -> uint)
{...}

Among other things, this makes it possible to transition simply between static and virtual dispatch without altering callers and without creating a wrapper fn. However, it would compile down to roughly the same thing as the wrapper fn in the end, though with guaranteed inlining. This change requires somewhat more design and will almost surely not occur by 1.0, however.

Specifying the closure type explicitly

We just said that every closure expression like || expr generates a fresh type that implements one of the three traits (Fn, FnMut, or FnOnce). But how does the compiler decide which of the three traits to use?

Currently, the compiler is able to do this inference based on the surrounding context – basically, the closure was an argument to a function, and that function requested a specific kind of closure, so the compiler assumes that’s the one you want. (In our example, the function foo() required an argument of type F where F implements FnMut.) In the future, I hope to improve the inference to a more general scheme.

Because the current inference scheme is limited, you will sometimes need to specify which of the three fn traits you want explicitly. (Some people also just prefer to do that.) The current syntax is to use a leading &:, &mut:, or :, kind of like an “anonymous parameter”:

// Explicitly create a `Fn` closure which cannot mutate its
// environment. Even though `foo()` requested `FnMut`, this closure
// can still be used, because a `Fn` closure is more general
// than `FnMut`.
foo(|&:| { ... })

// Explicitly create a `FnMut` closure. This is what the
// inference would select anyway.
foo(|&mut:| { ... })

// Explicitly create a `FnOnce` closure. This would yield an
// error, because `foo` requires a closure it can call multiple
// times in a row, but it is being given a closure that can be
// called exactly once.
foo(|:| { ... }) // (ERROR)

The main time you need to use an explicit fn type annotation is when there is no context. For example, if you were just to create a closure and assign it to a local variable, then a fn type annotation is required:

let c = |&mut:| { ... };

Caveat: It is still possible we’ll change the &:/&mut:/: syntax before 1.0; if we can improve inference enough, we might even get rid of it altogether.

Moving vs non-moving closures

There is one final aspect of closures that is worth covering. We gave the example of a closure |str| myhashfn(str.as_slice(), &some_salt) that expands to something like:

struct ClosureEnvironment<'env> {
    some_salt: &'env String
}

Note that the variable some_salt that is used from the surrounding environment is borrowed (that is, the struct stores a reference to the string, not the string itself). This is frequently what you want, because it means that the closure just references things from the enclosing stack frame. This also allows closures to modify local variables in place.

However, capturing upvars by reference has the downside that the closure is tied to the stack frame that created it. This is a problem if you would like to return the closure, or use it to spawn another thread, etc.

For this reason, closures can also take ownership of the things that they close over. This is indicated by using the move keyword before the closure itself (because the closure “moves” things out of the surrounding environment and into the closure). Hence if we change that same closure expression we saw before to use move:

move |str| myhashfn(str.as_slice(), &some_salt)

then it would generate a closure type where the some_salt variable is owned, rather than being a reference:

struct ClosureEnvironment {
    some_salt: String
}

This is the same behavior that proc has. Hence, whenever we replace a proc expression, we generally want a moving closure.

Currently we never infer whether a closure should be move or not. In the future, we may be able to infer the move keyword in some cases, but it will never be 100% (specifically, it should be possible to infer that the closure passed to spawn should always take ownership of its environment, since it must meet the 'static bound, which is not possible any other way).

Transitioning away from proc

This section covers what you need to do to modify code that was using proc so that it works once proc is removed.

Transitioning away from proc for library users

For users of the standard library, the transition away from proc is fairly straightforward. Mostly it means that code which used to write proc() { ... } to create a “procedure” should now use move|| { ... }, to create a “moving closure”. The idea of a moving closure is that it is a closure which takes ownership of the variables in its environment. (Eventually, we expect to be able to infer whether or not a closure must be moving in many, though not all, cases, but for now you must write it explicitly.)

Hence converting calls to libstd APIs is mostly a matter of search-and-replace:

Thread::spawn(proc() { ... })
// becomes:
Thread::spawn(move|| { ... })

task::try(proc() { ... })
// becomes:
task::try(move|| { ... })

One non-obvious case is when you are creating a “free-standing” proc:

let x = proc() { ... };

In that case, if you simply write move||, you will get some strange errors:

let x = move|| { ... };

The problem is that, as discussed before, the compiler needs context to determine what sort of closure you want (that is, Fn vs FnMut vs FnOnce). Therefore it is necessary to explicitly declare the sort of closure using the : syntax:

let x = proc() { ... };
// becomes:
let x = move|:| { ... };

Note also that it is precisely when there is no context that you must also specify the types of any parameters. Hence something like:

let x = proc(x:int) foo(x * 2, y);
//      ~~~~ ~~~~~
//       |     |
//       |     |
//       |     |
//       |     No context, specify type of parameters.
//       |
//       proc always owns variables it touches (e.g., `y`)

might become:

let x = move|: x:int| foo(x * 2, y);
//      ~~~~ ^ ~~~~~
//       |   |   |
//       |   |   No context, specify type of parameters.
//       |   |
//       |   No context, also specify FnOnce.
//       |
//       `move` keyword means that closure owns `y`

Transitioning away from proc for library authors

The transition story for a library author is somewhat more complicated. The complication is that the equivalent of a type like proc():Send ought to be Box<FnOnce() + Send> – that is, a boxed FnOnce object that is also sendable. However, we don’t currently have support for invoking fn(self) methods through an object, which means that if you have a Box<FnOnce()> object, you can’t call its call_once method (put another way, the FnOnce trait is not object safe). We plan to fix this – possibly by 1.0, but possibly shortly thereafter – but in the interim, there are workarounds you can use.

In the standard library, we use a trait called Invoke (and, for convenience, a type called Thunk). You’ll note that although these two types are publicly available (under std::thunk), they do not appear in the public interface of any other stable APIs. That is, Thunk and Invoke are essentially implementation details that end users do not have to know about. We recommend you follow the same practice. This is for two reasons:

  1. It generally makes for a better API. People would rather write Thread::spawn(move|| ...) and not Thread::spawn(Thunk::new(move|| ...)) (etc).
  2. Eventually, once Box<FnOnce()> works properly, Thunk and Invoke may become deprecated. If this were to happen, your public API would be unaffected.

Basically, the idea is to follow the “thin wrapper” pattern that I showed earlier for hiding virtual dispatch. If you recall, I gave the example of a function foo that wished to use virtual dispatch internally but to hide that fact from its clients. It did so by creating a thin wrapper API that just called into another API, performing the object coercion:

fn foo<F>(hashfn: F)
    where F : FnMut(&String) -> uint
{
    foo_obj(&mut hashfn)
}

fn foo_obj(hashfn: &mut FnMut(&String) -> uint)
{...}

The idea with Invoke is similar. The public APIs are generic APIs that accept any FnOnce value. These just turn around and wrap that value up into an object. Here the problem is that while we would probably prefer to use a Box<FnOnce()> object, we can’t, because FnOnce is not (currently) object-safe. Therefore, we use the trait Invoke (I’ll show you how Invoke is defined shortly, just let me finish this example):

pub fn spawn<F>(taskbody: F)
    where F : FnOnce(), F : Send
{
    spawn_inner(box taskbody)
}

fn spawn_inner(taskbody: Box<Invoke+Send>)
{
    ...
}

The Invoke trait in the standard library is defined as:

trait Invoke<A=(),R=()> {
    fn invoke(self: Box<Self>, arg: A) -> R;
}

This is basically the same as FnOnce, except that the self type is Box<Self>, and not Self. This means that Invoke requires allocation to use; it is really tailored for object types, unlike FnOnce.

Finally, we can provide a bridge impl for the Invoke trait as follows:

impl<A,R,F> Invoke<A,R> for F
    where F : FnOnce(A) -> R
{
    fn invoke(self: Box<F>, arg: A) -> R {
        let f = *self;
        f(arg)
    }
}

This impl allows any type that implements FnOnce to use the Invoke trait.

High-level summary

Here are the points I want you to take away from this post:

  1. As a library consumer, the latest changes mostly just mean replacing proc() with move|| (sometimes move|:| if there is no surrounding context).
  2. As a library author, your public interface should be generic with respect to one of the Fn traits. You can then convert to an object internally to use virtual dispatch.
  3. Because Box<FnOnce()> doesn’t currently work, library authors may want to use another trait internally, such as std::thunk::Invoke.

I also want to emphasize that a lot of the nitty gritty details in this post are transitionary. Eventually, I believe we can reach a point where:

  1. It is never (or virtually never) necessary to declare Fn vs FnMut vs FnOnce explicitly.
  2. We can frequently (though not always) infer the keyword move.
  3. Box<FnOnce()> works, so Invoke and friends are not needed.
  4. The choice between static and virtual dispatch can be changed without affecting users and without requiring wrapper functions.

I expect the improvements in inference before 1.0. Fixing the final two points is harder and so we will have to see where it falls on the schedule, but if it cannot be done for 1.0 then I would expect to see those changes shortly thereafter.


Jared Wein: The Bugs Blocking In-Content Prefs, part 2

Wed, 26/11/2014 - 20:59

At the beginning of November I published a blog post with the list of bugs that are blocking in-content prefs from shipping. Since that post, quite a few bugs have been fixed, and we figured out an approach for fixing most of the high-contrast bugs.

As in the last post, bugs that should be easy to fix for a newcomer are highlighted in yellow.

Here is the new list of bugs that are blocking the release:

The list is now down to 16 bugs (from 20). In the meantime, the following bugs have been fixed:

  • Bug 1022578: Can’t tell what category is selected in about:preferences when using High Contrast mode
  • Bug 1022579: Help buttons in about:preferences have no icon when using High Contrast mode
  • Bug 1012410: Can’t close in-content cookie exceptions dialog
  • Bug 1089812: Implement updated In-content pref secondary dialogs

Big thanks go out to Richard Marti and Tim Nguyen for fixing the above-mentioned bugs, as well as for their continued focus on helping to bring the In-Content Preferences to the Beta and Release channels.


Tagged: firefox, planet-mozilla, ux

Lucas Rocha: Leaving Mozilla

Wed, 26/11/2014 - 17:57

I joined Mozilla 3 years, 4 months, and 6 days ago. Time flies!

I was very lucky to join the company a few months before the Firefox Mobile team decided to rewrite the product from scratch to make it more competitive on Android. And we made it: Firefox for Android is now one of the highest rated mobile browsers on Android!

This has been the best team I’ve ever worked with. The talent, energy, and trust within the Firefox for Android group is simply amazing.

I’ve thoroughly enjoyed my time here, but an exciting opportunity outside Mozilla came up and I decided to take it.

What’s next? That’s a topic for another post ;-)


Will Kahn-Greene: Input: New feedback form

Wed, 26/11/2014 - 17:20

Since the beginning of 2014, I've been laying the groundwork to rewrite the feedback form that we use on Input.

Today, after a lot of work, we pushed out the new form! Just in time for Firefox 34 release.

This blog post covers the circumstances of the rewrite.

Why?

In 2011, James, Mike and I rewrote Input from the ground up. In order to reduce the amount of time it took to do that rewrite, we copied a lot of the existing forms and styles including the feedback forms. At that time, there were two: one for desktop and one for mobile. In order to avoid a translation round, we kept all the original strings of the two forms. The word "Firefox" was hardcoded in the strings, but that was fine since at the time Input only collected feedback for Firefox.

In 2013, in order to reduce complexity on the site because there's only one developer (me), I merged the desktop and mobile forms into one form. In order to avoid a translation round, I continued to keep the original strings. The wording became awkward and the flow through the form wasn't very smooth. Further, the form wasn't responsive at all, so it worked ok on desktop machines, but was mediocre at other viewport sizes.

2014 rolled around and it was clear Input was going to need to branch out into capturing feedback for multiple products---some of which were not Firefox. The form made this difficult.

Related, the smoketest framework I wrote in 2014 struggled to test the form accurately. I spent some time tweaking it, but a simpler form would make smoketesting a lot easier and less flaky.

Thus over the course of 3 years, we had accumulated the following problems:

  1. The flow through the form felt awkward, instructions weren't clear and information about what data would be public and what data would be private wasn't clear.
  2. Strings had "Firefox" hardcoded and wouldn't support multiple products.
  3. The form wasn't responsive and looked/behaved poorly in a variety of situations.
  4. The form never worked in right-to-left languages and possibly had other accessibility issues.
  5. The architecture didn't let us experiment with the form---tweaking the wording, switching to a more granular gradient of sentiment, capturing other data, etc.

Further, we were seeing many instances of people putting contact information in the description field, and there was a significant amount of drop-off.

I had accrued the following theories:

  1. Since the email address is on the third card, users would put their email address in the description field because they didn't know they could leave their contact information later.
  2. Having two cards would reduce the amount of drop-off and unfinished forms compared to three cards.
  3. Having simpler instruction text would reduce the amount of drop-off.

Anyhow, it was due for an overhaul.

So what's changed?

I've been working on the overhaul for most of 2014, but did the bulk of the work in October and November. It has the following changes:

  1. The new form is shorter and clearer text-wise and design-wise.
  2. It consists of two cards: one for capturing sentiment and one for capturing details about that sentiment.
  3. It clearly delineates data that will be public from data that will be kept private.
  4. It works with LTR and RTL languages (If that's not true, please open a bug.)
  5. It fixes some accessibility issues. (If you find any, please open a bug.)
  6. It uses responsive design, mobile first. Thus it was designed for mobile devices and then scaled to desktop-sized viewports.
  7. It's smaller in kb size and requires fewer HTTP requests.
  8. It's got a better architecture for future development.
  9. It doesn't have "Firefox" hardcoded anymore.
  10. It's simpler so the smoketests work reliably now.
The old Input feedback form.

The new Input feedback form.

Note: Showing before and after isn't particularly exciting since this is only the first card of the form in both cases.

Going forward

The old and new forms were instrumented in various ways, so we'll be able to analyze differences between the two. Particularly, we'll be able to see if the new form performs worse.

Further, I'll be checking the data to see if my theories hold true especially the one regarding why people put contact data in the description.

There are a few changes in the queue that we want to make over the course of the next 6 months. Now that the new form has landed, we can start working on those.

Even if there are problems with the new form, we're in a much better position to fix them than we were before. Progress has been made!

Take a moment---try out the form and tell us YOUR feedback

Have you ever submitted feedback? Have you ever told Mozilla what you like and don't like about Firefox?

Take a moment and fill out the feedback form and tell us how you feel about Firefox.

Thanks, etc

I've been doing web development since 1997 or so. I did a lot of frontend work back then, but I haven't done anything serious frontend-wise in the last 5 years. Thus this was a big project for me.

I had a lot of help: Ricky, Mike and Rehan from the SUMO Engineering team were invaluable reviewing code, helping me fix issues and giving me a huge corpus of examples to learn from; Matt, Gregg, Tyler, Ilana, Robert and Cheng from the User Advocacy team who spent a lot of time smoothing out the rough edges of the new form so it captures the data we need; Schalk who wrote the product picker which I later tweaked; Matej who spent time proof-reading the strings to make sure they were consistent and felt good; the QA team which wrote the code that I copied and absorbed into the current Input smoketests; and the people who translated the user interface strings (and found a bunch of issues) making it possible for people to see this form in their language.


Brian R. Bondy: SQL on Khan Academy enabled by SQLite, sqljs, asm.js and Emscripten

Wed, 26/11/2014 - 16:17

Originally the computer programming section at Khan Academy focused only on learning JavaScript using ProcessingJS. This still remains our biggest environment and has lots of plans for further growth, but we recently generalized and abstracted the whole framework to allow for new environments.

The first environment we added was HTML/CSS which was announced here. You can try it out here. We also have a lot of content for learning how to make webpages already created.

SQL on Khan Academy

We recently also experimented with the ability to teach SQL on Khan Academy. This wasn't a near term priority for us, so we used our hack week as an opportunity to bring an SQL environment to Khan Academy.

You can try out the SQL environment here.

Implementation

To implement the environment, one would first think of WebSQL, but there are a couple of major browser vendors (Mozilla and Microsoft) who do not plan to implement it, and the W3C stopped working on the specification at the end of 2010.

Our implementation of SQL is based on SQLite, which is compiled down to asm.js by Emscripten and packaged into sqljs.

All of these technologies I just mentioned, other than SQLite (which is sponsored by Mozilla), are Mozilla-based projects – in particular, largely thanks to Alon Zakai.

The environment

The environment looks like this: the entire code for creating, inserting, updating, and querying a database occurs in a single editor. Behind the scenes, we re-create the entire state of the database and result sets on each code edit. Things run smoothly in the browser, and you don't notice that happening.
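As a rough sketch of what that re-run-everything model can look like with sql.js (not Khan Academy's actual code; the function name is invented, though the SQL.Database/exec/close API matches the sql.js project):

declare const SQL: { Database: new () => any }; // global provided by the sql.js script

function runEditorContents(code: string) {
  const db = new SQL.Database(); // a fresh, purely in-memory SQLite database
  const resultSets = db.exec(code); // run every statement; collect SELECT results
  // resultSets looks like [{ columns: [...], values: [...] }, ...]
  db.close(); // release the Emscripten-allocated memory
  return resultSets;
}

Because the whole database is rebuilt on every edit, the editor never has to track incremental state; and since sql.js can serialize a database, letting you export what you create would be straightforward.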

Unlike many online SQL tutorials, this environment is entirely client side. It has no limitations on what you can do, and if we wanted, we could even let you export the SQL databases you create.

One of the other main highlights is that you can modify the inserts in the editor, and see the results in real time without having to run the code. This can lead to some cool insights on how changing data affects aggregate queries.

Hour of Code

Unlike the HTML/CSS work, we don’t have a huge number of tutorials created, but we do have some videos, coding talk-throughs, challenges and a project set up in a single tutorial, which we’ll be using for one of our Hour of Code offerings: Hour of Databases.


Robert Nyman: Leaving Mozilla

Wed, 26/11/2014 - 14:42

This is a really hard blog post to write, but I need to share this with you: I’m leaving Mozilla.

It started in 2009

At the end of 2008 I had started learning to code extensions for Firefox, and in March 2009 I went to Berlin to give my first international presentation at an add-ons workshop.

It was amazing! The rush of being on stage, teaching people, learning from them; helping, discussing and having a great time! I really loved it and at that time I felt like I had found home, what I was supposed to work with.

The following years I was part of the Mozilla community, speaking at more workshops and attending MozCamps. In 2011, a position came up as a Technical Evangelist and I joined Mozilla full time.

What has happened since

Since I started I’ve gotten to meet numerous fantastic and inspiring people, both employees and people in the great Mozilla community. I’ve traveled extensively and became the Most well-travelled speaker on Lanyrd – now the count is up to 32 countries.

I’ve also written more in detail about Why I travel and about working with developer relations and Why I Do What I Do. There’s also lots more in the Travel category.


I’ve worked on a lot of things at Mozilla over the years, and a couple of the things I’m really proud of are having run the Mozilla Hacks blog over the last two years, publishing 350 quality posts in that time, and having taken the initiative to launch feedback channels around the Firefox Developer Tools and Open Web Apps, where we’ve gotten great feedback from developers.

Moving on

Alas, it’s time to move on. I’ve always preached to developers that they should strive for more, whether that’s a new position in their current company or changing jobs, to ensure they keep evolving and don’t stagnate. And I feel I really have to follow my own advice in this regard.

I’ve gotten to learn and experience a lot of things at Mozilla and for that I’m eternally grateful.

Mozilla is going through a number of challenges at the moment, and to be honest, it’s my belief that upper management needs to acknowledge and address them.

I believe Mozilla represents a great cause, and I hope they can fix and tend to what they’re facing and come out stronger. I believe the Open Web and people need Mozilla, and I wish it, and all the great people I know there, all the best.

What happens next?

I will be starting a new job, and I’ll tell you about it tomorrow, Thursday. For now, I’ll just let this sink in and then I’ll talk more about it.

If you have any questions or thoughts, please let me know here in the comments or e-mail me at robert [at] robertnyman [dot] com.

I’m always here for you. Thanks for reading.

Categorieën: Mozilla-nl planet

Doug Belshaw: Toward The Development of a Web Literacy Map: Exploring, Building, and Connecting Online

wo, 26/11/2014 - 13:44

The title of this post is also the title of a presentation I’m giving at the Literacy Research Association conference next week. The conference has the theme ‘The Dialogic Construction of Literacies’ – so this session is a great fit. It’s been organised by Ian O'Byrne and Greg McVerry, both researchers and Mozilla contributors.

Tiger

I’m cutting short my participation in the Mozilla work week in Portland, Oregon next week to fly out and present at this conference. This is not only because I think it’s important to honour prior commitments, but also because I want to encourage more literacy researchers to get involved in developing the Web Literacy Map.

I’ve drafted the talk in the style in which I’d deliver it. The idea isn’t to read it, but to use this to ensure that my presentation is backed up by slides, rather than vice-versa. I’ll then craft speaker notes to ensure I approximate what’s written here.

Click here to read the text of the draft presentation

I’d very much appreciate your feedback. Here are the specific things I’m looking for answers to:

  • Gaps - what have I missed?
  • Structure - does it ‘flow’?
  • Red flags - is there anything in there liable to cause problems/issues?

I’ve created a thread on the #TeachTheWeb discussion forum for your responses - or you can email me directly: doug@mozillafoundation.org

Thanks in advance! And remember, it doesn’t matter how new you are to the Web Literacy Map or the process of creating it. I’m interested in the views of newbies and veterans alike.

\(@ ̄∇ ̄@)/

Categorieën: Mozilla-nl planet

Andrea Marchesini: Switchy 0.9 released

wo, 26/11/2014 - 11:32

Breaking news: I finally had time to update Switchy to the latest addon-sdk 1.7, and now version 0.9.x is restart-less!

What is Switchy? Switchy is an add-on for Firefox for better managing several profiles. It allows the user to create, rename, delete and open Firefox profiles with just a click.

By using Switchy, you can have multiple profiles open at the same time: an important feature for those who are concerned about security and privacy. For instance, you can have a separate profile for Facebook and other social networks to use while browsing other websites, or a separate profile for Google so you are not always logged in.

Aren’t there similar add-ons already? There are, but Switchy has extra features. You can assign websites to be exclusive to particular profiles. This means that when I try to open, from profile X, a website assigned to another profile, Switchy offers to “switch” to the correct profile with just one click. For example, if I open Facebook from my default profile, Switchy immediately offers me the opportunity to open the profile where I am logged in on Facebook - which is nice!

What is new in version 0.9? It is restart-less, and there is an awesome new UI for the Switchy panel.

I hope you enjoy it!

Categorieën: Mozilla-nl planet

François Marier: Hiding network disconnections using an IRC bouncer

wo, 26/11/2014 - 11:30

A bouncer can be a useful tool if you rely on IRC for team communication and instant messaging. The most common use of such a server is to be permanently connected to IRC and to buffer messages while your client is disconnected.

However, that's not what got me interested in this tool. I'm not looking for another place where messages accumulate and wait to be processed later. I'm much happier if people email me when I'm not around.

Instead, I wanted to do to irssi what mosh did to ssh clients: transparently handle and hide temporary disconnections. Here's how I set everything up.

Server setup

The first step is to install znc:

apt-get install znc

Make sure you get the 1.0 series (in jessie or trusty, not wheezy or precise) since it has much better multi-network support.

Then, as a non-root user, generate a self-signed TLS certificate for it:

openssl req -x509 -sha256 -newkey rsa:2048 -keyout znc.pem -nodes -out znc.crt -days 365

and make sure you use something like irc.example.com as the subject name: it must match the hostname you will be connecting to from your IRC client.

Then install the certificate in the right place:

mkdir ~/.znc
mv znc.pem ~/.znc/
cat znc.crt >> ~/.znc/znc.pem

Once that's done, you're ready to create a config file for znc using the znc --makeconf command, again as the same non-root user:

  • create separate znc users if you have separate nicks on different networks
  • use your nickserv password as the server password for each network
  • enable ssl
  • say no to the chansaver and nickserv plugins

Finally, open the IRC port (tcp port 6697 by default) in your firewall:

iptables -A INPUT -p tcp --dport 6697 -j ACCEPT

Client setup (irssi)

On the client side, the official documentation covers a number of IRC clients, but the irssi page was quite sparse.

Here's what I used for the two networks I connect to (irc.oftc.net and irc.mozilla.org):

servers = (
  {
    address = "irc.example.com";
    chatnet = "OFTC";
    password = "fmarier/oftc:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  },
  {
    address = "irc.example.com";
    chatnet = "Mozilla";
    password = "francois/mozilla:Passw0rd1!";
    port = "6697";
    use_ssl = "yes";
    ssl_verify = "yes";
    ssl_cafile = "~/.irssi/certs/znc.crt";
  }
);

Of course, you'll need to copy your znc.crt file from the server into ~/.irssi/certs/znc.crt.

Make sure that you're no longer authenticating with the nickserv from within irssi. That's znc's job now.

Wrapper scripts

So far, this is a pretty standard znc+irssi setup. What makes it work with my workflow is the wrapper script I wrote: it starts znc before launching irssi, then prompts to turn the bouncer off after irssi exits:

#!/bin/bash
# Start the bouncer on the server unless it's already running.
ssh irc.example.com "pgrep znc || znc"

irssi

# After irssi exits, optionally shut the bouncer down.
read -p "Terminate the bouncer? [y/N] " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]
then
    ssh irc.example.com killall -sSIGINT znc
fi

Now, instead of typing irssi to start my IRC client, I use irc.

If I'm exiting irssi before commuting or because I need to reboot for a kernel update, I keep the bouncer running. At the end of the day, I say yes to killing the bouncer. That way, I don't have a backlog to go through when I wake up the next day.

Categorieën: Mozilla-nl planet

Gervase Markham: Not That Secret, Actually…

wo, 26/11/2014 - 11:25

(Try searching Google Maps for “Secret Location”… there’s one in Norway, one in Toronto, and two in Vancouver!)

Categorieën: Mozilla-nl planet

Karl Dubost: Fix Your Flexbox Web site

wo, 26/11/2014 - 01:51

Web compatibility issues take many forms. Some are really hard to solve, and there are sound business reasons behind them. On the other hand, some Web compatibility issues are really easy to fix, with the benefit of opening the site up to a bigger potential market. CSS Flexbox is one of those. I have written about it in the past. Let's make another practical demonstration of how to fix some of the flexbox issues.

8 Lines of CSS Code

Spoiler alert: This is the final result before and after fixing the CSS.

Screenshots of Hao123 site

How did we do it? Someone had reported that the layout was broken on hao123.com in Firefox OS (Mobile). Two things are happening here. First of all, because Hao123 was not sending the mobile version to Firefox OS, we relied on User Agent overriding. By faking the Firefox Android user agent, we got access to the mobile version. Unfortunately, this version is partly tailored to -webkit- CSS properties.

Inspecting the stylesheets with the developer tools, we can easily discover the culprit.

→ grep -i "display:-webkit" hao123-old.css
display:-webkit-box;
display:-webkit-box;
display:-webkit-box
display:-webkit-box;
display:-webkit-box;
display:-webkit-box
display:-webkit-box;

→ grep -i "flex" hao123-old.css
-webkit-box-flex:1;

So I decided to fix it by adding, as sketched below:

  1. display:flex; for each display:-webkit-box;
  2. flex-grow: 1; for -webkit-box-flex:1;
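
Concretely, the patched CSS ends up looking something like this (the selector name is made up for illustration; the real stylesheet uses its own):

/* Old WebKit-only syntax, as found by the grep above */
.column {
    display: -webkit-box;
    -webkit-box-flex: 1;
}

/* Fixed: keep the prefixed lines for old WebKit browsers, and add the
   standard properties so every other engine gets flexbox too. The
   cascade means each browser uses the last value it understands. */
.column {
    display: -webkit-box;
    display: flex;
    -webkit-box-flex: 1;
    flex-grow: 1;
}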

The amazing thing about this kind of fix is that the site dynamically fixes itself in your viewport as you go. You are literally sculpting the page. And if the company asks why they should bother: for something that takes around 10 minutes to fix, they suddenly get coverage of many more devices… which means more users… which means more market share.

Guides For Fixing Web Compatibility Issues

I started a repository to help people fix their own Web sites, covering the most common Web compatibility issues. Your contribution to the project is more than welcome.

Otsukare.

Categorieën: Mozilla-nl planet

Hannah Kane: How to Mofo

di, 25/11/2014 - 22:47

OpenMatt and I have been talking about the various ways of working at Mofo, and we compiled this list of what we think works best. What do y’all Mofos think?

When starting a new project:

  • Clearly state the problem or goal. Don’t jump ahead to the solution. Ameliorating the problem is what you’ll measure success against, not your ability to implement an arbitrary solution.
  • Explicitly state assumptions. And, whenever possible, test those assumptions before you build anything. You may have assumptions about the nature of the problem you’re trying to solve, who’s experiencing it, or your proposed solution.
  • Have clear success metrics. How will you know if you’re winning? Do you have the instruments you need to measure success?
  • Determine what resources you need. Think about design, development, content, engagement, evaluation, and ongoing maintenance. We’re working on improving the ways we allocate resources throughout the organization, but to start, be clear about what resources your project will need.
  • Produce a project brief. Detail all of the above in a single document. (Example templates here and here.) Use the project brief when you…
  • …Have a project kick-off meeting. Invite *all* the stakeholders to get involved early.

Communication:

  • Have a check-in plan. Will you have daily check-ins? Weekly email updates? How are you checking in and holding each other accountable?
  • Build a workbench and keep it updated.  We recommend a wiki page that will serve as a one-stop shop for anyone needing information about the project. Things to include: links to project briefs and notes, logistics for meetings, a timeline, a list of who’s involved, and, of course, bugs! Examples here, here, and here.
  • Put your notes in one spot. A single canonical pad for notes and agendas. There’s no need to create a new pad every time you have a meeting or a thought! That makes notes very hard to track and find later. Examples here and here.

Doing the do:

  • Plan in two-week heartbeats. This helps us stay on track and makes it clear what the priorities are. Speaking of priorities…
  • Learn the Fine Art of Prioritizing. Hint: Not everything can be P1. The product owner or project manager should rank tasks in order of value added. Remember: prioritization is part of managing workflow. It may be true that all or most of the tasks are required for a successful launch, but that doesn’t help a developer or designer who’s trying to decide what to work on next.
  • Work with your friendly neighborhood Tactical Priorities Syndicate. The name sounds scary, but they’re here to serve you. They meet weekly to get your priorities into each two-week heartbeat process. https://wiki.mozilla.org/Webmaker/TPS

Categorieën: Mozilla-nl planet

Roberto A. Vitillo: Clustering Firefox hangs

di, 25/11/2014 - 19:51

Jim Chen recently implemented a system to collect stacktraces of threads running some code for more than 500ms. A summary of the aggregated data is displayed in a nice dashboard in which the top N aggregated stacks are shown according to different filters.

I have looked at a different way to group the frames that would help us identify the culprits of main-thread hangs, a.k.a. jank. The problem with aggregating stackframes and looking at the top N is that there is a very long tail of stacks that is never considered. Important patterns that we are missing might very well be lurking in that tail.

So I tried different clustering techniques until I settled on the very simple solution of aggregating the traces by their last frame. Why the last frame? When I used k-means to cluster the traces, I noticed that for many of the more interesting clusters the algorithm found, most stacks had the last frame in common, e.g.:

  • Startup::XRE_Main, (chrome script), Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, Timer::Fire, nsRefreshDriver::Tick, PresShell::Flush, PresShell::DoReflow
  • Startup::XRE_Main, EventDispatcher::Dispatch, (content script), PresShell::Flush, PresShell::DoReflow

Aggregating by the last frame yields clusters that are big enough to be considered interesting in terms of number of stacktraces and are likely to explain the most common issues our users experience.
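
The grouping step itself is tiny. For illustration, here is a minimal sketch in JavaScript, assuming each trace is an ordered array of frame names (a hypothetical data shape, not the actual Telemetry format):

function clusterByLastFrame(traces) {
  var clusters = {};
  traces.forEach(function (frames) {
    var key = frames[frames.length - 1]; // e.g. "PresShell::DoReflow"
    (clusters[key] = clusters[key] || []).push(frames);
  });
  return clusters;
}

// Rank clusters by the share of all stacks they account for.
function topClusters(clusters, total) {
  return Object.keys(clusters)
    .map(function (key) {
      return { frame: key, share: clusters[key].length / total };
    })
    .sort(function (a, b) { return b.share - a.share; });
}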

Currently on Aurora, the top 10 meaningful offending main-thread frames are, in order of importance:

  1. PresShell::DoReflow accounts for 5% of all stacks
  2. nsCycleCollector::collectSlice accounts for 4.5% of all stacks
  3. nsJSContext::GarbageCollectNow accounts for 3% of all stacks
  4. IPDL::PPluginInstance::SendPBrowserStreamConstructor accounts for 3% of all stacks
  5. (chrome script) accounts for 3% of all stacks
  6. filterStorage.js (Adblock Plus?) accounts for 2.7% of all stacks
  7. nsStyleSet::FileRules accounts for 2.7% of all stacks
  8. IPDL::PPluginInstance::SendNPP_Destroy accounts for 2% of all stacks
  9. IPDL::PPluginScriptableObject::SendHasProperty accounts for 2% of all stacks
  10. IPDL::PPluginScriptableObject::SendInvoke accounts for 1.7% of all stacks

Even without showing sample stacks for each cluster, there is some useful information here. The elephants in the room are clearly plugins; or should I say Flash? But just how much do “plugins” hurt our responsiveness? In total, plugin-related traces account for about 15% of all hangs. It also seems that the median duration of a plugin hang is no different from a non-plugin one, i.e. between 1 and 2 seconds.

The analysis was run on a week’s worth of data for Aurora (over 20M stackframes), and I got similar results when re-running it on previous weeks, so those numbers seem to be pretty stable.

There is some work in progress to improve the status quo. Aaron Klotz’s formidable async plugin initialization is going to eliminate frame 4, and he might tackle frame 8 in the future. Furthermore, a recent improvement in cycle collection will hopefully reduce the impact of frame 2.


Categorieën: Mozilla-nl planet
