Planet Mozilla - http://planet.mozilla.org/
De Nederlandse Mozilla gemeenschap (The Dutch Mozilla community)

Henrik Skupin: Firefox Automation report – week 25/26 2014

Thu, 17/07/2014 - 15:57

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 25 and 26.

Highlights

June the 11th was actually the last Automation Training day for our team this quarter. You can read about the results here. We will implement some changes for the next quarter, when we will most likely want to host two of them.

Henrik finally got the time to upgrade our Mozmill-CI systems to the latest LTS version of Jenkins. A few changes were necessary, but in general everything went fine this time, and we can see some great improvements. In particular, the long delays when sending out job results seem to be gone.

Henrik also investigated the slow behavior of the mozmill-ci production master when it is under load, e.g. when QA runs on-demand update tests for Firefox releases. The main problem lies with Java, which takes up about 100% of the CPU; because of this, the integrated web server cannot serve pages in a timely manner. Adding a second CPU to this node gave us far better response times.

Given that the new version of Ubuntu came out back in April, we want our Mozmill tests to also run on that platform version. So we got a new VM spun up by IT, which we now have to puppetize and bring online. This may still take a while, given the remaining blockers for using PuppetAgain.

Speaking of Puppet, we got the next big change reviewed and landed: with bug 1021230 we now have our own user account, which can be customized to our needs. That is something we definitely need, given how different our infrastructure is from the RelEng one.

We also made progress on TPS, and the new TPS-CI production machine came online. It cannot yet replace the current CI due to a fair number of open blockers, but hopefully we will be able to flip the switch by the end of July.

Individual Updates

For more granular updates from each individual team member, please visit our weekly team etherpads for week 25 and week 26.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 25 and week 26.


David Rajchenbach Teller: The Battle of Session Restore – Season 1 Episode 3 – All With Measure

Thu, 17/07/2014 - 14:34

Plot

For the second time, our heroes prepared for battle. The startup of Firefox was too slow, and Session Restore was one of the battlefields.

When Firefox starts, Session Restore is in charge of restoring the browser to its previous state, in case of a crash, a restart, or for the users who have configured Firefox to resume from its previous state. This entails numerous activities during startup:

  1. read sessionstore.js from disk, decode it and parse it (recall that the file is potentially several MB large), handling errors (see the sketch below);
  2. back up sessionstore.js in case of a startup crash;
  3. create windows, tabs, frames;
  4. populate history, scroll positions, forms, session cookies, session storage, etc.
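
Firefox implements all of this in JavaScript, but purely as an illustration of step 1, here is a minimal Python sketch (the file name is real; everything else is my own simplification):

import json

def read_session(path="sessionstore.js"):
    # Step 1: read from disk, decode and parse, handling errors.
    # The real implementation does this asynchronously, off the main thread.
    try:
        with open(path, "rb") as f:
            raw = f.read()  # the file is potentially several MB large
        return json.loads(raw.decode("utf-8"))
    except (OSError, UnicodeDecodeError, ValueError):
        return None  # fall back to the backup copy from step 2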

It is common wisdom that Session Restore must have a large impact on Firefox startup. But before we could minimize this impact, we needed to measure it.

Benchmarking is not easy

When we first set foot on Session Restore territory, the contribution of that module to startup duration was uncharted. This was unsurprising, as this aspect of the Firefox performance effort was still quite young. To this day, we have not finished charting startup, or even Session Restore’s startup.

So how do we measure the impact of Session Restore on startup?

A first tool we use is Timeline Events, which let us determine how long it takes to reach a specific point of startup. Session Restore has had events `sessionRestoreInitialized` and `sessionRestored` for years. Unfortunately, these events did not tell us much about Session Restore itself.

The first serious attempt at measuring the impact of Session Restore on startup performance was actually due not to the Performance team but rather to the metrics team. Indeed, data obtained from Firefox Health Report participants indicated that something had gone wrong.

Oops, something is going wrong

Indicator `d2` in the graph measures the duration between `firstPaint` (which is the instant at which we start displaying content in our windows) and `sessionRestored` (which is the instant at which we are satisfied that Session Restore has opened its first tab). While this measure is imperfect, the dip was worrying – indeed, it represented startups that lasted several seconds longer than usual.

Upon further investigation, we concluded that the performance regression was indeed due to Session Restore. While we had not planned to start optimizing the startup component of Session Restore, this battle was forced upon us. We had to recover from that regression and we had to start monitoring startup much better.

A second tool is Telemetry Histograms, for measuring the duration of individual operations, such as reading sessionstore.js or parsing it. We progressively added measures for most of the operations of Session Restore. While these measures are quite helpful, they are unfortunately very unstable in real-world conditions, as they are affected by scheduling (the operations are asynchronous), by the workload of the machine, by the actual contents of sessionstore.js, etc.

The following graph displays the average duration of reading and decoding sessionstore.js among Telemetry participants:

[graph: Telemetry read/decode durations]

Differences in color represent successive versions of Firefox. As we can see, this graph is quite noisy, certainly due to the factors mentioned above (the spikes don’t correspond to any meaningful change in Firefox or Session Restore). Also, we can see a considerable increase in the duration of the read operation. This was quite surprising for us, given that this increase corresponds to the introduction of a much faster, off-the-main-thread reading and decoding primitive. At the time, we were stymied by this change, which did not correspond to our experience. We have now concluded that by changing the asynchronous operation used to read the file, we simply changed the scheduling, which makes the operation appear longer, while in practice it does not block the rest of the startup from taking place on another thread.

One major tool was missing from our arsenal: a stable benchmark, always executed on the same machine, with the same contents of sessionstore.js, that would let us determine more exactly (almost daily, actually) the impact of our patches upon Session Restore:

[graph: Session Restore Talos]

This test, based on our Talos benchmark suite, has proved both to be very stable, and to react quickly to patches that affected its performance. It measures the duration between the instant at which we start initializing Session Restore (a new event `sessionRestoreInit`) and the instant at which we start displaying the results (event `sessionRestored`).
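
In other words, the reported number is simply the difference between two event timestamps. As a toy illustration (the event names come from the test description above; the dictionary format is my own assumption, not the actual Talos plumbing):

def session_restore_ms(events):
    # Duration between the sessionRestoreInit and sessionRestored
    # events, given a mapping of event name to timestamp in ms.
    return events["sessionRestored"] - events["sessionRestoreInit"]

print(session_restore_ms({"sessionRestoreInit": 1500.0,
                          "sessionRestored": 2430.5}))  # 930.5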

With these measures at hand, we are now in a much better position to detect performance regressions (or improvements) in Session Restore startup, and to start actually working on optimizing it – we are now preparing to use this suite to experiment with “what if” situations, to determine which levers would be most useful for such optimization work.

Evolution of startup duration

Our first benchmark measures the time elapsed between start and stop of Session Restore if the user has requested that all windows be reopened automatically.

[graph: restore]

As we can see, performance on Linux 32-bit, Windows XP and Mac OS 10.6 is decreasing, while performance on Linux 64-bit, Windows 7 and 8 and Mac OS 10.8 is improving. Since the algorithm used by Session Restore upon startup is exactly the same for all platforms, and since “modern” platforms are speeding up while “old” platforms are slowing down, this suggests that the performance changes are not due to changes inside Session Restore. The origin of these changes is unclear. I suspect the influence of newer versions of the compilers or of some of the external libraries we use, or perhaps new and improved (for some platforms) gfx.

Still, seeing the modern platforms speed up is good news. As of Firefox 31, any change we make that causes a slowdown of Session Restore will cause an immediate alert so that we can react immediately.

Our second benchmark measures the time elapsed if the user does not wish windows to be reopened automatically. We still need to read and parse sessionstore.js to find whether it is valid, so as to decide whether we can show the “Restore” button on about:home.

[graph: norestore]

We see peaks in Firefox 27 and Firefox 28, as well as a slight decrease in performance on Windows XP and Linux. Again, in the future, we will be able to react better to such regressions.

The influence of factors upon startup

With the help of our benchmarks, we were able to run “what if” scenarios to find out which of the data manipulated by Session Restore contributed to startup duration. We did this in a setting in which we restore windows:

[graph: size-restore]

and in a setting in which we do not:

[graph: size-norestore]

Interestingly, increasing the size of sessionstore.js apparently has no influence on startup duration. Therefore, we do not need to optimize reading and parsing sessionstore.js. Similarly, optimizing history, cookies or form data would not gain us anything.

The single most expensive piece of data is the set of open windows – interestingly, this is the case even when we do not restore windows. More precisely, any optimization should target, in order of priority:

  1. the cost of opening/restoring windows;
  2. the cost of opening/restoring tabs;
  3. the cost of dealing with window data, even when we do not restore windows.

What’s next?

Now that we have information on which parts of Session Restore startup need to be optimized, the next step is to actually optimize them. Stay tuned!



Marco Zehe: Quick tip: Add someone to circles on Google Plus using a screen reader

Thu, 17/07/2014 - 08:39

In my “WAI-ARIA for screen reader users” post in early May, I was asked by Donna to talk a bit about Google Plus. In particular, she asked how to add someone to circles. Google Plus has learned a thing or two about screen reader accessibility recently, but the fact that there is no official documentation on the Google Accessibility entry page yet suggests that people inside Google are either not yet satisfied with the quality of Google Plus accessibility, or not placing a high enough priority on it. That quality, however, has improved, so adding someone to one or more circles using a screen reader is not that difficult any more.

Note that I tested the below steps with Firefox 31 Beta (out July 22) and NVDA 2014.2. Other screen reader/browser combos may vary in the way they output stuff or switch between their virtual cursor and focus/forms modes.

Here are the steps:

  1. Log into Google Plus. If you already have a profile, just go ahead and find someone. If not, create a profile and add people.
  2. The easiest way to find people is to go to the People tab. Note that the tabs currently have no “selected” state yet, but the current one does have the word “active” as part of its link text.
  3. Once you have found someone in the list of suggestions, find the “Add to circles” menu button, and press the Space Bar. Note that it is very important that you press Space here, not Enter!
  4. NVDA now automatically switches to focus mode. What has happened is that a popup menu has opened, containing a list of your current circles and, at the bottom, an item that allows you to create a new circle on the fly. The circles themselves are checkable menu items. Use the up and down arrows to select a circle, for example Friends or Acquaintances, and press the Space Bar to add the person. The number of people in that circle will dynamically increase by one, and the state will change to “checked”. Likewise, if you want to remove a person from a particular circle, press the Space Bar just the same. These all act like regular check boxes, and the menu stays active so you can shuffle that person around your circles as you please.
  5. At the bottom, there is a non-checkable menu item called “Add new circle”. Here, you have to press Enter. If you do this, a panel opens inside the menu, and focus lands on a text field where you can enter the name of a new circle, for example Web Developers. Press Tab to reach the Create Circle button and press Space Bar. The new circle will be added, the person you’re adding to circles will automatically be added to that circle, and you’re back in the menu of circle checkboxes.
  6. Once you’re done, press Escape twice. The first will end NVDA’s focus mode, the second will close the Add to Circles menu. Focus will land back on the button for that person, but the label will change to the name of a single circle, if you added the person to only one circle, or the label “x Circles”, where x is the number of circles you just put that person into.

The above steps also work on the menu button that you find if you open the profile page of an individual person, not just in the list of suggested people or any other list of people. The interaction is exactly the same.

Hope this helps you get around in Google Plus a bit more efficiently!


Kent James: Following Wikipedia, Thunderbird Could Raise $1,600,000 in annual donations

Thu, 17/07/2014 - 08:31

What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying hard to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced their funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.

One internet organization that has done this successfully is Wikipedia. How much income could Thunderbird generate if it received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.

Estimates of income from Wikipedia’s annual fundraising drive are around $20,000,000 per year. Wikipedia recently reported 11,824 million pageviews per month and about 5 pageviews per user per day. That works out to roughly 78 million daily users. Thunderbird, by contrast, has about 6 million daily users (estimated from daily update-check hits), or about 8% of Wikipedia’s daily users.

If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there is a potential to raise $1,600,000 per year. That would certainly be enough income to maintain a serious team to move forward.
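
As a back-of-the-envelope check of that figure (a sketch only; every input is one of the rough estimates quoted above):

# All figures are the rough estimates quoted above.
wikipedia_income = 20000000.0       # USD per year from fundraising
monthly_pageviews = 11824000000.0   # pageviews per month
pageviews_per_user_per_day = 5

daily_users = monthly_pageviews / 30 / pageviews_per_user_per_day
income_per_user = wikipedia_income / daily_users

thunderbird_users = 6000000
print(round(daily_users / 1e6))                    # ~79 million daily users
print(round(income_per_user, 2))                   # ~0.25 USD per user per year
print(round(thunderbird_users * income_per_user))  # ~1,522,000, in the ballpark of $1.6M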

Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox made a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similarly subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia’s campaign has, so we would have to be willing to live with it.

But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.


Nick Cameron: Rust for C++ programmers - part 9: destructuring pt2 - match and borrowing

Thu, 17/07/2014 - 03:19
(Continuing from part 8, destructuring).

When destructuring, there are some surprises in store where borrowing is concerned. Hopefully there is nothing surprising once you understand borrowed references really well, but it is worth discussing (it took me a while to figure out, that's for sure).

Imagine you have some `&Enum` variable `x` (where `Enum` is some enum type). You have two choices: you can match `*x` and list all the variants (`Variant1 => ...`, etc.) or you can match `x` and list reference-to-variant patterns (`&Variant1 => ...`, etc.). (As a matter of style, prefer the first form where possible since there is less syntactic noise.) `x` is a borrowed reference, and there are strict rules for how a borrowed reference can be dereferenced; these interact with match expressions in surprising ways (at least surprising to me), especially when you are modifying an existing enum in a seemingly innocuous way and then the compiler explodes on a match somewhere.

Before we get into the details of the match expression, let's recap Rust's rules for value passing. In C++, when assigning a value into a variable or passing it to a function there are two choices - pass-by-value and pass-by-reference. The former is the default case and means a value is copied either using a copy constructor or a bitwise copy. If you annotate the destination of the parameter pass or assignment with `&`, then the value is passed by reference - only a pointer to the value is copied and when you operate on the new variable, you are also operating on the old value.

Rust has the pass-by-reference option, although in Rust the source as well as the destination must be annotated with `&`. For pass-by-value in Rust, there are two further choices - copy or move. A copy is the same as C++'s semantics (except that there are no copy constructors in Rust). A move copies the value but destroys the old value - Rust's type system ensures you can no longer access the old value. As examples, `int` has copy semantics and `Box<int>` has move semantics:

fn foo() {
    let x = 7i;
    let y = x;                // x is copied
    println!("x is {}", x);   // Ok

    let x = box 7i;
    let y = x;                // x is moved
    //println!("x is {}", x); // error: use of moved value: `x`
}
Rust determines if an object has move or copy semantics by looking for destructors. Destructors probably need a post of their own, but for now, an object in Rust has a destructor if it implements the `Drop` trait. Just like C++, the destructor is executed just before an object is destroyed. If an object has a destructor then it has move semantics. If it does not, then all of its fields are examined and if any of those do then the whole object has move semantics. And so on down the object structure. If no destructors are found anywhere in an object, then it has copy semantics.

Now, it is important that a borrowed object is not moved, otherwise you would have a reference to the old object which is no longer valid. This is equivalent to holding a reference to an object which has been destroyed after going out of scope - it is a kind of dangling pointer. If you have a pointer to an object, there could be other references to it. So if an object has move semantics and you have a pointer to it, it is unsafe to dereference that pointer. (If the object has copy semantics, dereferencing creates a copy and the old object will still exist, so other references will be fine).

OK, back to match expressions. As I said earlier, if you want to match some `x` with type `&T` you can dereference once in the match clause or match the reference in every arm of the match expression. Example:

enum Enum1 {
    Var1,
    Var2,
    Var3
}

fn foo(x: &Enum1) {
    match *x {  // Option 1: deref here.
        Var1 => {}
        Var2 => {}
        Var3 => {}
    }

    match x {
        // Option 2: 'deref' in every arm.
        &Var1 => {}
        &Var2 => {}
        &Var3 => {}
    }
}
In this case you can take either approach because `Enum1` has copy semantics. Let's take a closer look at each approach: in the first approach we first dereference `x` to a temporary variable with type `Enum1` (which copies the value in `x`) and then do a match against the three variants of `Enum1`. This is a 'one level' match because we don't go deep into the value's type. In the second approach there is no dereferencing. We match a value with type `&Enum1` against a reference to each variant. This match goes two levels deep - it matches the type (always a reference) and looks inside the type to match the referred type (which is `Enum1`).

If we are matching a reference with move semantics, then the first approach is not an option. That is because `match *x` would move the enum value out of `*x` (rather than copy it). Any other references to the enum value would then be invalid. Option 2 is allowed, but that is not the end of the story. We have to be careful that any data nested in the enum is also not moved (well, the compiler has to be careful). That is to prevent an object being partially moved whilst someone else has a reference to it - this other referrer assumes the object is wholly immutable. For example,

enum Enum2 {
    // Box has a destructor so Enum2 has move semantics.
    Var1(Box<int>),
    Var2,
    Var3
}

fn foo(x: &Enum2) {
    // *x is no longer allowed.
    match x {
        // We're ignoring nested data, so this is OK
        &Var1(..) => {}
        // No change to the other arms.
        &Var2 => {}
        &Var3 => {}
    }
}
But what if we want to use the data in `Var1`? We can't write:

    match x {
        &Var1(y) => {}
        _ => {}
    }

because that would mean moving part of `x` into `y`. We can use the `ref` keyword to get a reference to the data in `Var1`: `&Var1(ref y) => {}`. That is OK, because now we are not dereferencing anywhere and thus not moving any part of `x`. Instead we are creating a pointer which points into the interior of `x`.

Alternatively, we could destructure the Box (this match goes three levels deep): `&Var1(box y) => {}`. This is OK because `int` has copy semantics and `y` is a copy of the `int` inside the `Box` inside `Var1` (which is 'inside' a borrowed reference). Since `int` has copy semantics, we don't need to move any part of `x`. We could also create a reference to the int rather than copy it: `&Var1(box ref y) => {}`. Again, this is OK, because we don't do any dereferencing and thus don't need to move any part of `x`. If the contents of the Box had move semantics, we could not write `&Var1(box y) => {}`; we would be forced to use the reference version.

If you do end up only being able to get a reference to some data and you need the value itself, you have no option except to copy that data. Usually that means using `clone()`. If the data doesn't implement clone, you're going to have to destructure further to make a manual copy, or implement clone yourself.

Kevin Ngo: More Happiness for Your Buck

Thu, 17/07/2014 - 02:00
Disney is the happiest place on Earth, but also one of the most expensive. Still, it might be well worth the wallet hit.

With increasing assets, I have been thinking lately about what to purchase next: a home purchase, vacation planning, investments. You know, personal finances. And I have been wondering how we spend in order to make ourselves happier. How can we use our money most efficiently to make ourselves happiest?

We have fine choices between 65" 3D plasma TVs, media-integrated BMWs and Audis, and Tudor-style houses on tree-lined avenues. Although we're all aware of the American Dream, and although we might even consciously scoff at it, is it really ingrained in our heads enough to affect our purchases? Despite being aware of materialism, we still spend on items such as Apple product upgrades or matching furniture sets. But compared to what we could be allocating our money towards, are they really worth it? Buck for buck, there are happier things to spend money on and happier ways to spend it.

Experiences Trump Stuff

The happiness attained from a new toy is fleeting. When I buy a gadget, I get really excited about it for a couple of weeks, and then it's just another item on the shelf. Once in freshman year, I dropped $70 on an HD camcorder. "Think about all the cool life experiences I could record!", I thought. After playing around with it for a bit, it got stowed away, just as Woody was when Buzz came to town. It wasn't the actual camcorder that I really wanted; it was thinking about the future experiences I could have.

Thinking back, the best things I have ever spent my money on were experiences. Trips around the world, places like the cultural streets of Beijing, the serenity of Oahu, or the cold isolation of Alaska. They bring back warm (or perhaps cold) memories and instill a rush of nostalgia. They brought about happiness in a way that those $100 beat-up sneakers or that now-stolen iPod touch never did.

It's changed my thoughts on getting a nice house or car. Why spend to be stuck at a mundane home or spend to be stuck in traffic (just in cushier seats)? I'd rather use the money saved from not splurging $400K on a house to see the world. Spend money to be with people, go to places, attend shows, try new things. You won't forget it.

Instant Gratification is a Drag

It's not only what we spend on that makes us happy; it's how we spend. When we spend in a way that yields instant gratification, such as buying that new fridge in-store on credit or getting that candy bar now, it destroys the whole fun of the waiting game. Have you ever eagerly awaited a package for weeks? Thinking about all the future possibilities, all the things you can do, all the fun you will have once that package comes. We are happier when we await something eagerly in anticipation. It's about the journey, not the destination.

Just yesterday, I booked my flight and hotel to Florida to visit my girlfriend working at Disney. It's almost two months out. But every day, I'll be thinking about how much fun we'll have watching the Fantasmic fireworks, how relaxing it will be staying at a 1940s Atlantic-city themed Disney inn, all the delicious food at the Flying Fish. With the date marked on my calendar, it makes me happier every day just eagerly anticipating it.

When you spend on something now and defer the actual consumption or experience until later, you will be much more gratified. Try pre-ordering something you enjoy, planning trips months ahead, or purchasing online. By practicing patience, you'll probably even save a bit of cash.

Make It Scarce

Experiencing something too frequently makes it less of an experience. If you drink a frothy mocha cappuccino every day, you become more and more desensitized to its creamy joys. Making something scarce, by not buying or experiencing it too often, turns it into more of a treat. So if you're eating out for lunch every day at nice restaurants, you might want to think about only eating out once a week. Or only get expensive coffees on Fridays. It'll make the times you do go out that much more satisfying, and your wallet will thank you.

Time Trumps Money

Don't dwell too much on wasting your time to pinch some money. So Starbucks is giving out free 12oz coffees today? Free sounds enticing but is it really worth the gas, time in dreadful traffic, and waiting in line? View time as happiness. If you have more time, you can do more of the things you want to do. If you just feel like you have a lot of time, you feel much more free.

With that in mind, you should consider how purchases will affect your future time. Ask "will this really make me happier next week?". If you are contemplating a new TV, you might think it'll make you happier. Have so many friends over to play FIFA on the so-much-HD. But television doesn't make you happier or any less stressed; it's a numbing time-sink. Or think about when you are debating between two similar products, such as a Nexus 5 or an HTC One. Sure, when placed side by side, those extra megapixels and megahertz might seem like a huge advantage. But think about the product in isolation and see if it will really benefit your future time.

Give it Away

Warren Buffett pledged to give away 99% of his wealth, whether in his lifetime or posthumously. Giving away, paying it forward, being charitable makes people happy. It makes them even happier than if they had splurged on themselves.

Helping others in need makes it feel like you have a lot of extra free time to give away. And feeling like you have a lot of free time takes a boulder off your back. So invest in others and invest in relationships. We're inherently social creatures, though sometimes selfish ones, and that works against us. Donate to a charity where you know exactly where your money is going, or buy something nice for a family member or friend, without pressure. It's money happily spent.


Mark Surman: How do we get depth *and* scale?

Wed, 16/07/2014 - 22:20

We want millions of people learning about the web every day with Mozilla. The ‘why’ is simple: web literacy is quickly becoming just as important as reading, writing and math. By 2024, there will be more than 5 billion people on the web. And, by then, the web will shape our everyday lives even more than it does today. Understanding how it works, how to build it and how to make it your own will be essential for nearly everyone.

Maker Party Uganda

The tougher question is ‘how’ — how do we teach the web with both the depth *and* scale that’s needed? Most people who tackle a big learning challenge pick one path or the other. For example, the educators in our Hive Learning Networks are focused on depth of learning. Everything they do is high-touch, hands-on and focused on innovating so learning happens in a deep way. On the flip side, MOOCs have quickly shown what scale looks like, but they almost universally have high drop-out rates and limited learning impact for all but the most motivated learners. We rarely see depth and scale go together. Yet, as the web grows, we need both. Urgently.

I’m actually quite hopeful. I’m hopeful because the Mozilla community is deeply focused on tackling this challenge head on, with people rolling up their sleeves to help people learn by making and organizing themselves in new ways that could massively grow the number of people teaching the web. We’re seeing the seeds of both depth and scale emerge.

This snapped into focus for me at MozFest East Africa in Kampala a few days ago. Borrowing from the MozFest London model, the event showcased a variety of open tech efforts by Mozilla and others: FirefoxOS app development; open data tools from a local org called Mountabatten; Mozilla localization; Firefox Desktop engineering; the work of the Ugandan National Information Technology Agency. It also included a huge Maker Party, with 200 young Ugandans showing up to learn and hack with Webmaker tools.

Maker Party Uganda

The Maker Party itself was impressive — pulled off well despite rain and limited connectivity. But what was more impressive was seeing how the Mozilla community is stepping up to plant the seeds of teaching the web at depth and scale, which I’d call out as:

Mentors: IMHO, a key to depth is humans connecting face to face to learn. We’ve set up a Webmaker Mentors program in the last year to encourage this kind of learning. The question has been: will people step up to do this kind of teaching and mentoring, and do it well? MozFest EA was a promising start: 30 motivated mentors showed up prepared, enthusiastic and ready to help the 200 young people at the event learn the web.

Curriculum: one of the hard parts of scaling a volunteer-based mentor program is getting people to focus their teaching on the most important web literacy skills. We released a new collection of open source web literacy curriculum over the past couple of months designed to solve this problem. We weren’t sure how things would work out, but I’d say MozFestEA is early evidence that curriculum can do a good job of helping people quickly understand what and how to teach. Here, each of the mentors was confidently and articulately teaching a piece of the web literacy framework using Webmaker tools.

Making as learning: another challenge is getting people to teach and learn deeply based on written curriculum. Mozilla focuses on ‘making as learning’ as a way past this — putting hands-on, project-based learning at the heart of most of our Webmaker teaching kits. For example, the basic remix teaching kit gets learners quickly hacking and personalizing their favourite big-brand web site, which almost always gets people excited and curious. More importantly: this ‘making as learning’ approach lets mentors adapt the experience to a learner’s interests and local context in real time. It was exciting to see the Ugandan mentors having students work on web pages focused on local school tasks and local music stars, which worked well in making the standard teaching kits come to life.

Clubs: mentors + curriculum + making can likely get us to our 2014 goal of 10,000 people around the world teaching web literacy with Mozilla. But the bigger question is: how do we keep the depth while scaling to a much bigger level? One answer is to create more ’nodes’ in the Webmaker network and get them teaching all year round. At MozFest EA, there was a session on Webmaker Clubs — after-school web literacy clubs run by students and teachers. This is an idea that floated up from the Mozilla community in Uganda and Canada. In Uganda, the clubs are starting to form. For me, this is exciting. Right now we have 30 contributors working on Webmaker in Uganda. If we opened up clubs in schools, we could imagine hundreds or even thousands. I think clubs like these are a key next step towards scale.

Community leadership: the thing that most impressed me at MozFestEA was the leadership from the community. San Emmanuel James and Lawrence Kisuuki have grown the Mozilla community in Uganda in a major way over the last couple of years. More importantly, they have invested in building more community leaders. As one example, they organized a Webmaker train-the-trainer event a few weeks before MozFestEA. The result was what I described above: confident mentors showing up ready to teach, including people other than San and Lawrence taking leadership within the Maker Party side of the event. I was impressed. This is key to both depth and scale: building more and better Mozilla community leaders around the world.

Of course, MozFestEA was just one event for one weekend. But, as I said, it gave me hope: it made me feel that the Mozilla community is taking the core building blocks of Webmaker and shaping them into something that could have a big impact.


With Maker Party kicking off this week, I suspect we’ll see more of this in the coming months. We’ll see more people rolling up their sleeves to help people learn by making, and more people organizing themselves in new ways that could massively grow the number of people teaching the web. If we can make this happen this summer, much bigger things lie on the path ahead.


Filed under: education, mozilla, webmakers

Gregory Szorc: Updates to firefoxtree Mercurial extension

Wed, 16/07/2014 - 21:55

My Please Stop Using MQ post has been generating a lot of interest in bookmark-based workflows at Mozilla. To make adoption easier, I quickly authored an extension to add remote refs of the Firefox repositories to Mercurial.

There was still a bit of confusion, and enough gripes about workflows, that I thought it would be best to update the extension to make things more pleasant.

Automatic tree names

People wanted the ability to easily pull/aggregate the various Firefox trees without adding configuration to an hgrc file.

With firefoxtree, you can now hg pull central or hg pull inbound or hg pull aurora and it just works.

Pushing with aliases doesn't yet work. It is slightly harder to do in the Mercurial API. I have a solution, but I'm validating some code paths to ensure it is safe. This feature will likely appear soon.

The fxheads command

Once people adopted unified repositories with heads from multiple repositories, they asked how they could quickly identify the heads of the pulled Firefox repositories.

firefoxtree now provides a hg fxheads command that prints a concise output of the commits constituting the heads of the Firefox repos. e.g.

$ hg fxheads
224969:0ec0b9ac39f0 aurora (sort of) bug 898554 - raise expected hazard count for b2g to 4 until they are fixed, a=bustage+hazbuild-only
224290:6befadcaa685 beta Tagging /src/mdauto/build/mozilla-beta 1772e55568e4 with FIREFOX_RELEASE_31_BASE a=release CLOSED TREE
224848:8e8f3ba64655 central Merge inbound to m-c a=merge
225035:ec7f2245280c fx-team fx-team/default Merge m-c to fx-team
224877:63c52b7ddc28 inbound Bug 1039197 - Always build js engine with zlib. r=luke
225044:1560f67f4f93 release release/default tip Automated checkin: version bump for firefox 31.0 release. DONTBUILD CLOSED TREE a=release

Please note that the output is based upon local-only knowledge.

Reject pushing multiple heads

People were complaining that bookmark-based workflows resulted in Mercurial trying to push multiple heads to a remote. This complaint stems from the fact that Mercurial's default push behavior is to find all commits missing from the remote and push them. This behavior is extremely frustrating for Firefox development because the Firefox repos only have a single head and pushing multiple heads will only result in a server hook rejecting the push (after wasting a lot of time transferring that commit data).

firefoxtree now will refuse to push multiple heads to a known Firefox repo before any commit data is sent. In other words, we fail fast so your time is saved.

firefoxtree also changes the default behavior of hg push when pushing to a Firefox repo. If no -r argument is specified, hg push to a Firefox repo will automatically remap to hg push -r .. In other words, we attempt to push the working copy's commit by default. This change establishes sensible default and likely working behavior when typing just hg push.

Installing firefoxtree

Within the next 48 hours, mach mercurial-setup should prompt to install firefoxtree. Until then, clone https://hg.mozilla.org/hgcustom/version-control-tools and ensure your ~/hgrc file has the following:

[extensions]
firefoxtree = /path/to/version-control-tools/hgext/firefoxtree

You likely already have a copy of version-control-tools in ~/.mozbuild/version-control-tools.

It is completely safe to install firefoxtree globally: the extension will only modify behavior of repositories that are clones of Firefox repositories.


Pete Moore: Weekly review 2014-07-16

Wed, 16/07/2014 - 16:37

Highlights

Last week was build duty, therefore there is much less to report this week. I think we’ll have plenty to talk about though (wink wink).

The l10n vcs sync review was done by aki, and I posted my responses; I am writing up a patch which I hope to land in the next 24 hours. That will be l10n done.

I’ve been busy triaging queues too, inviting people to meetings that I don’t attend myself, and cleaning up a lot of bugs (not just the triaging, but in general).

Today’s major incident was fallout from the panda train 3 move - finally resolved now (yay). Basically, devices.json was out of date on the foopies. Disappointingly, I did think to check devices.json, but did not consider that it would be out of date on the foopies, as I knew we’d been having lots of reconfigs every day. But for other reasons the foopy updates were not performed (hanging ssh sessions when updating them) - so it took a while until this was discovered (by dustin!). In the meantime I had to disable and re-enable > 250 pandas.

Other than that, working ferociously on finishing off vcs sync.

I think I probably updated 200 bugs this week! It was quite a clean-up.


Frédéric Harper: Community Evangelist: Firefox OS developer outreach at MozCamp India

Wed, 16/07/2014 - 16:10

Copyright Ratnadeep Debnath
http://j.mp/1jIYxWb (click to enlarge)

At the end of June, I was in India to give a train-the-trainer session at MozCamp India. The purpose of the session that Janet Swisher and I delivered (the first time we worked together, and I think we made a winning combo) was to help Mozillians become Community Evangelists. Our goal was to help them become part of our Technical Evangelist team: helping us inspire and enable developers in India to be successful with Firefox OS (we are starting with this technology because of the upcoming launch).

We could have done a full day or more about developer outreach, but we only had three hours, in which we showed the attendees how they can contribute, ran a fun speaker idol, and worked on their project plans. Contribution can happen at many levels: public speaking, helping developers build Firefox OS applications, answering questions on StackOverflow, and more.

Since we had parallel tracks during our session, we gave it twice to give attendees the chance to attend more than one track. For those who were there for the Saturday session, the following slides are the ones we used:

Developer Outreach for Firefox OS – Mozcamp India – 2014-06-21 from Frédéric Harper

I also recorded the session for those of you that would like to refresh your memory:

For the session on Sunday, we fixed some slides and adapted our session to give us more time for the speaker idol as well as the project plan. Here are the slides:

Developer Outreach for Firefox OS – Mozcamp India – 2014-06-22 from Frédéric Harper

If you were not there, I would suggest you follow the slides and the video of the second day, as it’s an improved version of the first one (not that the first one was bad, but it was the first time we gave this session).

From the feedback we got, it was a pretty good session, and we were happy to see the excitement of the Indian community about this community evangelist role. I can’t wait to see what the Mozilla community in India will do! If you too, Mozillian or not, have any interest in evangelizing the open web, you should join the Mozilla Evangelism mailing list.


--
Community Evangelist: Firefox OS developer outreach at MozCamp India is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. Firefox OS love in Toronto Yesterday, I was in Toronto to share some Firefox OS...
  2. Working your magic with Firefox OS – Playing mp4 Everything you are looking for, about Firefox OS development, is...
  3. One month as a Firefox OS Technical Evangelist Time flies; I thought I started at Mozilla last week,...

Luis Villa: Designers and Creative Commons: Learning Through Wikipedia Redesigns

Wed, 16/07/2014 - 08:31

tl;dr: Wikipedia redesigns mostly ignore attribution of Wikipedia authors, and none approach the problem creatively. This probably says as much or more about Creative Commons as it does about the designers.

disclaimer-y thing: so far, this is for fun, not work; haven’t discussed it at the office and have no particular plans to. Yes, I have a weird idea of fun.

A mild refresh from interfacesketch.com.

It is no longer surprising when a new day brings a new redesign of Wikipedia. After seeing one this weekend with no licensing information, I started going back through seventeen of them (most of the ones listed on-wiki) to see how (if at all) they dealt with licensing, attribution, and history. Here’s a summary of what I found.

Completely missing

Perhaps not surprisingly, many designers completely remove attribution (i.e., history) and licensing information in their designs. Seven of the seventeen redesigns I surveyed were in this camp. Some of them were in response to a particular, non-licensing-related challenge, so it may not be fair to lump them into this camp, but good designers still deal with real design constraints, and licensing is one of them.

History survives – sometimes

The history link is important, because it is how we honor the people who wrote the article, and comply with our attribution obligations. Five of the seventeen redesigns lacked any licensing information, but at least kept a history link.

Several of this group included some legal information, such as links to the privacy policy, or in one case, to the Wikimedia Foundation trademark page. This suggests that our current licensing information may be presented in a worse way than some of our other legal information, since it seems to be getting cut out even by designers who are tolerant of some of our other legalese?

Same old, same old

Four of the seventeen designs keep the same old legalese, though one fails to comply by making it impossible to get to the attribution (history) page. Nothing wrong with keeping the existing language, but it could reflect a sad conclusion that licensing information isn’t worth the attention of designers; or (more generously) that they don’t understand the meaning/utility of the language, so it just gets cargo-culted around. (Credit to Hamza Erdoglu, who was the only mockup designer who specifically went out of his way to show the page footer in one of his mockups.)

A winner, sort of!

Of the seventeen sites I looked at, exactly one did something different: Wikiwand. It is pretty minimal, but it is something. The one thing: as part of the redesign, it adds a big header/splash image to the page, and then adds a new credit specifically for the author of the header/splash image down at the bottom of the page with the standard licensing information. Arguably it isn’t that creative, just complying with their obligations from adding a new image, but it’s at least a sign that not everyone is asleep at the wheel.

Observations

This is surely not a large or representative sample, so all my observations from this exercise should be taken with a grain of salt. (They’re also speculative since I haven’t talked to the designers.) That said, some thoughts besides the ones above:

  • Virtually all of the designers who wrote about why they did the redesign mentioned our public-edit nature as one of their motivators. Given that, I expected history to be more frequently/consistently addressed. It is not clear whether this should be chalked up to designers not caring about attribution, or to the attribution role of history being very unclear to anyone who isn’t an expert. I suspect the latter.
  • It was evident that some of these designers had spent a great deal of time thinking about the site, and yet were unaware of licensing/attribution. This suggests that people who spend less time with the site (i.e., 99.9% of readers) are going to be even more ignorant.
  • None of the designers felt attribution and licensing was even important enough to experiment on or mention in their writeups. As I said above, this is understandable but sort of sad, and I wonder how to change it.

Postscript, added next morning:

I think it’s important to stress that I didn’t link to the individual sites here, because I don’t want to call out particular designers or focus on their failures/oversights. The important (and as I said, sad) thing to me is that designers are, historically, a culture concerned with licensing and attribution. If we can’t interest them in applying their design talents to our problem, in the context of the world’s most famously collaborative project, we (lawyers and other Commoners) need to look hard at what we’re doing, and how we can educate and engage designers to be on our side.

I should also add that the WMF design team has been a real pleasure to work with on this problem, and I look forward to doing more of it. Some stuff still hasn’t made it off the drawing board, but they’re engaged and interested in this challenge. Here is one example.


Byron Jones: happy bmo push day!

Wed, 16/07/2014 - 08:30

switching the default monospace font on bmo yesterday highlighted a few issues with the fira-mono typeface that we’d like to see addressed before we use it.  as a result comments are now displayed using their old font.

the following changes have been pushed to bugzilla.mozilla.org:

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla

Jess Klein: The first 6 weeks of Hive Labs

Wed, 16/07/2014 - 00:05
Six weeks ago, Atul Varma, Chris Lawrence, Kat Baybrooke and I embarked on an experiment we call Hive Labs. Let me tell you about it: here is a little slideshow I made about our first 6 weeks, to the tune of Josh Gad singing In Summer from the movie Frozen.





So, in summary (or if you aren't the musical slideshare type): the first 6 weeks have been great. We did a bunch of listening and research, including attending events and hackjams run by and for Hive members. Here's a neat worksheet from a Mouse-run Webmaker training in New York.

We did some research and design on tools and resources to support prototyping:




Sherpa is the codename for a tool that helps prototypers define a design opportunity and openly work through the process of designing a solution. We designed some mockups to see if this is a direction we should pursue. Sherpa could be a back-end for the "Cupcake dashboard" or a stand-alone tool. We spun up an instance of the "Cupcakes" dashboard designed by the Firefox UX team to help figure out if it is a useful tool for surfacing prototypes.

We also prototyped a snippet for Firefox to promote Maker Party, worked on an idea for self guided Webmaking and began work on a Net Neutrality Teaching Kit.

Finally, we've shipped some things:
The No-Fi, Lo-Fi Teaching Kit and the Mobile Design Teaching Kit
The No-Fi, Lo-Fi Teaching Kit asks the question: how can we empower educators to teach the web in settings where connectivity isn't guaranteed?
With the Mobile Design Teaching Kit, participants play with, break apart and modify mobile apps in order to understand how they work as systems. This teaching kit is designed to explore a few activities that can be mixed and mashed into workshops for teens or adults who want to design mobile apps. Participants will tinker with paper prototyping, design mindmaps and program apps while learning basic design and webmaking concepts.
A local and a global Hive Learning Network directory
... and a section on Webmaker.org to help guide mentors through making Teaching Kits and Activities:



The first 6 weeks have been great, and we are going to continue to listen, create and deliver based on needs from the community. We have lots more to build. We want to do this incrementally, partly to release sooner, and partly to build momentum through repeated releases.


Armen Zambrano Gasparnian: Developing with GitHub and remote branches

Tue, 15/07/2014 - 23:04
I have recently started using Git and GitHub, contributing to the Firefox OS certification suite.

It has been interesting switching from Mercurial to Git. I honestly believed it would be more straightforward, but I have had to re-read things again and again until the new ways sank in.

jgraham shared some notes with me (thanks!) about what his workflow looks like, and I want to document it for my own sake and perhaps yours:
git clone git@github.com:mozilla-b2g/fxos-certsuite.git

# Time passes

# To develop something on master
# Pull in all the new commits from master

git fetch origin

# Create a new branch (this will track master from origin,
# which we don't really want, but that will be fixed later)

git checkout -b my_new_thing origin/master

# Edit some stuff

# Stage it and then commit the work

git add -p
git commit -m "New awesomeness"

# Push the work to a remote branch
git push --set-upstream origin HEAD:jgraham/my_new_thing

# Go to the GH UI and start a pull request

# Fix some review issues
git add -p
git commit -m "Fix review issues" # or use --fixup

# Push the new commits
git push

# Finally, the review is accepted
# We could rebase at this point, however,
# we tend to use the Merge button in the GH UI
# Working off a different branch is basically the same,
# but you replace "master" with the name of the branch you are working off.

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Alon Zakai: Massive, a new work-in-progress asm.js benchmark - feedback is welcome!

Tue, 15/07/2014 - 19:26
Massive is a new benchmark for asm.js. While many JavaScript benchmarks already exist, asm.js - a strict subset of JavaScript, designed to be easy to optimize - poses some new challenges. In particular, asm.js is typically generated by compiling from another language, like C++, and people are using that approach to run large asm.js codebases, by porting existing large C++ codebases (for example, game engines like Unity and Unreal).

Very large codebases can be challenging to optimize for several reasons: often they contain very large functions, for example, which stress register allocation and other compiler optimizations. Total code size can also cause pauses while the browser parses and prepares to execute a very large script. Existing JavaScript benchmarks typically focus on small programs and tend to focus on throughput, ignoring things like how responsive the browser is (which matters a lot for the user experience). Massive does focus on those things, by running several large real-world codebases compiled to asm.js and testing them on throughput, responsiveness, preparation time and variance. For more details, see the FAQ at the bottom of the benchmark page.
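
To make one of those distinctions concrete, here is a small illustrative sketch (my own, not Massive's actual code) of why reporting variance alongside an average matters: two engines with the same mean frame time can feel very different to the user.

import statistics

# Two hypothetical sets of frame times in ms, with identical means:
smooth = [16, 17, 16, 17, 16, 17]
janky = [5, 5, 5, 5, 5, 74]  # one long pause, i.e. poor responsiveness

for name, frames in (("smooth", smooth), ("janky", janky)):
    print(name,
          round(statistics.mean(frames), 1),    # same average: 16.5
          round(statistics.pstdev(frames), 1))  # 0.5 vs ~25.7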

Massive is not finished yet; it is a work in progress - the results should not be taken seriously yet (bugs might cause some things to not be measured accurately, etc.). Massive is being developed as an open source project, so please test it and report your feedback. Any issues you find or suggestions for improvements are very welcome!


Gregory Szorc: Python Packaging Do's and Don'ts

Tue, 15/07/2014 - 19:20

Are you someone who casually interacts with Python but don't know the inner workings of Python? Then this post is for you. Read on to learn why some things are the way they are and how to avoid making some common mistakes.

Always use Virtualenvs

It is an easy trap to view virtualenvs as an obstacle, a distraction from accomplishing something. People see me adding virtualenvs to build instructions and they say: I don't use virtualenvs, they aren't necessary, why are you doing that?

A virtualenv is effectively an overlay on top of your system Python install. Creating a virtualenv can be thought of as copying your system Python environment into a local location. When you modify virtualenvs, you are modifying an isolated container. Modifying virtualenvs has no impact on your system Python.

A goal of a virtualenv is to isolate your system/global Python install from unwanted changes. When you accidentally make a change to a virtualenv, you can just delete the virtualenv and start over from scratch. When you accidentally make a change to your system Python, it can be much, much harder to recover from that.

Another goal of virtualenvs is to allow different versions of packages to exist. Say you are working on two different projects and each requires a specific version of Django. With virtualenvs, you install one version in one virtualenv and a different version in another virtualenv. Things happily coexist because the virtualenvs are independent. Contrast with trying to manage both versions of Django in your system Python installation. Trust me, it's not fun.

Casual Python users may not encounter scenarios where virtualenvs make their lives better... until they do, at which point they realize their system Python install is beyond saving. People who eat, breathe, and die Python run into these scenarios all the time. We've learned how bad life without virtualenvs can be, and so we use them everywhere.

Use of virtualenvs is a best practice. Not using virtualenvs will result in something unexpected happening. It's only a matter of time.

Please use virtualenvs.

Never use sudo

Do you use sudo to install a Python package? You are doing it wrong.

If you need to use sudo to install a Python package, that almost certainly means you are installing a Python package to your system/global Python install. And this means you are modifying your system Python instead of isolating it and keeping it pristine.

Instead of using sudo to install packages, create a virtualenv and install things into the virtualenv. There should never be permissions issues with virtualenvs - the user who creates a virtualenv has free rein over it.

Never modify the system Python environment

On some systems, such as OS X with Homebrew, you don't need sudo to install Python packages because the user has write access to the Python directory (/usr/local in Homebrew).

For the reasons given above, don't muck around with the system Python environment. Instead, use a virtualenv.

Beware of the package manager

Your system's package manager (apt, yum, etc) is likely using root and/or installing Python packages into the system Python.

For the reasons given above, this is bad. Try to use a virtualenv, if possible. Try to not use the system package manager for installing Python packages.

Use pip for installing packages

Python packaging has historically been a mess. There are a handful of tools and APIs for installing Python packages. As a casual Python user, you only need to know of one of them: pip.

If someone says to install a package, you should be thinking: create a virtualenv, activate the virtualenv, pip install <package>. You should never run pip install outside of a virtualenv. (The exception is installing virtualenv and pip themselves, which you almost certainly want in your system/global Python.)
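In shell terms, that mantra looks something like this (a minimal sketch; requests is just a stand-in for whatever package you need):

    virtualenv myproject            # create the isolated environment
    source myproject/bin/activate   # activate it (myproject\Scripts\activate on Windows)
    pip install requests            # installs into the virtualenv, not the system
    deactivate                      # leave the environment when done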

By default, running pip install will install packages from PyPI, the Python Package Index - Python's official package repository.

There are a lot of old and outdated tutorials online about Python packaging. Beware of bad content. For example, if you see documentation that says use easy_install, you should be thinking, easy_install is a legacy package installer that has largely been replaced by pip, I should use pip instead. When in doubt, consult the Python packaging user guide and do what it recommends.

Don't trust the Python in your package manager

The more Python programming you do, the more you learn to not trust the Python package provided by your system / package manager.

Linux distributions such as Ubuntu that sit on the forward edge of Python versions are better than others. But I've run into enough problems with the OS- or package-manager-maintained Python (especially on OS X) that I've learned to distrust them.

I use pyenv for installing and managing Python distributions from source. pyenv also installs virtualenv and pip for me, packages that I believe should be in all Python installs by default. As a more experienced Python programmer, I find pyenv just works.

If you are just a beginner with Python, it is probably safe to ignore this section. Just know that as soon as something weird happens, start suspecting your default Python install, especially if you are on OS X. If you suspect trouble, use something like pyenv to enforce a buffer so the system can have its Python and you can have yours.
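If you do reach that point, a pyenv session looks roughly like this (a sketch, assuming pyenv is already installed and hooked into your shell; the version number is illustrative):

    pyenv install 2.7.8    # build this Python from source into ~/.pyenv
    pyenv global 2.7.8     # make it the default python for your user
    pyenv versions         # list the interpreters pyenv manages
    python --version       # now resolves to the pyenv-managed install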

Recovering from the past

Now that you know the preferred way to interact with Python, you are probably thinking oh crap, I've been wrong all these years - how do I fix it?

The goal is to get a Python install somewhere that is as pristine as possible. You have two approaches here: cleaning your existing Python or creating a new Python install.

To clean your existing Python, you'll want to purge it of pretty much all packages not installed by the core Python distribution. The exceptions are virtualenv, pip, and setuptools - you almost certainly want those installed globally. On Homebrew, you can uninstall everything related to Python and blow away your Python directory, typically /usr/local/lib/python*. Then, brew install python. On Linux distros, this is a bit harder, especially since most Linux distros rely on Python for OS features and thus may have installed extra packages. You could try a similar approach on Linux, but I don't think it's worth it.
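On Homebrew, that cleanup amounts to something like the following (a destructive sketch - the paths are typical, not guaranteed, so double-check them on your machine first):

    brew uninstall python            # remove the Homebrew-managed Python
    rm -rf /usr/local/lib/python*    # blow away leftover site-packages
    brew install python              # reinstall a clean copy
    # Homebrew's python bundles pip; if yours doesn't, install pip first.
    pip install virtualenv           # restore the few global packages you want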

Cleaning your system Python and attempting to keep it pure are ongoing tasks that are very difficult to keep up with. All it takes is one dependency to get pulled in that trashes your system Python. Therefore, I shy away from this approach.

Instead, I install and run Python from my user directory. I use pyenv. I've also heard great things about Miniconda. With either solution, you get a Python in your home directory that starts clean and pure. Even better, it is completely independent from your system Python. So if your package manager does something funky, there is a buffer. And, if things go wrong with your userland Python install, you can always nuke it without fear of breaking something in system land. This seems to be the best of both worlds.

Please note that installing packages in the system Python shouldn't, in itself, be harmful. When you create virtualenvs, you can - and should - tell virtualenv not to use the system site-packages (i.e. not to pull non-core packages in from the system installation). This is the default behavior in virtualenv, and it should provide an adequate buffer. But from my experience, things still manage to bleed through. My userland Python install is extra safety. If something goes wrong, I can only blame myself.
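For the avoidance of doubt, that isolation can be spelled out explicitly (a sketch; the flag has long been the default in virtualenv, so it is shown here only for clarity):

    # Refuse the system site-packages; this is already the default
    # in modern virtualenv, so the flag is belt-and-braces.
    virtualenv --no-site-packages myenv

    # If you ever want system packages visible inside an env, opt in:
    virtualenv --system-site-packages myenv-shared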

Conclusion

Python's long and complicated history of package management makes it very easy for you to shoot yourself in the foot. The long list of outdated tutorials on the Internet makes this a near certainty for casual Python users. By following the guidelines in this post, you can adhere to best practices that will cut down on surprises and rage and keep your Python running smoothly.

Categorieën: Mozilla-nl planet

Darrin Henein: Side Tabs: Prototyping An Unexpected Productivity Hack

di, 15/07/2014 - 18:24

A few months ago, I came across an interesting Github repo authored by my (highly esteemed!) colleague Vlad Vukicevic called VerticalTabs. This is a Firefox add-on which moves your tabs, normally organized horizontally along the top of your browser, to a vertical list docked to either the left or right side of the window. Vlad’s add-on worked great, but I saw a few areas where a small amount of UX and visual design love could make this something I’d be willing to trial. So, I forked.

 

After cloning the repo, I spent a couple of days modifying some of the layout, adding a new dark theme to the CSS, and replacing a handful of the images and icons with my own. Ultimately, it probably took a single-digit number of hours to get my code to a place I was happy with. Certainly, there are some issues on certain operating systems, and things like Firefox’s pinned tabs don’t get the treatment I would love them to have, but that was not the point. The point of my experiment was to learn.

Learn? Learn what?

Let’s step back for a moment. Here at Mozilla, we like to experiment. Hack, Play, Make… whatever you’d like to call it. But we don’t like to waste time: we do things with purpose. If we build something, we try to make sure it’s worth the time and effort involved. As a Design Engineer on the UX team, I (along with others) work hard to make the value of prototyping clear to my colleagues. What is the minimal thing we can make to test our assumptions? The reality is that when designing digital products, how something works is equally (arguably more) important than how it looks. Steve Jobs said it best:

Design is how it works.

 

Let’s bring it back to Side Tabs now (I’ll be using Side Tabs and VerticalTabs interchangeably). The hypothesis I was hoping to validate was that there was a subset of the Firefox user base that would find value in the layout that Side Tabs enabled. I wanted to bring this add-on to a level where users would find it delightful and usable enough to at least give it a fair shot.

It’s critically important that before you unleash your experiment and start learning from it, you mitigate (as much as possible) any sources of bias or false negatives. Make (or fake) it to a point where the data you collect is not influenced by poor design, conflated features, or poor performance/usability. It turned out that this delta, from Vlad’s work to my own version, was a small amount of work. I went for it, pushed it a few steps in the right direction, and shared it with as many people as I could.

I want to restate: the majority of the credit for my particular version of VerticalTabs goes to those who published the work on top of which I built, namely Vlad and Philipp von Weitershausen. Furthermore, the incredibly talented Stephen Horlander has explored the idea of Side Tabs as well, and you will notice his work helped inspire the visual language I used in my implementation. This is how great things are built; rarely from scratch, but more commonly on the shoulders and brilliance of those who came before you.

My Github repo (at time of writing) has 13 stars and is part of a graph with 19 forks. Similarly, I’ve had colleagues fork my repo and take it further, adding improvements and refinements as they go (see my colleague Hayden’s work for one promising effort). I’ve had great response on Twitter from developers and users who love the add-on and who can’t wait to share their ideas and thoughts. It’s awesome to see ideas take shape and grow so organically like this. This is collaboration.

I’ve been using Side Tabs full-time in my default browser (Firefox Nightly) for 5 or 6 months now, and I’ve learned a ton. Aside from now preferring a horizontal layout (made possible by stacking tabs vertically) on a screen pressed for vertical space, I’ve discovered a use case that I never would have imagined had I simply mocked this idea up in Photoshop.

I use productivity tools heavily, from calendars to to-do lists and beyond. One common scenario is this: I click on a link, and it’s something I find interesting or valuable, but I don’t want to address it right now. I’ve experimented with Pocket (I still use this for longer form writing I wish to read later) but find that most of my Read Later lists are Should-but-Never-Actually-Read-Later lists. Out of sight, out of mind, right?

Saving for Later

The Firefox UX Team has actually done some great research on Save for Later behaviour. There is a great blog post here as well as a more detailed research summary here.

By quite a pleasant surprise, the vertical layout of Side Tabs surfaced a solution to me. I found myself appropriating my tab list into a priority stack, always giving my focus to the tab at the bottom of the list. When I open something I want to keep around, I simply drag it to a spot in the list based on its relative importance: right to the top for ‘Someday’ items, 2nd or 3rd from the bottom if I want to take a peek once I’m done with my task at hand (which is always the bottom tab). I’ve even moved to having two Firefox windows open, which essentially gives me two separate task lists: one for personal tabs/to-dos and one for work.

 

So where does this leave us? Quite clearly, it’s shown the immediate value of investing in an interactive prototype versus using only static mockups: people can use the design, see how it works, and in this case, expose a usage pattern I hadn’t seen before. The most common argument against prototyping is the cost involved (time, chiefly), and in my experience the value of building your designs (to varying levels of fidelity, based on the project/hypothesis) always outweighs the cost. Building the design sheds light on design problems and solutions that traditional, static mockups often fail to illuminate.

With regards to Side Tabs itself, I learned that in some cases, users treat their tabs as tasks to accomplish, and when a task is completed, its tab is closed. Increasingly, our work and personal tasks exist online (email, banking, shopping, etc.) and are accessed through the browser. Some tasks (tabs) have higher priority or urgency than others, and whether visible or not, there is an implicit order in which a user will attend to their tabs. Helping users better organize their tabs made using the browser a more productive, delightful experience. And anything that can make an experience more delightful or useful is of great value and importance not only to the product or team I work with, but also to me as a designer.

Get the Add-on | Side Tabs on Github

Side Tabs was built in Javascript, CSS and HTML. Knowing how to code, prototype and build the designs I create has been the advantage that has allowed me to excel as a UX designer.


Categorieën: Mozilla-nl planet

Christian Heilmann: Maker Party 2014 – go and show off the web to the next makers

di, 15/07/2014 - 17:56

Today is the start of this year’s Maker Party campaign. What’s that? Check out this video.


Maker Party is Mozilla’s annual campaign to teach the culture, mechanics and citizenship of the web through thousands of community-run events around the world from July 15-September 15, 2014.

This week, Maker Party events in places like Uganda, Taiwan, San Francisco and Mauritius mark the start of tens of thousands of educators, organizations and enthusiastic web users just like you getting together to teach and learn the web.

You can join Maker Party by finding an event in your area and learning more about how the web works in a fun, hands-on way with others. Events are open to everyone regardless of skill level, and almost all are free! Oh, and there will be kickoff events in all the Mozspaces this Thursday—join in!

No events in your area? Why not host one of your own? Maker Party Resources provides all the information you need to successfully throw an event of any size, from 50+ participants in a library or hackerspace to just you and your little sister sitting on the living room sofa.

Go teach the web, believe me, it is fun!

Categorieën: Mozilla-nl planet

Frédéric Harper: One year at Mozilla

di, 15/07/2014 - 17:19

Proud to see my name on that monument (Click to see full size)

On July 15th last year, I started a new job at Mozilla: it was the beginning of a new journey. Today, it has been one year since I became a Mozillian, and I’m proud.

One year later

One year later, I’m still here. That means I like what I’m doing, my team, my manager, and the company. It has been an interesting and amazing year. I always say that my job is to give love to developers, and it’s true. I’m fortunate enough to have a job where I can share my passion with others and be paid to help them. During the last year, I spoke at 26 events (conferences, user groups…), sharing about technology and educating developers about open web apps and Firefox OS. I’ve helped many developers fix their bugs, create their applications, provide a better experience to their users, solve the issues they had, and, even more importantly, be successful on the platform.

I’ve always been energized by the fact that, for me, the line between working and having fun is really thin, but the volunteers I meet stoke me even more. The passion, the energy, and the time they give to Mozilla - or should I say, to a better, open Web, and to helping people take ownership of that web - is astonishing. I will always remember the events I’ve done with them! There is no way you can’t be pumped up for your work when you see those people giving their time and dedicating themselves 100% to the mission like that. To all Mozillians, I salute you; thanks for being part of my life!

I can’t write a post about my first year at Mozilla without talking about the travel: I’ve been on the road for 104 days in 15 cities (Toronto, Krakow, San Jose, Brussels, Guadalajara, Budapest, Athens, San Francisco, Mountain View, Barcelona, Paris, Prague, London, Bangalore, and Mumbai) across 12 countries (Canada, Poland, USA, Belgium, Mexico, Hungary, Greece, Spain, France, Czech Republic, UK, and India). For someone who likes to discover new cities, cultures, foods, and more, travelling for work is an amazing bonus.

I’ve been a Technical Evangelist for three and a half years now. I haven’t been in this role for a decade, but it’s not something new for me; I have some experience. Still, I learned a lot in the last year, and that’s perfect, as I’m the kind of person who thinks we should never stop learning and improving ourselves. For now, I would not want to be in any other position…

Mozilla is a strange beast

The biggest learning curve for me was about the organization, or should I say, the company. Mozilla is a particular beast, a strange one. As far as I know, no other company can be compared to Mozilla: it’s unique. No one can be against Mozilla’s mission, and all Mozillians move forward together to make the web even more open. We are working on amazing projects that have changed, and will continue to change, the world. We are a bunch of passionate people who believe in what we do, and for any enterprise, that’s a definite asset. We can go and do what others are afraid to do, as we are not there to make money (even if we need money to survive). It’s amazing what all of Mozilla together can accomplish.

On the other side, Mozilla is cannibalizing itself. We are getting bigger and bigger, but we are not always well organized. Because of the nature of Mozilla, everybody has, and wants to give, an opinion, and some people tend to forget that it’s their job. The industry has higher expectations of us. We are pro open source and open web, but we are not always pragmatic. We need volunteers to be successful, but we tend to accept everybody when we should aim for quality instead of quantity. At the same time, we have so many projects we are working on: it’s not just about Firefox or Firefox OS, my friends. Don’t get me wrong, I’m not complaining, as I love Mozilla. I guess it’s part of my reflection on the last year of my professional life. We are getting better at organizing ourselves, and I hope it will continue that way, as I want Mozilla to be the protector of the web for many more years to come!

Today is the first day of my next year at Mozilla, and I’m looking forward to many more!

 


--
One year at Mozilla is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. I’m joining Mozilla It was a bold move to leave Microsoft without knowing...
  2. Three months as a Mozillian I wrote a post after my initial week, and after...
  3. One year at the evil empire Before the end of the year, on the December 31,...
Categorieën: Mozilla-nl planet

Andreas Gal: Improving JPEG image encoding

di, 15/07/2014 - 16:59

Images are a big proportion of the data that browsers load when displaying a website, so better image compression goes a long way towards displaying content faster. Over the last few years there has been debate about whether a new image format is needed to provide better compression than the ubiquitous JPEG.

We published a study last year which compares JPEG with a number of more recent image formats, including WebP. Since then, we have expanded and updated that study. We did not find that WebP or any other royalty-free format we tested offers sufficient improvements over JPEG to justify the high maintenance cost of adding a new image format to the Web.

As an alternative, we recently started an effort to improve the state of the art in JPEG encoding. Today our research team released version 2.0 of this enhanced JPEG encoder, mozjpeg. mozjpeg reduces the size of both baseline and progressive JPEGs by 5% on average, with many images showing significantly larger reductions.

Facebook announced today that it is testing mozjpeg 2.0 to improve the compression of images on facebook.com. It has also donated $60,000 to support the ongoing development of the technology, including the next iteration, mozjpeg 3.0.

“Facebook supports the work Mozilla has done in building a JPEG encoder that can create smaller JPEGs without compromising the visual quality of photos,” said Stacy Kerkela, software engineering manager at Facebook. “We look forward to seeing the potential benefits mozjpeg 2.0 might bring in optimizing images and creating an improved experience for people to share and connect on Facebook.”

mozjpeg improves image encoding while maintaining full backwards compatibility with existing JPEG decoders. This is very significant because any browser can immediately benefit from these improvements without having to adopt new image formats, such as WebP.
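For the curious, mozjpeg is built as a drop-in replacement for libjpeg-turbo, so recompressing an image with its cjpeg tool looks roughly like this (a sketch; the quality setting and filenames are illustrative, not from the post):

    # Encode a PPM source at quality 75; mozjpeg's cjpeg emits a
    # progressive JPEG by default.
    cjpeg -quality 75 -outfile photo.jpg photo.ppm

    # Any existing JPEG decoder can read the result, e.g. djpeg:
    djpeg photo.jpg > roundtrip.ppm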

The JPEG format continues to evolve along with the Web, and mozjpeg 2.0 will make it easier than ever for users to enjoy those images. Check out the Mozilla Research blog post for all the details.


Filed under: Mozilla
Categorieën: Mozilla-nl planet
