Mozilla Nederland
The Dutch Mozilla community

Mozilla Science Lab: Bullying & Imposter Phenomenon: the Fraught Process of Learning to Code in the Lab

Mozilla planet - Thu, 19/03/2015 - 16:00

I’ve been speaking and writing for some time now on the importance of communication and collaboration in the scientific community. We have to stop reinventing wheels, and we can’t expect to learn the skills in coding and data management we need from attending one workshop alone; we need to establish customs and venues for the free exchange of ideas, and for practicing the new skills we are trying to become fluent in. Normally, these comments are founded on values of efficiency and sound learning strategies. But lately, I find myself reaching the same conclusions from a different starting point: the vital need for a reality check on how we treat the towering challenge of learning to code.

Everywhere I go, I meet students who tell me the same story: they are terrified of asking for help, or of admitting they don’t know something to the other members of their research group – and the more computationally intensive the field, the more intense this aversion becomes around coding. Fears include: “What if my supervisor is disappointed that I asked such a ‘trivial’ question?”; “What if the other students lose respect for me if I admit I don’t know something?”; and, perhaps most disheartening of all, “What if this means I am not cut out for my field?”

If this is what our students are thinking – what have we done, and where has this come from?

There can be, at times, a toxic machismo that creeps into any technical field: a vicious cycle begins when, fearing that admitting ‘ignorance’ will lead to disrepute (perhaps even to lost grants and lost promotions), we dismiss the challenges faced by others, and instill in our colleagues the same fear of admitting they don’t know everything. The arrogant colleague who treats as trivial every problem they don’t have to solve themselves has risen to the level of departmental trope, and it is beginning to cost us in new blood. I remember once working on a piece of code for weeks, for which I received neither feedback nor advice, but only the admonition ‘you should have been able to write that in five minutes’. Should I have? By what divine inspiration, genetic memory, or deal with the devil would I, savant-like, channel a complex and novel algorithm in three hundred seconds, as a new graduate student with absolutely no training in programming?

That rebuke was absurd – and for those less pathologically insensitive than I am, such rebukes are devastating as they accrue, year after year.

Even in the absence of such bullying, we have made things doubly bleak for the new coder. The computing demands in almost every field of research are skyrocketing, while the extent to which we train our students in computing continues to stagnate. Think of the signal this sends: computing skills are, apparently, beneath contempt – not even worth speaking of, and so trivial as to be not worth training for. And yet they are so central to a growing number of fields as to be indispensable. Is it any wonder, then, that so many students and early career researchers feel alienated and isolated in their fields, and doubt themselves for being hobbled in their work when they ‘fail’ to miraculously intuit the skills their establishment has signaled should be obvious?

A couple of years ago, Angelina Fabbro, my friend and mentor as well as a noted web developer, wrote a brilliant article on Imposter Phenomenon (aka Imposter Syndrome), which they define as ‘the experience of feeling like a fraud (or impostor) while participating in communities of highly skilled participants even when you are of a level of competence to match those around you.’ I strongly recommend reading this article, because even though it was written with the tech world in mind, it is one hundred percent applicable to the experience of legions of academics making careers in the current age of adolescence in research coding. The behavior and effects I describe above have contributed to an epidemic of imposter phenomenon in the sciences, particularly surrounding coding and digital acumen, and particularly in students. That fear is keeping us in our little silos, making us terrified to come out, share our work, and move forward together; I consider that fear to be one of the biggest obstacles to open science. Also from Fabbro’s article:

‘In the end I wasn’t shocked that the successful people I admired had experienced impostor phenomenon and talked to me about it — I was shocked that I somehow thought the people I see as heroes were somehow exempt from having it… We’re all just doing the best we know how to when it comes to programming, it’s just that some people have more practice coming across as confident than others do. Never mistake confidence for competence, though.’ – Angelina Fabbro

So What Are We Going To Do About It?

The cultural and curricular change around coding for research that ultimately needs to happen to cut to the root of these problems will be, like all institutional change, slow. But what we can do, right now, is start making spaces at our local universities and labs where students and researchers can get together, struggle with problems, ask each other questions and work together on code in casual, no-bullying, no-stakes safe spaces that welcome beginners and where no question is too basic. These are the Study Groups, Users’ Groups and Hacky Hours I’ve been talking about, and addressing the problems I described above is the other dimension, beyond simple technical skill building, of why they are so important. In my travels, I’ve stumbled across a few; here’s a map:

Study Groups & Hacky Hours

Please, if you’re running a meetup group or something similar for researchers writing code, let me know (bill@mozillafoundation.org) – I’d love to add you to the map and invite you to tell your story here on this blog (see Kathi Unglert’s guest post for a great example). Also, if you’re curious about the idea of small, locally driven study groups, my colleague Noam Ross has assembled a panel for an Ask Me Anything event on the Mozilla Science Forum, kicking off at 6 PM EDT on Tuesday, March 24. Panelists from several different meetup groups will be available to answer your questions on this thread from 6-8 PM EDT; more details are on the blog. Don’t forget to also check out our growing collection of lessons and links to curriculum ideas for a study group meetup, if you’d like some material to try working through.

There are tons of ways to do a good meetup – but to start, see if you can get a couple of people you know and trust to hang out once or twice a month, work on some code, and acknowledge that you’re all still learning together. If you can create a space like that, a whole lot of the anxiety and isolation around learning to code for research will fall away, and more people will soon want to join; I’d love to hear your stories, and I hope you’ll join us for the AMA on the 24th.


Monica Chew: How do I turn on Tracking Protection? Let me count the ways.

Mozilla planet - Thu, 19/03/2015 - 15:58

I get this question a lot from various people, so it deserves its own post. Here's how to turn on Tracking Protection in Firefox to avoid connecting to known tracking domains from Disconnect's blocklist:
  1. Visit about:config and turn on privacy.trackingprotection.enabled. Because this works in Firefox 35 or later, this is my favorite method (see the user.js sketch after this list). In Firefox 37 and later, it also works on Fennec.
  2. On Fennec Nightly, visit Settings > Privacy and select the checkbox "Tracking Protection".
  3. Install Lightbeam and toggle the "Tracking Protection" button in the top-right corner. Check out the difference in visiting only 2 sites with Tracking Protection on and off!
  4. On Firefox Nightly, visit about:config and turn on browser.polaris.enabled. This will enable privacy.trackingprotection.enabled and also show a checkbox for it in about:preferences#privacy, similar to the Fennec setting described above. Because this only works in Nightly and also requires visiting about:config, it's my least favorite option.
  5. Do any of the above and sign into Firefox Sync. Tracking Protection will be enabled on all of your desktop profiles!
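If you want option 1 to persist explicitly in a given profile, the same pref can also go in that profile's user.js file. This is a minimal sketch based on the pref named above, not something from the original post:

  // user.js in your Firefox profile directory
  // turns Tracking Protection (against Disconnect's blocklist) on at every startup
  user_pref("privacy.trackingprotection.enabled", true);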

Ben Hearsum: Release Automation Futures: Seamless integration of manual and automated steps

Mozilla planet - Thu, 19/03/2015 - 15:30

I've written about the history of our Release Automation systems in the past. We've gone from mostly manual releases to almost completely automated since I joined Mozilla. One thing I haven't talked about before is Ship It - our web tool for kicking off releases:

[Screenshot: the Ship It web interface]
It may be ugly, but having it has meant that we don't have to log on to a single machine to ship a release. A release engineer doesn't even need to be around to start the release process - Release Management has direct access to Ship It to do it themselves. We're only needed to push releases live, and that's something we'd like to fix as well. We're looking at tackling that and other ancillary issues of releases, such as:

  • Improving and expanding validation of release automation inputs (revisions, branches, locales, etc.)
  • Scripting the publishing of Fennec to Google Play
  • Giving Release Managers more direct control over updates
  • Updating metadata (ship dates, versions, locales) about releases
  • Improving security with better authentication (eg, HSMs or other secondary tokens) and authorization (eg, requiring multiple people to push updates)

Rail and I had a brainstorming session about this yesterday and a theme that kept coming up was that most of the things we want to improve are on the edges of release automation: they happen either before the current automation starts, or after the current automation ends. Everything in this list also needs someone to decide that it needs to happen -- our automation can't make the decision about what revision a release should be built with or when to push it to Google Play - it only knows how to do those things after being told that it should. These points where we jump back and forth between humans and automation are a big rough edge for us right now. The way they're implemented currently is very situation-specific, which means that adding new points of human-automation interaction is slow and full of uncertainty. This is something we need to fix in order to continue to ship as fast and effectively as we do.

We think we've come up with a new design that will enable us to deal with all of the current human-automation interactions and any that come up in the future. It consists of three key components:

Workflows

A workflow is a DAG (directed acyclic graph) that represents an entire release process. It consists of human steps, automation steps, and potentially other types. An important point about workflows is that they aren't necessarily the same for every release. A Firefox Beta's workflow is different from a Fennec Beta's or a Firefox Release's. The workflow for a Firefox Beta today may look very different from one a few months from now. The details of a workflow are explicitly not baked into the system - they are part of the data that feeds it. Each node in the DAG will have upstreams, downstreams, and perhaps a list of notifications. The tooling around the workflow will respond to changes in state of each node and determine what can happen next. Much of each workflow will end up being the existing graph of Buildbot builders (eg: this graph of Firefox Beta jobs).
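To make that concrete, here is one way a node in such a workflow could be expressed as data. This is my own illustrative sketch; the field names are assumptions, not Ship It's actual schema:

  // hypothetical workflow node - names and fields are illustrative only
  var signBuilds = {
    name: 'sign-builds',
    type: 'automation',                          // vs. 'human' for manual steps
    upstreams: ['en-US-build', 'l10n-repacks'],  // must finish before this runs
    downstreams: ['update-generation'],          // unblocked when this finishes
    notifications: ['release-signoff@example.com']
  };

Because the graph is data rather than code, a Firefox Beta and a Fennec Beta can simply feed different graphs into the same tooling.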

We're hoping to use existing software for this part. We've looked at Amazon's Simple Workflow Service already, but it doesn't support any dependencies between nodes, so we're not sure if it's going to fit the bill. We're also looking at Taskcluster, which does do dependency management. If anyone knows of anything else that might be useful here, please let me know!

Ship It

As well as continuing to provide a human interface, Ship It will be the API between the workflow tool and humans/automation. When new nodes become ready it makes that information available to automation, or gives humans the option to enact them (depending on node type). It also receives state changes of nodes from automation (eg, build completion events). Ship It may also be given the responsibility of enforcing user ACLs.

Release Runner

Release Runner is the binding between Ship It and the backend parts of the automation. When Ship It is showing automation events ready to start, it will poke the right systems to make them go. When those jobs complete, it will send that information back to Ship It.

This will likely be getting a better name.

This design still needs some more thought and review, but we're very excited to be moving towards a world where humans and machines can integrate more seamlessly to get you the latest Firefox hotness more quickly and securely.


Firefox 35 for download: Mozilla browser can chat - T-Online

News collected via Google - Thu, 19/03/2015 - 14:36

From T-Online: Firefox 35 has seen hardly any changes to its interface; instead it has mainly been improved technically – the new Mozilla browser has been made even faster and more secure. A total of nine security holes are closed. For the ...


Andy McKay: Cutting the cord

Mozilla planet - Thu, 19/03/2015 - 08:00

The CRTC goes on and on about plans for TV, cable and unbundling and all that. Articles are written about how to watch all the things without paying for cable. Few discuss the main point: watching less television and fewer movies.

Television as shown on cable is probably the worst thing in the world. You spend a large amount of money to receive the channels. Then you spend a large portion of your viewing time watching adverts. The limit on adverts in Canada is 12% on specialty channels, not including promotion of their own shows.

Advertising is aspirational and as such depressing. It spends all its time telling you things you should buy, things you should be doing, things you should be spending your money on, and if you do all that... there's just more advertising to get you wanting different things.

It's even worse in the US, where Americans spend on average over four and a half hours a day watching television. How that is even possible, I don't know. Of that, 17 to 18 percent is adverts. That means they watch somewhere around 46 minutes of adverts a day.

So you pay for advertising. Why would you do that? That is terrible.

Netflix does not have any adverts. If you need to watch more than Netflix, multiple other sources exist, e.g. ctv.ca. If you need to watch more than that?

Just go do something else. Please go do something else. Read a book, meet your neighbours, play a game with friends, take up a sport... anything but watching that much television and adverts.

I can only hope that cable television dies off, because it's the worst thing ever.


Frederic Wenzel: Updating Adobe Flash Without Restarting Firefox

Mozilla planet - Thu, 19/03/2015 - 08:00

No reason for a Flash upgrade to shut down your entire browser, even if it claims so.

It's 2015, and the love-hate relationship of the Web with Flash has not quite ended yet, though we're getting there. Click-to-play in Firefox makes sure most websites can't run Flash willy-nilly anymore, but most people, myself included, still have it installed, so keeping Flash up-to-date with its frequently necessary security updates is a process well-known to users.

Sadly, the Adobe Flash updater has the nasty habit of asking you to shut down Firefox entirely, or it won't install the update:

[Screenshot: the Adobe updater asking you to close Firefox]

If you're anything like me, you have dozens of tabs open, half-read articles and a few draft emails open for good measure, and if there's one thing you don't want to do right now, it's restart your browser.

Fret not, the updater is lying.

Firefox runs the Flash plugin in an out-of-process plugin container, which is tech talk for: separately from your main Firefox browser.

Sure enough, in a command line window, I can search for a running instance of an application called plugin-container:

[Screenshot: the plugin-container process in a process listing]

Looks complicated, but tells me that Firefox Nightly is running a plugin container with process ID 7602.

Ka-boom

The neat thing is that we can kill that process without taking down the whole browser:

killall plugin-container

Note: killall is the sawed-off shotgun of process management. It'll close any process by the name you hand to it, so use with caution. For a lot more fine-grained control, find the process ID (in the picture above: 7602, but it'll be different for your computer) and then use the kill command on only that process ID (e.g., kill 7602).
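If you want the more surgical route described in the note above, the two-step version looks like this (a sketch; the PID will be different on your machine):

  # list matching processes along with their PIDs
  pgrep -lf plugin-container
  # then kill only the PID you found, e.g.:
  kill 7602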

This will, of course, stop all the Flash instances you might have running in your browser right now, so don't do it right in the middle of watching a movie on a Flash video site (note: YouTube does not use Flash by default anymore).

Now hit Retry in the Adobe Updater window and sure enough, it'll install the update without requiring you to close your entire browser.

Aaand we're done.

If you were in fact using Flash at the time of the update, you might see this in the browser when you're done:

[Screenshot: Firefox notifying you that the Flash plugin has crashed]

You can just reload the page to restart Flash.

Why won't Adobe do that for you, instead of asking you to close your browser entirely? I don't know. But if the agony of closing your entire browser outweighs the effort of a little command-line magic, now you know how to help yourself.

Hack on, friends.


Cameron Kaiser: IonPower now beyond "doesn't suck" stage

Mozilla planet - Thu, 19/03/2015 - 04:45
Very pleased! IonPower (in Baseline mode) made it through the entire JavaScript JIT test suite tonight (which includes SunSpider and V8, but a lot else besides) without crashing or asserting. It doesn't pass everything yet, so we're not quite to phase 4, but the failures appear to group around some similar areas of code, which suggests a common bug, and one of the failures is actually due to that particular test monkeying with JIT options we don't yet support (but will). Getting closer!

TenFourFox 31.6 is on schedule for March 31.


Monica Chew: Tracking Protection talk on Air Mozilla

Mozilla planet - Thu, 19/03/2015 - 01:46
In August 2014, Georgios Kontaxis and I gave a talk on the implementation status of tracking protection in Firefox. At the time the talk was Mozillians-only, but now it is public! Please visit Air Mozilla to view the talk, or see the slides below. The implementation status has not changed very much since last August, so most of the information is still pretty accurate.

James Long: Backend Apps with Webpack, Part II: Driving with Gulp

Mozilla planet - Thu, 19/03/2015 - 01:00

In Part I of this series, we configured webpack for building backend apps. With a few simple tweaks, like leaving all dependencies from node_modules alone, we can leverage webpack's powerful infrastructure for backend modules and reuse the same system for the frontend. It's a relief to not maintain two separate build systems.

This series is targeted towards people already using webpack for the frontend. You may find babel's require hook fine for the backend, which is great. You might want to run files through multiple loaders, however, or share code between frontend and backend. Most importantly, you want to use hot module replacement. This is an experiment to reuse webpack for all of that.

In this post we are going to look at more fine-grained control over webpack, and how to manage both frontend and backend code at the same time. We are going to use gulp to drive webpack. This should be a usable setup for a real app.

Some of the responses to Part I criticized webpack as too complicated and not standards-compliant, saying we should be moving to jspm and SystemJS. SystemJS is a runtime module loader based on the ES6 specification. The people behind jspm are doing fantastic work, but all I can say is that they don't have many features that webpack users love. A simple example is hot module replacement. I'm sure in the years to come something like webpack will emerge based on the loader specification, and I'll gladly switch to it.

The most important thing is that we start writing ES6 modules. This affects the community a whole lot more than loaders, and luckily it's very simple to do with webpack. You need to use a compiler like Babel that supports modules, which you really want to do anyway to get all the good ES6 features. These compilers will turn ES6 modules into require statements, which can be processed with webpack.

I converted the backend-with-webpack repo to use the Babel loader and ES6 modules in the part1-es6 branch, and I will continue to use ES6 modules from here on.

Gulp

Gulp is a nice task runner that makes it simple to automate anything. Even though we aren't using it to transform or bundle modules, it's still useful as a "master controller" to drive webpack, test runners, and anything else you might need to do.

If you are going to use webpack for both frontend and backend code, you will need two separate configuration files. You could manually specify the desired config with --config, and run two separate watchers, but that gets redundant quickly. It's annoying to have two separate processes in two different terminals.

Webpack actually supports multiple configurations. Instead of exporting a single one, you export an array of them and it will run multiple processes for you. I still prefer using gulp instead because you might not want to always run both at the same time.
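For reference, the array form is just this (a sketch; frontendConfig and backendConfig stand in for the two configs developed below):

  // webpack.config.js - exporting an array makes webpack build every config
  module.exports = [frontendConfig, backendConfig];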

We need to convert our webpack usage to use the API instead of the CLI, and make a gulp task for it. Let's start by converting our existing config file into a gulp task.

The only difference is instead of exporting the config, you pass it to the webpack API. The gulpfile will look like this:

var gulp = require('gulp');
var webpack = require('webpack');

var config = { ... };

gulp.task('build-backend', function(done) {
  webpack(config).run(function(err, stats) {
    if(err) {
      console.log('Error', err);
    }
    else {
      console.log(stats.toString());
    }
    done();
  });
});

You can pass a config to the webpack function and you get back a compiler. You can call run or watch on the compiler, so if you wanted to make a build-watch task which automatically recompiles modules on change, you would call watch instead of run.

Our gulpfile is getting too big to show all of it here, but you can check out the new gulpfile.js which is a straight conversion of our old webpack.config.js. Note that we added a babel loader so we can write ES6 module syntax.

Multiple Webpack Configs

Now we're ready to roll! We can create another task for building frontend code, and simply provide a different webpack configuration. But we don't want to manage two completely separate configurations, since there are common properties between them.

What I like to do is create a base config and have others extend from it. Let's start with this:

var DeepMerge = require('deep-merge');

var deepmerge = DeepMerge(function(target, source, key) {
  if(target instanceof Array) {
    return [].concat(target, source);
  }
  return source;
});

// generic
var defaultConfig = {
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loaders: ['babel'] },
    ]
  }
};

if(process.env.NODE_ENV !== 'production') {
  defaultConfig.devtool = 'source-map';
  defaultConfig.debug = true;
}

function config(overrides) {
  return deepmerge(defaultConfig, overrides || {});
}

We create a deep merging function for recursively merging objects, which allows us to override the default config, and we provide a function config for generating configs based off of it.

Note that you can turn on production mode by running the gulp task with NODE_ENV=production prefixed to it. If so, sourcemaps are not generated and you could add plugins for minifying code.
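For example, with the build task defined later in this post, a production build would look like:

  NODE_ENV=production gulp build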

Now we can create a frontend config:

var frontendConfig = config({
  entry: './static/js/main.js',
  output: {
    path: path.join(__dirname, 'static/build'),
    filename: 'frontend.js'
  }
});

This makes static/js/main.js the entry point and bundles everything together at static/build/frontend.js.

Our backend config uses the same technique: customizing the config to be backend-specific. I don't think it's worth pasting here, but you can look at it on github. Now we have two tasks:

function onBuild(done) {
  return function(err, stats) {
    if(err) {
      console.log('Error', err);
    }
    else {
      console.log(stats.toString());
    }

    if(done) {
      done();
    }
  }
}

gulp.task('frontend-build', function(done) {
  webpack(frontendConfig).run(onBuild(done));
});

gulp.task('backend-build', function(done) {
  webpack(backendConfig).run(onBuild(done));
});

In fact, you could go crazy and provide several different interactions:

gulp.task('frontend-build', function(done) {
  webpack(frontendConfig).run(onBuild(done));
});

gulp.task('frontend-watch', function() {
  webpack(frontendConfig).watch(100, onBuild());
});

gulp.task('backend-build', function(done) {
  webpack(backendConfig).run(onBuild(done));
});

gulp.task('backend-watch', function() {
  webpack(backendConfig).watch(100, onBuild());
});

gulp.task('build', ['frontend-build', 'backend-build']);
gulp.task('watch', ['frontend-watch', 'backend-watch']);

watch takes a delay as the first argument, so any changes within 100ms will only fire one rebuild.

You would typically run gulp watch to watch the entire codebase for changes, but you could just build or watch a specific piece if you wanted.

Nodemon

Nodemon is a nice process management tool for development. It starts a process for you and provides APIs to restart it. The goal of nodemon is to watch file changes and restart automatically, but we are only interested in manual restarts.

After installing with npm install nodemon and adding var nodemon = require('nodemon') to the top of the gulpfile, we can create a run task which executes the compiled backend file:

gulp.task('run', ['backend-watch', 'frontend-watch'], function() {
  nodemon({
    execMap: {
      js: 'node'
    },
    script: path.join(__dirname, 'build/backend'),
    ignore: ['*'],
    watch: ['foo/'],
    ext: 'noop'
  }).on('restart', function() {
    console.log('Restarted!');
  });
});

This task also specifies dependencies on the backend-watch and frontend-watch tasks, so the watchers are automatically fired up and code will recompile on change.

The execMap and script options specify how to actually run the program. The rest of the options are for nodemon's watcher, and we actually don't want it to watch anything. That's why ignore is *, watch is a non-existent directory, and ext is a non-existent file extension. Initially I only used the ext option, but I ran into performance problems because nodemon was still watching everything in my project.

So how does our program actually restart on change? Calling nodemon.restart() does the trick, and we can do this within the backend-watch task:

gulp.task('backend-watch', function() {
  webpack(backendConfig).watch(100, function(err, stats) {
    onBuild()(err, stats);
    nodemon.restart();
  });
});

Now, when running backend-watch, if you change a file it will be rebuilt and the process will automatically restart.

Our gulpfile is complete. After all this work, you just need to run this to start everything:

gulp run

As you code, everything will automatically be rebuilt and the server will restart. Hooray!

A Few Tips

Better Performance

If you are using sourcemaps, you will notice compilation performance degrades the more files you have, even with incremental compilation (using watchers). This happens because webpack has to regenerate the entire sourcemap of the generated file even if a single module changes. This can be fixed by changing the devtool from source-map to #eval-source-map:

config.devtool = '#eval-source-map';

This tells webpack to process source-maps individually for each module, which it achieves by eval-ing each module at runtime with its own sourcemap. Prefixing it with # tells it to use the //# comment style instead of the older //@ style.

Node Variables

I mentioned this in Part I, but some people missed it. Node defines variables like __dirname which are different for each module. This is a downside to using webpack, because we no longer have the node context for these variables, and webpack needs to fill them in.

Webpack has a workable solution, though. You can tell it how to treat these variables with the node configuration entry. You most likely want to set __dirname and __filename to true, which will keep their real values. They default to "mock", which gives them dummy values (meant for browser environments).
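In the config, that's just a few extra lines (a sketch of the node entry described above, merged into the backend config):

  // keep Node's real __dirname/__filename semantics in bundled modules
  node: {
    __dirname: true,
    __filename: true
  }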

Until Next Time

Our setup is now capable of building a large, complex app. If you want to share code between the frontend and backend, it's easy to do since both sides use the same infrastructure. We get the same incremental compilation on both sides, and with the #eval-source-map setting, even with a large number of files, modules are rebuilt in under 200ms.

I encourage you to modify this gulpfile to your heart's content. The great thing about webpack and gulp is that it's easy to customize them to your needs, so go wild.

These posts have been building towards the final act. We are now ready to take advantage of the most significant gain of this infrastructure: hot module replacement. React users have enjoyed this via react-hot-loader, and now that we have access to it on the backend, we can live edit backend apps. Part III will show you how to do this.

Thanks to Dan Abramov for reviewing this post.


Gen Kanai: Analyse Asia – The Firefox Browser & Mobile OS with Gen Kanai

Mozilla planet - Thu, 19/03/2015 - 00:53

I had the pleasure to sit down with Bernard Leong, host of the Analyse Asia podcast, after my keynote presentation at FOSSASIA 2015. Please enjoy our discussion on Firefox, Firefox OS in Asia and other related topics.

Analyse Asia with Bernard Leong, Episode 22: The Firefox Browser & Mobile OS with Gen Kanai

 


Air Mozilla: Kid's Vision - Mentorship Series

Mozilla planet - Wed, 18/03/2015 - 23:00

Kid's Vision - Mentorship Series: Mozilla hosts the Kids Vision Bay Area Mentor Series.


Air Mozilla: Quality Team (QA) Public Meeting

Mozilla planet - Wed, 18/03/2015 - 21:30

Quality Team (QA) Public Meeting: This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...


Find out more about crashes in Mozilla Firefox - Ghacks Technology News

News collected via Google - Wed, 18/03/2015 - 21:02

From Ghacks Technology News: It is often not clear what caused a web browser to crash. While you sometimes get a good clue by analyzing what you did last – say, you opened a link that supposedly crashes Firefox – it is often not clear why Firefox crashed at that point in time ...


QMO: Design a T-shirt for Mozilla QA

Mozilla planet - Wed, 18/03/2015 - 20:57

Attention visual designers! Are you interested in contributing to Mozilla QA? Design a T-shirt for Mozilla’s Quality Assurance (QA) team!

At Mozilla QA, we work with our community members to drive software quality assurance activities across Mozilla. We play a key role in releasing a diverse range of software products on schedule.

The Mozilla QA team at a Work Week in July 2014. Photo credit: Mozilla.

Along with jeans and sneakers, T-shirts are very much a part of a Mozillian’s wardrobe. Not only are they light and comfortable to wear, T-shirts also serve as a medium of personal expression.

At Mozilla, we have a special love affair with T-shirts. The walls of Mozilla’s headquarters in Mountain View are framed with T-shirts from the early Netscape days to recent times. T-shirts mark important milestones in the organization’s history, advertise its mission statement, and celebrate important anniversaries. They also announce new product launches, express team solidarity, and help Mozillians recognize each other in public places.

Are you interested in creating a T-shirt for Mozilla QA? You need not be a professional designer – we’d love to have you try your hand at creativity and submit your design. You would be part of an awesome worldwide community and your work would be seen by a lot of people.

What we’re looking for
  • We want a T-shirt design that would convey the QA team’s unique identity and mission within Mozilla.
  • To submit designs, please upload your images to Flickr and tag them with mozqatshirt. (This part is very important – we won’t see your submission if it’s not tagged properly.)
  • Your images should be no larger than 20cm by 20cm (about 8in by 8in).
  • If your design is selected, you will be requested to submit vector files or high-quality raster images of minimum 300dpi.
  • Designs are due on Thursday, April 30, 2015, at 8:00 PM PST.
Design Guidelines

If you have questions, please ask in the comments below and we’ll respond as quickly as possible.

Existing T-shirts at Mozilla

Jamie Zawinski frees the lizard on March 31, 1998. Standing behind him are Bob Lisbonne, Engineering Manager, and Tara Hernandez, Release Engineer. On this day, Netscape released the Navigator 5.0 source code through mozilla.org. This iconic t-shirt was designed by Shepard Fairey. Photo credit: Code Rush.

The Argentine community at the Firefox 4 launch party in 2011. Photo credit: musingt.

The Mozilla Arabic community with Chairperson Mitchell Baker in Jordan in June 2011. Photo credit: nozom.

The Mozilla Philippines community with Chairperson Mitchell Baker at MozCamp Asia, Kuala Lumpur, in November 2011. Photo credit: Yofie Setiawan.

Firefox is “The browser with a mission” at the Wikimania 2012 conference in Washington DC. Photo credit: Benoit Rochon via Wikipedia.

At the Firefox OS launch in Poland in July 2013. Photo credit: Mozilla EU.

Carrying Mozilla’s mission statement “The Internet for everyone by everyone” to Campus Party Brazil 2014. Photo credit: Andre Garzia.

Mozillians cycle together on Bike To Work day in May 2011. Photo credit: Lukas Blakk.

The Thunderbird team gets together at the Toronto Summit in 2014. Photo credit: Kent James.

Celebrating the 10th anniversary of the Firefox browser in November 2014. Photo credit: Mozilla US.

Mozilla Reps at MozCamp India Beta in June 2014. Photo credit: Abhishek Potnis.

Spreading digital literacy in schools through the Firefox Student Ambassador program. Photo credit: Sruthi.

Having fun is always part of the Maker Party! Photo credit: Michelle Thorne.

Mozilla community members wearing different t-shirts that mesh well together at the Campus Party Brazil 2014. Photo credit: Andre Garzia.


Air Mozilla: Product Coordination Meeting

Mozilla planet - Wed, 18/03/2015 - 19:00

Product Coordination Meeting: Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.


Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 6

Mozilla planet - Wed, 18/03/2015 - 18:00

The Joy of Coding (mconley livehacks on Firefox) - Episode 6: Watch mconley livehack on Firefox Desktop bugs!


Air Mozilla: MWoS 2014: OpenVPN MFA

Mozilla planet - Wed, 18/03/2015 - 17:43

OpenVPN MFA: The OpenVPN MFA MWoS team presents their work to add native multi-factor authentication to OpenVPN. The goal of this project is to improve the ease...


Justin Crawford: MDN Product Talk: The Case for Experiments

Mozilla planet - Wed, 18/03/2015 - 17:15

This is the third post in a series that is leading up to a roadmap of sorts — a set of experiments I will recommend to help MDN both deepen and widen its service to web developers.

The first post in the series introduced MDN as a product ecosystem. The second post in the series explained the strategic context for MDN’s product work. In this post I will explain how an experimental approach can help us going forward, and I’ll talk about the two very different kinds of experiments we’re undertaking in 2015.

In the context of product development, “experiments” is shorthand for market-validated learning — a product discovery and optimization activity advocated by practitioners of lean product development and many others. Product development teams run experiments by testing hypotheses against reality. The primary benefit of this methodology is eliminating wasted effort by building things that people demonstrably want.

Now my method, though hard to practise, is easy to explain; and it is this. I propose to establish progressive stages of certainty.

-Francis Bacon (introducing the scientific method in Novum Organum)

Product experiments all…

  • have a hypothesis
  • test the hypothesis by exposing it to reality (in other words, introducing it to the market)
  • deliver some insight to drive further product development

Here is a concrete example of how MDN will benefit from an experimental approach. We have an outstanding bug, “Kuma: Optional IRC Widget“, opened 3 years ago and discussed at great length on numerous occasions. This bug, like so many enhancement requests, is really a hypothesis in disguise: It asserts that MDN would attract and retain more contributors if they had an easier way to discuss contribution in realtime with other contributors.

That hypothesis is untested. We don’t have an easy way to discuss contribution in realtime now. In order to test the bug’s hypothesis we propose to integrate a 3rd-party IRC widget into a specific subset of MDN pages and measure the result. We will undoubtedly learn something from this experiment: We will learn something about the specific solution or something about the problem itself, or both.

Understanding the actual problem to be solved (and for whom) is a critical element of product experimentation. In this case, we do not assert that anonymous MDN visitors need a realtime chat feature, and we do not assert that MDN contributors specifically want to use IRC. We assert that contributors need a way to discuss and ask questions about contribution, and that by giving them such a facility we'll increase the quality of contribution. Providing this facility via an IRC widget is an implementation detail.

This experiment is an example of optimization. We already know that contribution is a critical factor in the quality of documentation in the documentation wiki. This is because we already understand the business model and key metrics of the documentation wiki. The MDN documentation wiki is a very successful product, and our focus going forward should be on improving and optimizing it. We can do that with experiments like the one above.

In order to optimize anything, though, we need better measurements than we have now. Here’s an illustration of the key components of MDN’s documentation wiki:

[Diagram: wiki metrics status] Visitors come to the wiki from search, by way of events, or through links in online social activity. If they create an account they become users and we notify them that they can contribute. If they follow up on that notification they become returners. If they contribute they become contributors. If they stop visiting they become disengaged users. Users can request content (in Bugzilla and elsewhere). Users can share content (manually).

All the red and orange shapes in the picture above represent things we’re measuring imperfectly or not at all. So we track the number of visitors and the number of users, but we don’t measure the rate by which visitors become users (or any other conversion rate). We measure the rates of content production and content consumption, but we don’t measure the helpfulness of content. And so forth.

If we wanted to add a feature to the wiki that might impact one of these numbers, how would we measure the before and after states? We couldn’t. If we wanted to choose between features that might affect these numbers, how would we decide which metric needed the most attention? We couldn’t. So in 2015 we must prioritize building enough measurements into the MDN platform that we can see what needs optimization and which optimizations make a difference. In particular, considering the size of content’s role in our ecosystem, we must prioritize features that help us better understand the impact of our content.

Once we have proper measurements, we have a huge backlog of optimization opportunities to consider for the documentation wiki. Experiments will help us prioritize them and implement them.

As we do so, we are also simultaneously engaged in a completely different kind of experimentation. Steve Blank describes the difference in his recent post, “Fear of Failure and Lack of Speed In a Large Corporation”. To paraphrase him: A successful product organization that has already found market fit (i.e. MDN’s documentation wiki) properly seeks to maximize the value of its existing fit — it optimizes. But a fledgling product organization has no idea what the fit is, and so properly seeks to discover it.

This second kind of experimentation is not for optimization, it is for discovery. MDN’s documentation wiki clearly solves a problem and there is clearly value in solving that problem, but MDN’s audience has plenty of problems to solve (and more on the way), and new audiences similar to MDN’s current audience have similar problems to solve, too. We can see far enough ahead to advance some broad hypotheses about solving these problems, and we now need to learn how accurate those hypotheses are.

Here is an illustration of the different kinds of product experiments we’re running in the context of the overall MDN ecosystem:

[Diagram: optimization vs. learning] The left bubble represents the existing documentation wiki: it's about content and contribution; we have a lot of great optimizations to build there. The right bubble represents our new product/market areas: we're exploring new products for web developers (so far, in services) and we're serving a new audience of web learners (so far, by adding some new areas to our existing product).

The right bubble is far less knowable than the left. We need to conduct experiments as quickly as possible to learn whether any particular service or teaching material resonates with its audience. Our experiments with new products and audiences will be more wide-ranging than our experiments to improve the wiki; they will also be measured in smaller numbers. These new initiatives have the possibility to grow into products as successful as the documentation wiki, but our focus in 2015 is to validate that these experiments can solve real problems for any part of MDN’s audience.

As Marty Cagan says, “Good [product] teams are skilled in the many techniques to rapidly try out product ideas to determine which ones are truly worth building.  Bad teams hold meetings to generate prioritized roadmaps.” On MDN we have an incredible opportunity to develop our product team by taking a more experimental approach to our work. Developing our product team will improve the quality of our products and help us serve more web developers better.

In an upcoming post I will talk about how our 2015 focus areas will help us meet the future. And of course I will talk about specific experiments soon, too.


Will Kahn-Greene: Input status: March 18th, 2015

Mozilla planet - Wed, 18/03/2015 - 17:00
Development

High-level summary:

  • new Alerts API
  • Heartbeat fixes
  • bunch of other minor fixes and updates

Thank you to contributors!:

  • L. Guruprasad: 6
  • Ricky Rosario: 2

Landed and deployed:

  • 73eaaf2 bug 1103045 Add create survey form (L. Guruprasad)
  • e712384 bug 1130765 Implement Alerts API
  • 6bc619e bug 1130765 Docs fixes for the alerts api
  • 1e1ca9a bug 1130765 Tweak error msg in case where auth header is missing
  • 067d6e8 bug 1130765 Add support for Fjord-Authorization header
  • 1f3bde0 bug 909011 Handle amqp-specific indexing errors
  • 3da2b2d Fix alerts_api requests examples
  • 601551d Cosmetic: Rename heartbeat/views.py to heartbeat/api_views.py
  • 8f3b8e8 bug 1136810 Fix UnboundLocalError for "showdata"
  • 1721758 Update help_text in api_auth initial migration
  • 473e900 Fix migration for fixing AlertFlavor.allowed_tokens
  • 2d3d05a bug 1136809 Fix (person, survey, flow) uniqueness issues
  • 3ce45ec Update schema migration docs regarding module-level docstrings
  • 2a91627 bug 1137430 Validate sortby values
  • 6e3961c Update setup docs for django 1.7. (Ricky Rosario)
  • 6739af7 bug 1136814 Update to Django 1.7.5
  • 334eed7 Tweak commit msg linter
  • ac35deb bug 1048462 Update some requirements to pinned versions. (Ricky Rosario)
  • 8284cfa Clarify that one token can GET/POST to multiple alert flavors
  • 7a60497 bug 1137839 Add start_time/end_time to alerts api
  • 7a21735 Fix flavor.slug tests and eliminate needless comparisons
  • 89dbb49 bug 1048462 Switch some github url reqs to pypi
  • e1b62b5 bug 1137839 Add start_time/end_time to AlertAdmin
  • 3668585 bug 1103045 Add update survey form (L. Guruprasad)
  • ab706c6 bug 1139510 Update selenium to 2.45
  • 6df753d Cosmetic: Minor cleanup of server error testing
  • 1dcaf62 Make throw_error csrf exempt
  • ceb53eb bug 1136840 Fix error handling for better debugging
  • 92ce3b6 bug 1139545 Handle all exceptions
  • e33cf9f bug 1048462 Upgrade gengo-python from 0.1.14 to 0.1.19
  • 4a8de81 bug 1048462 Remove nuggets
  • ff9f01c bug 1139713 Add received_ts field to hb Answer model
  • d853fa9 bug 1139713 Fix received_ts migration
  • ae5cb13 bug 1048462 Upgrade django-multidb-router to 0.6
  • 649b136 bug 1048462 Nix django-compressor
  • 1547073 Cosmetic: alphabetize requirements
  • e165f49 Add note to compiled reqs about py-bcrypt
  • ecdd00f bug 1136840 Back out new WSGIHandler
  • cc75bef bug 1141153 Upgrade Django to 1.7.6
  • d518731 bug 1136840 Back out rest of WSGIHandler mixin
  • 12940b0 bug 1139545 Wrap hb integrity error with logging
  • 8b61f14 bug 1139545 Fix "get or create" section of HB post view
  • d44faf3 bug 1129102 ditch ditchchart flag (L. Guruprasad)
  • 7fa256a bug 1141410 Fix unicode exception when feedback has invalid unicode URL (L. Guruprasad)
  • c1fe25a bug 1134475 Cleanup all references to input-dev environment (L. Guruprasad)

Landed, but not deployed:

  • 1cac166 bug 1081177 Rename feedback api and update docs
  • 026d9ae bug 1144476 stop logging update_ts errors (L. Guruprasad)

Current head: 9b3e263

Rough plan for the next two weeks
  1. removing settings we don't need and implementing environment-based configuration for instance settings
  2. prepare for 2015q2
End of OPW and thank you to Adam!

March 9th was the last day of OPW. Adam did some really great work on Input which is greatly appreciated. We hope he sticks around with us. Thank you, Adam!


Gregory Szorc: Network Events

Mozilla planet - Wed, 18/03/2015 - 15:25

The Firefox source repositories and automation have been closed the past few days due to a couple of outages.

Yesterday, aggregate CPU usage on many of the machines in the hg.mozilla.org cluster hit 100%. Previously, whenever hg.mozilla.org was under high load, we'd run out of network bandwidth before we ran out of CPU on the machines. In other words, Mercurial was generating data faster than the network could accept it.

When this happened, the service started issuing HTTP 503 Service Not Available responses. This is the universal server signal for I'm down, go away. Unfortunately, not all clients did this.

Parts of Firefox's release automation retried failing requests immediately, or with insufficient jitter in their backoff interval. Actively retrying requests against a server that's experiencing load issues only makes the problem worse. This effectively prolonged the outage.
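The standard client-side remedy is exponential backoff with random jitter, so retries thin out over time rather than arriving in synchronized waves. A minimal sketch, illustrative only and not Mozilla's actual automation code:

  // full-jitter backoff: the delay cap grows exponentially, randomized per client
  function backoffDelay(attempt, baseMs, maxMs) {
    var cap = Math.min(maxMs, baseMs * Math.pow(2, attempt));
    return Math.random() * cap;  // uniform in [0, cap) spreads clients out
  }
  // e.g. baseMs=1000, maxMs=60000: attempts 0,1,2,3 cap at 1s, 2s, 4s, 8s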

Today, we had a similar but different network issue. The load balancer fronting hg.mozilla.org can only handle so much bandwidth. Today, we hit that limit. The load balancer started throttling connections. Load on hg.mozilla.org skyrocketed and request latency increased. From the perspective of clients, the service ground to a halt.

hg.mozilla.org was partially sharing a load balancer with ftp.mozilla.org. That meant if one of the services experienced very high load, the other service could effectively be locked out of bandwidth. We saw this happening this morning. ftp.mozilla.org load was high (it looks like downloads of Firefox Developer Edition are a major contributor - these don't go through the CDN for reasons unknown to me) and there wasn't enough bandwidth to go around.

Separately today, hg.mozilla.org again hit 100% CPU. At that time, it also set a new record for network throughput: ~3 Gbps. It normally consumes between 200 and 500 Mbps, with periodic spikes to 750 Mbps. (Yesterday's event saw a spike to around ~2 Gbps.)

Going back through the hg.mozilla.org server logs, an offender is quite obvious. Before March 9, total outbound transfer for the build/tools repo was around 1 tebibyte per day. Starting on March 9, it increased to 3 tebibytes per day! This is quite remarkable, as a clone of this repo is only about 20 MiB. This means the repo was getting cloned about 150,000 times per day! (Note: I think all these numbers may be low by ~20% - stay tuned for the final analysis.)

2 TiB/day is statistically significant because we transfer less than 10 TiB/day across all of hg.mozilla.org. And, 1 TiB/day is close to 100 Mbps, assuming requests are evenly spread out (which of course they aren't).
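That back-of-the-envelope conversion checks out:

  1 TiB/day = 2^40 bytes / 86,400 s ≈ 12.7 MB/s ≈ 102 Mbps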

Multiple things went wrong. If only one or two happened, we'd likely be fine. Maybe there would have been a short blip. But not the major event we've been firefighting the last ~24 hours.

This post is only a summary of what went wrong. I'm sure there will be a post-mortem and that it will contain lots of details for those who want to know more.

