Mozilla Nederland

The Dutch Mozilla community

Firefox 35 for download: Mozilla browser can chat - T-Online

News collected via Google - Thu, 19/03/2015 - 14:36

T-Online

Firefox 35 for download: Mozilla browser can chat
T-Online
Firefox 35 has hardly changed on the surface; instead, it has mainly been improved technically: the new Mozilla browser has been further sped up and hardened. A total of nine security holes are closed. For the ...

Google News

Andy McKay: Cutting the cord

Mozilla planet - Thu, 19/03/2015 - 08:00

The CRTC goes on and on about plans for TV, cable and unbundling and all that. Articles are written about how to watch all the things without paying for cable. Few discuss the main point: watching fewer movies and less television.

Television as shown on cable is probably the worst thing in the world. You spend a large amount of money to receive the channels. Then you spend a large portion of your viewing time watching adverts. The limit on adverts in Canada is 12% of airtime on specialty channels, not including channels' promotion of their own shows.

Advertising is aspirational and, as such, depressing. It spends all its time telling you things you should buy, things you should be doing, things you should be spending your money on, and if you do all that... there's just more advertising to get you wanting different things.

It's even worse in the US, where Americans spend on average over four and a half hours a day watching television. How that is even possible, I don't know. Of that, 17 to 18 percent is adverts. That means they watch somewhere around 46 minutes of adverts a day.

So you pay for advertising. Why would you do that? That is terrible.

Netflix does not have any adverts. If you need to watch more than Netflix, multiple other sources exist, e.g. ctv.ca. And if you need to watch more than that?

Just go do something else. Please go do something else. Read a book, meet your neighbours, play a game with friends, take up a sport... anything but watching that much television and adverts.

I can only hope that cable television dies off, because it's the worst thing ever.


Frederic Wenzel: Updating Adobe Flash Without Restarting Firefox

Mozilla planet - Thu, 19/03/2015 - 08:00

There's no reason for a Flash upgrade to shut down your entire browser, even if the updater claims otherwise.

It's 2015, and the love-hate relationship of the Web with Flash has not quite ended yet, though we're getting there. Click-to-play in Firefox makes sure most websites can't run Flash willy-nilly anymore, but most people, myself included, still have it installed, so keeping Flash up-to-date with its frequently necessary security updates is a process well-known to users.

Sadly, the Adobe Flash updater has the nasty habit of asking you to shut down Firefox entirely, or it won't install the update:

(Screenshot: the updater's "Close Firefox" prompt.)

If you're anything like me, you have dozens of tabs open, half-read articles and a few draft emails open for good measure, and the one thing you don't want to do right now is restart your browser.

Fret not, the updater is lying.

Firefox runs the Flash plugin in an out-of-process plugin container, which is tech talk for: separately from your main Firefox browser.

Sure enough, in a command line window, I can search for a running instance of an application called plugin-container:

(Screenshot: terminal output listing the plugin-container process.)
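If you can't see the screenshot, a command along these lines digs up the same information (a sketch assuming a Unix-like shell; the exact ps flags and output vary by platform):

ps aux | grep plugin-container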

Looks complicated, but tells me that Firefox Nightly is running a plugin container with process ID 7602.

Ka-boom

The neat thing is that we can kill that process without taking down the whole browser:

killall plugin-container

Note: killall is the sawed-off shotgun of process management. It'll close any process by the name you hand to it, so use with caution. For a lot more fine-grained control, find the process ID (in the picture above: 7602, but it'll be different for your computer) and then use the kill command on only that process ID (e.g., kill 7602).

This will, of course, stop all the Flash instances you might have running in your browser right now, so don't do it right in the middle of watching a movie on a Flash video site (note: YouTube does not use Flash by default anymore).

Now hit Retry in the Adobe Updater window and sure enough, it'll install the update without requiring you to close your entire browser.

Aaand we're done.

If you were in fact using Flash at the time of the update, you might see this in the browser when you're done:

(Screenshot: the "Flash plugin crashed" notification.)

You can just reload the page to restart Flash.

Why won't Adobe do that for you, instead of asking you to close your browser entirely? I don't know. But if the agony of closing your entire browser outweighs the effort of a little command-line magic, now you know how to help yourself.

Hack on, friends.


Cameron Kaiser: IonPower now beyond "doesn't suck" stage

Mozilla planet - Thu, 19/03/2015 - 04:45
Very pleased! IonPower (in Baseline mode) made it through the entire JavaScript JIT test suite tonight (which includes SunSpider and V8 but a lot else besides) without crashing or asserting. It doesn't pass everything yet, so we're not quite to phase 4, but the failures appear to group around some similar areas of code which suggest a common bug, and one of the failures is actually due to that particular test monkeying with JIT options we don't yet support (but will). Getting closer!

TenFourFox 31.6 is on schedule for March 31.


Monica Chew: Tracking Protection talk on Air Mozilla

Mozilla planet - Thu, 19/03/2015 - 01:46
In August 2014, Georgios Kontaxis and I gave a talk on the implementation status of tracking protection in Firefox. At the time the talk was Mozillians only, but now it is public! Please visit Air Mozilla to view the talk, or see the slides below. The implementation status has not changed very much since last August, so most of the information is still pretty accurate.

James Long: Backend Apps with Webpack, Part II: Driving with Gulp

Mozilla planet - Thu, 19/03/2015 - 01:00

In Part I of this series, we configured webpack for building backend apps. With a few simple tweaks, like leaving all dependencies from node_modules alone, we can leverage webpack's powerful infrastructure for backend modules and reuse the same system for the frontend. It's a relief to not maintain two separate build systems.

This series is targeted at people already using webpack for the frontend. You may find Babel's require hook fine for the backend, which is great. But you might want to run files through multiple loaders, or share code between frontend and backend. Most importantly, you might want to use hot module replacement. This is an experiment to reuse webpack for all of that.

In this post we are going to look at more fine-grained control over webpack, and how to manage both frontend and backend code at the same time. We are going to use gulp to drive webpack. This should be a usable setup for a real app.

Some of the responses to Part I criticized webpack as too complicated and not standards-compliant, saying we should be moving to jspm and SystemJS. SystemJS is a runtime module loader based on the ES6 specification. The people behind jspm are doing fantastic work, but they don't yet have many of the features that webpack users love. A simple example is hot module replacement. I'm sure in the years to come something like webpack will emerge based on the loader specification, and I'll gladly switch to it.

The most important thing is that we start writing ES6 modules. This affects the community a whole lot more than loaders, and luckily it's very simple to do with webpack. You need to use a compiler like Babel that supports modules, which you really want to do anyway to get all the good ES6 features. These compilers will turn ES6 modules into require statements, which can be processed with webpack.
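To make that concrete, here is a rough sketch of the transformation (illustrative only; the hello function is a made-up example, and Babel's real output adds interop helpers):

// ES6 source:
import path from 'path';
export function hello(name) {
  return 'hello ' + name;
}

// roughly what a compiler like Babel emits (CommonJS), which webpack can process:
var path = require('path');
function hello(name) {
  return 'hello ' + name;
}
exports.hello = hello;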

I converted the backend-with-webpack repo to use the Babel loader and ES6 modules in the part1-es6 branch, and I will continue to use ES6 modules from here on.

Gulp

Gulp is a nice task runner that makes it simple to automate anything. Even though we aren't using it to transform or bundle modules, it's still useful as a "master controller" to drive webpack, test runners, and anything else you might need to do.

If you are going to use webpack for both frontend and backend code, you will need two separate configuration files. You could manually specify the desired config with --config and run two separate watchers, but that gets tedious quickly. It's annoying to have two separate processes in two different terminals.

Webpack actually supports multiple configurations. Instead of exporting a single one, you export an array of them and it will run all of the builds for you. I still prefer using gulp instead, because you might not want to always run both at the same time.

We need to convert our webpack usage to use the API instead of the CLI, and make a gulp task for it. Let's start by converting our existing config file into a gulp task.

The only difference is instead of exporting the config, you pass it to the webpack API. The gulpfile will look like this:

var gulp = require('gulp');
var webpack = require('webpack');

var config = { ... };

gulp.task('build-backend', function(done) {
  webpack(config).run(function(err, stats) {
    if(err) {
      console.log('Error', err);
    }
    else {
      console.log(stats.toString());
    }
    done();
  });
});

You can pass a config to the webpack function and you get back a compiler. You can call run or watch on the compiler, so if you wanted to make a build-watch task which automatically recompiles modules on change, you would call watch instead of run.

Our gulpfile is getting too big to show all of it here, but you can check out the new gulpfile.js which is a straight conversion of our old webpack.config.js. Note that we added a babel loader so we can write ES6 module syntax.

Multiple Webpack Configs

Now we're ready to roll! We can create another task for building frontend code, and simply provide a different webpack configuration. But we don't want to manage two completely separate configurations, since there are common properties between them.

What I like to do is create a base config and have others extend from it. Let's start with this:

var DeepMerge = require('deep-merge');

var deepmerge = DeepMerge(function(target, source, key) {
  if(target instanceof Array) {
    return [].concat(target, source);
  }
  return source;
});

// generic
var defaultConfig = {
  module: {
    loaders: [
      {test: /\.js$/, exclude: /node_modules/, loaders: ['babel']},
    ]
  }
};

if(process.env.NODE_ENV !== 'production') {
  defaultConfig.devtool = 'source-map';
  defaultConfig.debug = true;
}

function config(overrides) {
  return deepmerge(defaultConfig, overrides || {});
}

We create a deep merging function for recursively merging objects, which allows us to override the default config, and we provide a function config for generating configs based off of it.

Note that you can turn on production mode by running the gulp task with NODE_ENV=production prefixed to it. When it is set, sourcemaps are not generated, and you could add plugins for minifying code.
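For example, minification could be wired in along these lines (a sketch, not from the original post; it assumes webpack is required at the top of the gulpfile as in the earlier snippet, and uses webpack 1's bundled UglifyJsPlugin with default options):

if(process.env.NODE_ENV === 'production') {
  // minify bundles in production builds
  defaultConfig.plugins = (defaultConfig.plugins || []).concat(
    new webpack.optimize.UglifyJsPlugin()
  );
}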

Now we can create a frontend config:

var frontendConfig = config({
  entry: './static/js/main.js',
  output: {
    path: path.join(__dirname, 'static/build'),
    filename: 'frontend.js'
  }
});

This makes static/js/main.js the entry point and bundles everything together at static/build/frontend.js.

Our backend config uses the same technique: customizing the config to be backend-specific. I don't think it's worth pasting here, but you can look at it on github. Now we have two tasks:

function onBuild(done) {
  return function(err, stats) {
    if(err) {
      console.log('Error', err);
    }
    else {
      console.log(stats.toString());
    }

    if(done) {
      done();
    }
  }
}

gulp.task('frontend-build', function(done) {
  webpack(frontendConfig).run(onBuild(done));
});

gulp.task('backend-build', function(done) {
  webpack(backendConfig).run(onBuild(done));
});

In fact, you could go crazy and provide several different interactions:

gulp.task('frontend-build', function(done) {
  webpack(frontendConfig).run(onBuild(done));
});

gulp.task('frontend-watch', function() {
  webpack(frontendConfig).watch(100, onBuild());
});

gulp.task('backend-build', function(done) {
  webpack(backendConfig).run(onBuild(done));
});

gulp.task('backend-watch', function() {
  webpack(backendConfig).watch(100, onBuild());
});

gulp.task('build', ['frontend-build', 'backend-build']);
gulp.task('watch', ['frontend-watch', 'backend-watch']);

watch takes a delay as the first argument, so any changes within 100ms will only fire one rebuild.

You would typically run gulp watch to watch the entire codebase for changes, but you could just build or watch a specific piece if you wanted.

Nodemon

Nodemon is a nice process management tool for development. It starts a process for you and provides APIs to restart it. The goal of nodemon is to watch file changes and restart automatically, but we are only interested in manual restarts.

After installing with npm install nodemon and adding var nodemon = require('nodemon') to the top of the gulpfile, we can create a run task which executes the compiled backend file:

gulp.task('run', ['backend-watch', 'frontend-watch'], function() {
  nodemon({
    execMap: {
      js: 'node'
    },
    script: path.join(__dirname, 'build/backend'),
    ignore: ['*'],
    watch: ['foo/'],
    ext: 'noop'
  }).on('restart', function() {
    console.log('Restarted!');
  });
});

This task also specifies dependencies on the backend-watch and frontend-watch tasks, so the watchers are automatically fired up and code will recompile on change.

The execMap and script options specify how to actually run the program. The rest of the options are for nodemon's watcher, and we actually don't want it to watch anything. That's why ignore is *, watch is a nonexistent directory, and ext is a nonexistent file extension. Initially I only used the ext option, but I ran into performance problems because nodemon was still watching everything in my project.

So how does our program actually restart on change? Calling nodemon.restart() does the trick, and we can do this within the backend-watch task:

gulp.task('backend-watch', function() {
  webpack(backendConfig).watch(100, function(err, stats) {
    onBuild()(err, stats);
    nodemon.restart();
  });
});

Now, when running backend-watch, if you change a file it will be rebuilt and the process will automatically restart.

Our gulpfile is complete. After all this work, you just need to run this to start everything:

gulp run

As you code, everything will automatically be rebuilt and the server will restart. Hooray!

A Few Tips

Better Performance

If you are using sourcemaps, you will notice compilation performance degrades the more files you have, even with incremental compilation (using watchers). This happens because webpack has to regenerate the entire sourcemap of the generated file even if a single module changes. This can be fixed by changing the devtool from source-map to #eval-source-map:

config.devtool = '#eval-source-map';

This tells webpack to process sourcemaps individually for each module, which it achieves by eval-ing each module at runtime with its own sourcemap. Prefixing the value with # tells it to use the //# comment style instead of the older //@ style.

Node Variables

I mentioned this in Part I, but some people missed it. Node defines variables like __dirname which are different for each module. This is a downside to using webpack, because we no longer have the node context for these variables, and webpack needs to fill them in.

Webpack has a workable solution, though. You can tell it how to treat these variables with the node configuration entry. You most likely want to set __dirname and __filename to true, which will keep their real values. They default to "mock", which gives them dummy values (meant for browser environments).
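In config terms, that might look like the following sketch, layered onto the backend config from earlier (other fields elided):

var backendConfig = config({
  // ... entry, output, etc. as before ...
  node: {
    // keep the real __dirname/__filename instead of browser-oriented mocks
    __dirname: true,
    __filename: true
  }
});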

Until Next Time

Our setup is now capable of building a large, complex app. If you want to share code between the frontend and backend, it's easy to do since both sides use the same infrastructure. We get the same incremental compilation on both sides, and with the #eval-source-map setting, even with a large number of files, modules are rebuilt in under 200ms.

I encourage you to modify this gulpfile to your heart's content. The great thing about webpack and gulp is that it's easy to customize them to your needs, so go wild.

These posts have been building towards the final act. We are now ready to take advantage of the most significant gain of this infrastructure: hot module replacement. React users have enjoyed this via react-hot-loader, and now that we have access to it on the backend, we can live edit backend apps. Part III will show you how to do this.

Thanks to Dan Abramov for reviewing this post.


Gen Kanai: Analyse Asia – The Firefox Browser & Mobile OS with Gen Kanai

Mozilla planet - Thu, 19/03/2015 - 00:53

I had the pleasure of sitting down with Bernard Leong, host of the Analyse Asia podcast, after my keynote presentation at FOSSASIA 2015. Please enjoy our discussion on Firefox, Firefox OS in Asia, and other related topics.

Analyse Asia with Bernard Leong, Episode 22: The Firefox Browser & Mobile OS with Gen Kanai

 


Air Mozilla: Kid's Vision - Mentorship Series

Mozilla planet - Wed, 18/03/2015 - 23:00

Kid's Vision - Mentorship Series: Mozilla hosts the Kids Vision Bay Area Mentor Series.


Air Mozilla: Quality Team (QA) Public Meeting

Mozilla planet - Wed, 18/03/2015 - 21:30

Quality Team (QA) Public Meeting: This is the meeting where all the Mozilla quality teams meet, swap ideas, exchange notes on what is upcoming, and strategize around community building and...


Find out more about crashes in Mozilla Firefox - Ghacks Technology News

News collected via Google - Wed, 18/03/2015 - 21:02

Ghacks Technology News

Find out more about crashes in Mozilla Firefox
Ghacks Technology News
It is often not clear what caused a web browser to crash. While you sometimes get a good clue by analyzing what you did last (say, you opened a link that supposedly crashes Firefox), it is often not clear why Firefox crashed at that point in time ...


QMO: Design a T-shirt for Mozilla QA

Mozilla planet - Wed, 18/03/2015 - 20:57

Attention visual designers! Are you interested in contributing to Mozilla QA? Design a T-shirt for Mozilla’s Quality Assurance (QA) team!

At Mozilla QA, we work with our community members to drive software quality assurance activities across Mozilla. We play a key role in releasing a diverse range of software products on schedule.


The Mozilla QA team at a Work Week in July 2014. Photo credit: Mozilla.

Along with jeans and sneakers, T-shirts are very much a part of a Mozillian’s wardrobe. Not only are they light and comfortable to wear, T-shirts also serve as a medium of personal expression.

At Mozilla, we have a special love affair with T-shirts. The walls of Mozilla’s headquarters in Mountain View are framed with T-shirts from the early Netscape days to recent times. T-shirts mark important milestones in the organization’s history, advertise its mission statement, and celebrate important anniversaries. They also announce new product launches, express team solidarity, and help Mozillians recognize each other in public places.

Are you interested in creating a T-shirt for Mozilla QA? You need not be a professional designer – we’d love to have you try your hand at creativity and submit your design. You would be part of an awesome worldwide community and your work would be seen by a lot of people.

What we’re looking for
  • We want a T-shirt design that would convey the QA team’s unique identity and mission within Mozilla.
  • To submit designs, please upload your images to Flickr and tag them with mozqatshirt. (This part is very important – we won’t see your submission if it’s not tagged properly.)
  • Your images should be no larger than 20cm by 20cm (about 8in by 8in).
  • If your design is selected, you will be asked to submit vector files or high-quality raster images of at least 300dpi.
  • Designs are due on Thursday, April 30, 2015, at 8:00 PM PST.
Design Guidelines

If you have questions, please ask in the comments below and we’ll respond as quickly as possible.

Existing T-shirts at Mozilla

Jamie Zawinski frees the lizard on March 31, 1998. Standing behind him are Bob Lisbonne, Engineering Manager, and Tara Hernandez, Release Engineer. On this day, Netscape released the Netscape Communicator 5.0 source code through mozilla.org. This iconic t-shirt was designed by Shepard Fairey. Photo credit: Code Rush.


The Argentine community at the Firefox 4 launch party in 2011. Photo credit: musingt.


The Mozilla Arabic community with Chairperson Mitchell Baker in Jordan in June 2011. Photo credit: nozom.


The Mozilla Philippines community with Chairperson Mitchell Baker at MozCamp Asia, Kuala Lumpur, in November 2011. Photo credit: Yofie Setiawan.


Firefox is “The browser with a mission” at the Wikimania 2012 conference in Washington DC. Photo credit: Benoit Rochon via Wikipedia.


At the Firefox OS launch in Poland in July 2013. Photo credit: Mozilla EU.


Carrying Mozilla’s mission statement “The Internet for everyone by everyone” to Campus Party Brazil 2014. Photo credit: Andre Garzia.


Mozillians cycle together on Bike To Work day in May 2011. Photo credit: Lukas Blakk.


The Thunderbird team gets together at the Toronto Summit in 2014. Photo credit: Kent James.


Celebrating the 10th anniversary of the Firefox browser in November 2014. Photo credit: Mozilla US.


Mozilla Reps at MozCamp India Beta in June 2014. Photo credit: Abhishek Potnis.


Spreading digital literacy in schools through the Firefox Student Ambassador program. Photo credit: Sruthi.


Having fun is always part of the Maker Party! Photo credit: Michelle Thorne.


Mozilla community members wearing different t-shirts that mesh well together at the Campus Party Brazil 2014. Photo credit: Andre Garzia.


Air Mozilla: Product Coordination Meeting

Mozilla planet - Wed, 18/03/2015 - 19:00

Product Coordination Meeting: Weekly coordination meeting for Firefox Desktop & Android product planning between Marketing/PR, Engineering, Release Scheduling, and Support.


Air Mozilla: The Joy of Coding (mconley livehacks on Firefox) - Episode 6

Mozilla planet - Wed, 18/03/2015 - 18:00

The Joy of Coding (mconley livehacks on Firefox) - Episode 6: Watch mconley livehack on Firefox Desktop bugs!


Air Mozilla: MWoS 2014: OpenVPN MFA

Mozilla planet - Wed, 18/03/2015 - 17:43

OpenVPN MFA: The OpenVPN MFA MWoS team presents their work to add native multi-factor authentication to OpenVPN. The goal of this project is to improve the ease...


Justin Crawford: MDN Product Talk: The Case for Experiments

Mozilla planet - Wed, 18/03/2015 - 17:15

This is the third post in a series that is leading up to a roadmap of sorts — a set of experiments I will recommend to help MDN both deepen and widen its service to web developers.

The first post in the series introduced MDN as a product ecosystem. The second post in the series explained the strategic context for MDN’s product work. In this post I will explain how an experimental approach can help us going forward, and I’ll talk about the two very different kinds of experiments we’re undertaking in 2015.

In the context of product development, “experiments” is shorthand for market-validated learning — a product discovery and optimization activity advocated by practitioners of lean product development and many others. Product development teams run experiments by testing hypotheses against reality. The primary benefit of this methodology is eliminating wasted effort by building things that people demonstrably want.

Now my method, though hard to practise, is easy to explain; and it is this. I propose to establish progressive stages of certainty.

-Francis Bacon (introducing the scientific method in Novum Organum)

Product experiments all…

  • have a hypothesis
  • test the hypothesis by exposing it to reality (in other words, introducing it to the market)
  • deliver some insight to drive further product development

Here is a concrete example of how MDN will benefit from an experimental approach. We have an outstanding bug, “Kuma: Optional IRC Widget“, opened 3 years ago and discussed at great length on numerous occasions. This bug, like so many enhancement requests, is really a hypothesis in disguise: It asserts that MDN would attract and retain more contributors if they had an easier way to discuss contribution in realtime with other contributors.

That hypothesis is untested. We don’t have an easy way to discuss contribution in realtime now. In order to test the bug’s hypothesis we propose to integrate a 3rd-party IRC widget into a specific subset of MDN pages and measure the result. We will undoubtedly learn something from this experiment: We will learn something about the specific solution or something about the problem itself, or both.

Understanding the actual problem to be solved (and for whom) is a critical element of product experimentation. In this case, we do not assert that anonymous MDN visitors need a realtime chat feature, and we do not assert that MDN contributors specifically want to use IRC. We assert that contributors need a way to discuss and ask questions about contribution, and that by giving them such a facility we'll increase the quality of contribution. Providing this facility via an IRC widget is an implementation detail.

This experiment is an example of optimization. We already know that contribution is a critical factor in the quality of documentation in the documentation wiki. This is because we already understand the business model and key metrics of the documentation wiki. The MDN documentation wiki is a very successful product, and our focus going forward should be on improving and optimizing it. We can do that with experiments like the one above.

In order to optimize anything, though, we need better measurements than we have now. Here’s an illustration of the key components of MDN’s documentation wiki:

(Diagram: measurement status for the documentation wiki.) Visitors come to the wiki from search, by way of events, or through links in online social activity. If they create an account they become users and we notify them that they can contribute. If they follow up on that notification they become returners. If they contribute they become contributors. If they stop visiting they become disengaged users. Users can request content (in Bugzilla and elsewhere). Users can share content (manually).

All the red and orange shapes in the picture above represent things we’re measuring imperfectly or not at all. So we track the number of visitors and the number of users, but we don’t measure the rate by which visitors become users (or any other conversion rate). We measure the rates of content production and content consumption, but we don’t measure the helpfulness of content. And so forth.

If we wanted to add a feature to the wiki that might impact one of these numbers, how would we measure the before and after states? We couldn’t. If we wanted to choose between features that might affect these numbers, how would we decide which metric needed the most attention? We couldn’t. So in 2015 we must prioritize building enough measurements into the MDN platform that we can see what needs optimization and which optimizations make a difference. In particular, considering the size of content’s role in our ecosystem, we must prioritize features that help us better understand the impact of our content.

Once we have proper measurements, we have a huge backlog of optimization opportunities to consider for the documentation wiki. Experiments will help us prioritize them and implement them.

As we do so, we are also simultaneously engaged in a completely different kind of experimentation. Steve Blank describes the difference in his recent post, “Fear of Failure and Lack of Speed In a Large Corporation”. To paraphrase him: A successful product organization that has already found market fit (i.e. MDN’s documentation wiki) properly seeks to maximize the value of its existing fit — it optimizes. But a fledgling product organization has no idea what the fit is, and so properly seeks to discover it.

This second kind of experimentation is not for optimization, it is for discovery. MDN’s documentation wiki clearly solves a problem and there is clearly value in solving that problem, but MDN’s audience has plenty of problems to solve (and more on the way), and new audiences similar to MDN’s current audience have similar problems to solve, too. We can see far enough ahead to advance some broad hypotheses about solving these problems, and we now need to learn how accurate those hypotheses are.

Here is an illustration of the different kinds of product experiments we’re running in the context of the overall MDN ecosystem:

(Diagram: optimization vs. learning.) The left bubble represents the existing documentation wiki: It's about content and contribution; we have a lot of great optimizations to build there. The right bubble represents our new product/market areas: We're exploring new products for web developers (so far, in services) and we're serving a new audience of web learners (so far, by adding some new areas to our existing product).

The right bubble is far less knowable than the left. We need to conduct experiments as quickly as possible to learn whether any particular service or teaching material resonates with its audience. Our experiments with new products and audiences will be more wide-ranging than our experiments to improve the wiki; they will also be measured in smaller numbers. These new initiatives have the possibility to grow into products as successful as the documentation wiki, but our focus in 2015 is to validate that these experiments can solve real problems for any part of MDN’s audience.

As Marty Cagan says, “Good [product] teams are skilled in the many techniques to rapidly try out product ideas to determine which ones are truly worth building.  Bad teams hold meetings to generate prioritized roadmaps.” On MDN we have an incredible opportunity to develop our product team by taking a more experimental approach to our work. Developing our product team will improve the quality of our products and help us serve more web developers better.

In an upcoming post I will talk about how our 2015 focus areas will help us meet the future. And of course I will talk about specific experiments soon, too.


Will Kahn-Greene: Input status: March 18th, 2015

Mozilla planet - Wed, 18/03/2015 - 17:00
Development

High-level summary:

  • new Alerts API
  • Heartbeat fixes
  • bunch of other minor fixes and updates

Thank you to contributors:

  • L. Guruprasad: 6
  • Ricky Rosario: 2

Landed and deployed:

  • 73eaaf2 bug 1103045 Add create survey form (L. Guruprasad)
  • e712384 bug 1130765 Implement Alerts API
  • 6bc619e bug 1130765 Docs fixes for the alerts api
  • 1e1ca9a bug 1130765 Tweak error msg in case where auth header is missing
  • 067d6e8 bug 1130765 Add support for Fjord-Authorization header
  • 1f3bde0 bug 909011 Handle amqp-specific indexing errors
  • 3da2b2d Fix alerts_api requests examples
  • 601551d Cosmetic: Rename heartbeat/views.py to heartbeat/api_views.py
  • 8f3b8e8 bug 1136810 Fix UnboundLocalError for "showdata"
  • 1721758 Update help_text in api_auth initial migration
  • 473e900 Fix migration for fixing AlertFlavor.allowed_tokens
  • 2d3d05a bug 1136809 Fix (person, survey, flow) uniqueness issues
  • 3ce45ec Update schema migration docs regarding module-level docstrings
  • 2a91627 bug 1137430 Validate sortby values
  • 6e3961c Update setup docs for django 1.7. (Ricky Rosario)
  • 6739af7 bug 1136814 Update to Django 1.7.5
  • 334eed7 Tweak commit msg linter
  • ac35deb bug 1048462 Update some requirements to pinned versions. (Ricky Rosario)
  • 8284cfa Clarify that one token can GET/POST to multiple alert flavors
  • 7a60497 bug 1137839 Add start_time/end_time to alerts api
  • 7a21735 Fix flavor.slug tests and eliminate needless comparisons
  • 89dbb49 bug 1048462 Switch some github url reqs to pypi
  • e1b62b5 bug 1137839 Add start_time/end_time to AlertAdmin
  • 3668585 bug 1103045 Add update survey form (L. Guruprasad)
  • ab706c6 bug 1139510 Update selenium to 2.45
  • 6df753d Cosmetic: Minor cleanup of server error testing
  • 1dcaf62 Make throw_error csrf exempt
  • ceb53eb bug 1136840 Fix error handling for better debugging
  • 92ce3b6 bug 1139545 Handle all exceptions
  • e33cf9f bug 1048462 Upgrade gengo-python from 0.1.14 to 0.1.19
  • 4a8de81 bug 1048462 Remove nuggets
  • ff9f01c bug 1139713 Add received_ts field to hb Answer model
  • d853fa9 bug 1139713 Fix received_ts migration
  • ae5cb13 bug 1048462 Upgrade django-multidb-router to 0.6
  • 649b136 bug 1048462 Nix django-compressor
  • 1547073 Cosmetic: alphabetize requirements
  • e165f49 Add note to compiled reqs about py-bcrypt
  • ecdd00f bug 1136840 Back out new WSGIHandler
  • cc75bef bug 1141153 Upgrade Django to 1.7.6
  • d518731 bug 1136840 Back out rest of WSGIHandler mixin
  • 12940b0 bug 1139545 Wrap hb integrity error with logging
  • 8b61f14 bug 1139545 Fix "get or create" section of HB post view
  • d44faf3 bug 1129102 ditch ditchchart flag (L. Guruprasad)
  • 7fa256a bug 1141410 Fix unicode exception when feedback has invalid unicode URL (L. Guruprasad)
  • c1fe25a bug 1134475 Cleanup all references to input-dev environment (L. Guruprasad)

Landed, but not deployed:

  • 1cac166 bug 1081177 Rename feedback api and update docs
  • 026d9ae bug 1144476 stop logging update_ts errors (L. Guruprasad)

Current head: 9b3e263

Rough plan for the next two weeks
  1. remove settings we don't need and implement environment-based configuration for instance settings
  2. prepare for 2015q2
End of OPW and thank you to Adam!

March 9th was the last day of OPW. Adam did some really great work on Input which is greatly appreciated. We hope he sticks around with us. Thank you, Adam!


Gregory Szorc: Network Events

Mozilla planet - Wed, 18/03/2015 - 15:25

The Firefox source repositories and automation have been closed the past few days due to a couple of outages.

Yesterday, aggregate CPU usage on many of the machines in the hg.mozilla.org cluster hit 100%. Previously, whenever hg.mozilla.org was under high load, we'd run out of network bandwidth before we ran out of CPU on the machines. In other words, Mercurial was generating data faster than the network could accept it.

When this happened, the service started issuing HTTP 503 Service Not Available responses. This is the universal server signal for I'm down, go away. Unfortunately, not all clients did this.

Parts of Firefox's release automation retried failing requests immediately, or with insufficient jitter in their backoff interval. Actively retrying requests against a server that's experiencing load issues only makes the problem worse. This effectively prolonged the outage.

Today, we had a similar but different network issue. The load balancer fronting hg.mozilla.org can only handle so much bandwidth. Today, we hit that limit. The load balancer started throttling connections. Load on hg.mozilla.org skyrocketed and request latency increased. From the perspective of clients, the service ground to a halt.

hg.mozilla.org was partially sharing a load balancer with ftp.mozilla.org. That meant if one of the services experienced very high load, the other service could effectively be locked out of bandwidth. We saw this happening this morning. ftp.mozilla.org load was high (it looks like downloads of Firefox Developer Edition are a major contributor - these don't go through the CDN for reasons unknown to me) and there wasn't enough bandwidth to go around.

Separately today, hg.mozilla.org again hit 100% CPU. At that time, it also set a new record for network throughput: ~3 Gbps. It normally consumes between 200 and 500 Mbps, with periodic spikes to 750 Mbps. (Yesterday's event saw a spike to around ~2 Gbps.)

Going back through the hg.mozilla.org server logs, an offender is quite obvious. Before March 9, total outbound transfer for the build/tools repo was around 1 tebibyte per day. Starting on March 9, it increased to 3 tebibytes per day! This is quite remarkable, as a clone of this repo is only about 20 MiB. This means the repo was getting cloned about 150,000 times per day! (Note: I think all these numbers may be low by ~20% - stay tuned for the final analysis.)

2 TiB/day is statistically significant because we transfer less than 10 TiB/day across all of hg.mozilla.org. And, 1 TiB/day is close to 100 Mbps, assuming requests are evenly spread out (which of course they aren't).
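A quick back-of-the-envelope check of those figures (my arithmetic, using the numbers above):

// bytes in a tebibyte and a mebibyte
var TiB = Math.pow(2, 40);
var MiB = Math.pow(2, 20);

// 3 TiB/day of outbound transfer for a ~20 MiB clone:
var clonesPerDay = (3 * TiB) / (20 * MiB);  // ~157,000 clones/day

// 1 TiB/day expressed as an average bit rate:
var mbps = (TiB * 8) / 86400 / 1e6;         // ~102 Mbps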

Multiple things went wrong. If only one or two happened, we'd likely be fine. Maybe there would have been a short blip. But not the major event we've been firefighting the last ~24 hours.

This post is only a summary of what went wrong. I'm sure there will be a post-mortem and that it will contain lots of details for those who want to know more.


L. David Baron: Priority of constituencies

Mozilla planet - Wed, 18/03/2015 - 15:15

Since the HTML design principles (which are effectively design principles for modern Web technology) were published, I've thought that the priority of constituencies was among the most important. It's certainly among the most frequently cited in debates over Web technology. But I've also thought that it was wrong in a subtle way.

I'd rather it had been phrased in terms of utility, so that instead of stating as a rule that value (benefit minus cost) to users is more important than value to authors, it recognized that there are generally more users than authors, which means that a smaller value per user, multiplied by the number of users, is generally more important than a somewhat larger value per author multiplied by the number of authors, because it provides more total value when the value is multiplied by the number of people it applies to. However, this doesn't hold for a very large difference in value, that is, one where multiplying the cost and benefit by the numbers of people they apply to yields results where the magnitude of the cost and benefit controls which side is larger, rather than the numbers of people. The same holds for implementors and specification authors; there are generally fewer in each group. Likewise, the principle should recognize that something that benefits a very small portion of users doesn't outweigh the interests of authors as much, because the number of users it benefits is no longer so much greater than the number of authors who have to work to make it happen.
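To make the comparison concrete (my notation, not the principle's): write $n_u, n_a$ for the numbers of users and authors affected by a decision, and $v_u, v_a$ for the per-person value to each group. The utility framing prefers the users' side when

$$n_u \, v_u > n_a \, v_a.$$

Since usually $n_u \gg n_a$, a small $v_u$ can outweigh a somewhat larger $v_a$; but when the per-person values differ enormously, their magnitudes decide the comparison rather than the population sizes, and a feature benefiting only a tiny fraction of users effectively shrinks $n_u$.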

Also, the current wording of the principle doesn't consider the scarcity of the smaller groups (particularly implementors and specification authors), and thus the opportunity costs of choosing one behavior over another. In other words, there might be a behavior that we could implement that would be slightly better for authors, but would take more time for implementors to implement. But there aren't all that many implementors, and they can only work on so many things. (Their number isn't completely fixed, but it can't be changed quickly.) So given the scarcity of implementors, we shouldn't consider only whether the net benefit to users is greater than the net cost to implementors; we should also consider whether there are other things those implementors could work on in that time that would provide greater net benefit to users. The same holds for scarcity of specification authors. A good description of the principle in terms of utility would also correct this problem.


Former Mozilla Leader Joins Groupon's Product Side - PYMNTS.com

News collected via Google - Wed, 18/03/2015 - 12:08

PYMNTS.com

Former Mozilla Leader Joins Groupon's Product Side
PYMNTS.com
Sullivan, in particular, is an interesting pick for the commerce and daily deals site company as he is a former leader of Firefox maker, Mozilla Corporation. While Campagnolo will work out of Groupon's Chicago base, Sullivan will oversee all consumer ...


QMO: Firefox E10S Add-on Compatibility Testday Results

Mozilla planet - Wed, 18/03/2015 - 11:26

Hello everyone!

Last Friday, March 13th, we held a testday where we looked at how different add-ons are working with our latest Firefox 39 Nightly in multi-process mode (aka Electrolysis, aka E10S).

We would like to thank everyone that showed up and got involved in testing this very important Firefox feature. We had quite a few add-ons covered, and also some issues reported. Thank you all for your hard work in helping us make Firefox even better!

Our biggest contributors on Friday were: Hossain Al Ikram, doublex, Aleksej, gaby2300 and kenkon. Thank you for all your efforts and contribution, your help is always greatly appreciated!

We look forward to seeing you at the next Testday, this Friday. Make sure to keep an eye on QMO for upcoming events and scheduled announcements!

