
Mozilla Nederland – The Dutch Mozilla community

Andreas Gal: HTML5 reaches the Recommendation stage

Mozilla planet - wo, 29/10/2014 - 05:29

Today HTML5 reached the Recommendation stage inside the W3C, the last stage of W3C standards. Mozilla was one of the first organizations to become deeply involved in the evolution and standardization of HTML5, so today’s announcement by the W3C has a special connection to Mozilla’s mission and our work over the last 10 years.

Mozilla has pioneered many widely adopted technologies such as WebGL which further enhance HTML5 and make it a competitive and compelling alternative to proprietary and native ecosystems. With the entrance of Firefox OS into the smartphone market we have also made great progress in advancing the state of the mobile Web. Many of the new APIs and capabilities we have proposed in the context of Firefox OS are currently going through the standards process, bringing capabilities to the Web that were previously only available to native applications.

W3C Standards go through a series of steps, ranging from proposals to Editors’ Drafts to Candidate Recommendations and ultimately Recommendations. While reaching the Recommendation stage is an important milestone, we encourage developers to engage with new Web standards long before they actually hit that point. To stay current, Web developers should keep an eye on new evolving standards and read Editors’ Drafts instead of Recommendations. Developer-targeted resources such as developer.mozilla.org and caniuse.com are also a great way to learn about upcoming standards.

A second important area of focus for Mozilla around HTML5 has been test suites. Test suites can be used by Web developers and Web engine developers alike to verify that Web browsers consistently implement the HTML5 specification. You can check out the latest results at:

http://w3c.github.io/test-results/dom/all.html
http://w3c.github.io/test-results/html/details.html

These automated testing suites for HTML5 play a critical role in ensuring a uniform and consistent Web experience for users.

At Mozilla, we envision a Web which can do anything you can do in a native application. The advancement of HTML5 marks an important step on the road to this vision. We have many exciting things planned for our upcoming 10th anniversary of Firefox (#Fx10), which will continue to move the Web forward as an open ecosystem and platform for innovation.

Stay tuned.


Filed under: Mozilla
Categorieën: Mozilla-nl planet

Kartikaya Gupta: Building a NAS

Mozilla planet - wo, 29/10/2014 - 03:24

I've been wanting to build a NAS (network-attached storage) box for a while now, and the ominous creaking noises from the laptop I was previously using as a file server prompted me to finally take action. I wanted to build rather than buy because (a) I wanted more control over the machine and OS, (b) I figured I'd learn something along the way and (c) I thought it might be cheaper. This blog post documents the decisions and mistakes I made and the problems I ran into.

First step was figuring out the level of data redundancy and storage space I wanted. After reading up on the different RAID levels I figured 4 drives with 3 TB each in a RAID5 configuration would suit my needs for the next few years. I don't have a huge amount of data so the ~9TB of usable space sounded fine, and being able to survive single-drive failures sounded sufficient to me. For all critical data I keep a copy on a separate machine as well.
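
As a quick sanity check on that number: usable RAID5 capacity is roughly (number of drives - 1) × drive size, since one drive's worth of space goes to parity, so 4 × 3 TB drives give (4 - 1) × 3 TB = 9 TB of usable space.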

I chose to go with software RAID rather than hardware because I've read horror stories of hardware RAID controllers going obsolete and being unable to find a replacement, rendering the data unreadable. That didn't sound good. With an open-source software RAID controller at least you can get the source code and have a shot at recovering your data if things go bad.

With this in mind I started looking at software options - a bit of searching took me to FreeNAS which sounded exactly like what I wanted. However after reading through random threads in the user forums it seemed like the FreeNAS people are very focused on using ZFS and hardware setups with ECC RAM. From what I gleaned, using ZFS without ECC RAM is a bad idea, because errors in the RAM can cause ZFS to corrupt your data silently and unrecoverably (and worse, it causes propagation of the corruption). A system that makes bad situations worse didn't sound so good to me.

I could have still gone with ZFS with ECC RAM but from some rudimentary searching it sounded like it would increase the cost significantly, and frankly I didn't see the point. So instead I decided to go with NAS4Free (which actually was the original FreeNAS before iXsystems bought the trademark and forked the code) which allows using a UFS file system in a software RAID5 configuration.

So with the software decisions made, it was time to pick hardware. I used this guide by Sam Kear as a starting point and modified a few things here and there. I ended up with this parts list that I mostly ordered from canadadirect.com. (Aside: I wish I had discovered pcpartpicker.com earlier in the process as it would have saved me a lot of time). They shipped things to me in 5 different packages which arrived on 4 different days using 3 different shipping services. Woo! The parts I didn't get from canadadirect.com I picked up at a local Canada Computers store. Then, last weekend, I put it all together.

It's been a while since I've built a box so I screwed up a few things and had to rewind (twice) to fix them. Took about 3 hours in total for assembly; somebody who knew what they were doing could have done it in less than one. I mostly blame lack of documentation with the chassis since there were a bunch of different screws and it wasn't obvious which ones I had to use for what. They all worked for mounting the motherboard but only one of them was actually correct and using the wrong one meant trouble later.

In terms of the hardware compatibility I think my choices were mostly sound, but there were a few hitches. The case and motherboard both support up to 6 SATA drives (I'm using 4, giving me some room to grow). However, the PSU only came with 4 SATA power connectors which means I'll need to get some adaptors or maybe a different PSU if I need to add drives. The other problem was that the chassis comes with three fans (two small ones at the front, one big one at the back) but there was only one chassis power connector on the motherboard. I plugged the big fan in and so far the machine seems to be staying pretty cool so I'm not too worried. Does seem like a waste to have those extra unused fans though.

Finally, I booted it up using a monitor/keyboard borrowed from another machine, and ran memtest86 to make sure the RAM was good. It was, so I flashed the NAS4Free LiveUSB onto a USB drive and booted it up. Unfortunately after booting into NAS4Free my keyboard stopped working. I had to disable the USB 3.0 stuff in the BIOS to get around that. I don't really care about having USB 3.0 support on this machine so not a big deal. It took me some time to figure out what installation mode I wanted to use NAS4Free in. I decided to do a full install onto a second USB drive and not have a swap partition (figured hosting swap over USB would be slow and probably unnecessary).

So installing that was easy enough, and I was able to boot into the full NAS4Free install and configure it to have a software RAID5 on the four disks. Things generally seemed OK and I started copying stuff over.. and then the box rebooted. It also managed to corrupt my installation somehow, so I had to start over from the LiveUSB stick and re-install. I had saved the config from the first time so it was easy to get it back up again, and once again I started putting data on there. Again it rebooted, although this time it didn't corrupt my installation. This was getting worrying, particularly since the system log files provided no indication as to what went wrong.

My first suspicion was that the RAID wasn't fully initialized and so copying data onto it resulted in badness. The array was "rebuilding" and is supposed to be usable while that happens, but I figured I might as well wait until it was done. Turns out it's going to be rebuilding for the next ~20 days because RAID5 has to read/write the entire disk to initialize fully, and in the days of multi-terabyte disks this takes forever. So in retrospect perhaps RAID5 was a poor choice for such large disks.

Anyway in order to debug the rebooting, I looked up the FreeBSD kernel debugging documentation, and that requires having a swap partition that the kernel can dump a crash report to. So I reinstalled and set up a swap partition this time. This seemed to magically fix the rebooting problem entirely, so I suspect the RAID drivers just don't deal well when there's no swap, or something. Not an easy situation to debug if it only happens with no swap partition but you need a swap partition to get a kernel dump.

So, things were good, and I started copying more data over and configuring more stuff and so on. The next problem I ran into was the USB drive to which I had installed NAS4Free started crapping out with read/write errors. This wasn't so great but by this point I'd already reinstalled it about 6 or 7 times, so I reinstalled again onto a different USB stick. The one that was crapping out seems to still work fine in other machines, so I'm not sure what the problem was there. The new one that I used, however, was extremely slow. Things that took seconds on the previous drive took minutes on this one. So I switched again to yet another drive, this time an old 2.5" internal drive that I have mounted in an enclosure through USB.

And finally, after installing the OS at least I've-lost-count-how-many times, I have a NAS that seems stable and appears to work well. To be fair, reinstalling the OS is a pretty painless process and by the end I could do it in less than 10 minutes from sticking in the LiveUSB to a fully-configured working system. Being able to download the config file (which includes not just the NAS config but also user accounts and so on) makes it pretty painless to restore your system to exactly the way it was. The only additional things I had to do were install a few FreeBSD packages and unpack a tarball into my home directory to get some stuff I wanted. At no point was any of the data on the RAID array itself lost or corrupted, so I'm pretty happy about that.

In conclusion, setup was a bit of a pain, mostly due to unclear documentation and flaky USB drives (or drivers) but now that I have it set up it seems to be working well. If I ever have to do it over I might go for something other than RAID5 just because of the long rebuild time but so far it hasn't been an actual problem.

Categorieën: Mozilla-nl planet

Asa Dotzler: MozFest Flame Phones

Mozilla planet - di, 28/10/2014 - 23:15
[Image: Dancing Flames - via Flickr user Capture Queen, used under a CC license]

Even though I wasn’t there, it sure was thrilling to see all the activity around the Flame phones at MozFest.

So, you’ve got a Flame and you’re wondering how you can use this new hardware to help Mozilla make Firefox OS awesome?! Well, here’s what we’d love from you.

First, check your Flame to see what build of Firefox OS it’s running. If you have not flashed it, it’s probably on Firefox OS 1.3 and you’ll need to upgrade it to something contemporary first. If you’re using anything older than the v188 base image, you definitely need to upgrade. To upgrade, visit the Flame page on MDN and follow the instructions to flash a new vendor-provided base image and then flash the latest nightly from Mozilla on top of that.
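
For the curious, the MDN instructions roughly boil down to the following (a loose sketch; the archive and script names vary between base image releases, so treat them as placeholders and follow the MDN page for the exact steps):

    adb devices          # confirm the Flame is connected and remote debugging is enabled
    unzip v188.zip && cd v188
    ./flash.sh           # vendor-provided script that flashes the base image via fastboot

After the base image is on the phone, follow the MDN steps to flash the latest nightly build from Mozilla on top of it.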

Once you’re on the latest nightly of Firefox OS, you’re ready to start using the Flame and filing bugs on things that don’t work. You’d think that with about five thousand Flames out there, we’d have reports on everything that’s not working but that’s not the case. Even if the bug seems highly visible, please report it. We’d rather have a couple of duplicate reports than no report at all. If you’re experienced with Bugzilla, please search first *and* help us triage incoming reports so the devs can focus on fixing rather than duping bugs.

In addition to this use-based ad hoc testing, you can participate in the One and Done program or work directly with the Firefox OS QA team on more structured testing.

But that’s not all! Because Firefox OS is built on Web technologies, you don’t have to be a hardcore programmer to fix many of the bugs in the OS or the default system apps like Dialer, Email, and Camera. If you’ve got Web dev skills, please help us squash bugs. A great place to start is the list of bugs with developers assigned to mentor you through the process.

It’s a non-trivial investment that the Mozilla Foundation has made in giving away these Flame reference phones and I’m here to work with you all to help make that effort pay off in terms of bugs reported and fixed. Please let me know if you run into problems or could use my help. Enjoy your Flames!

Categorieën: Mozilla-nl planet

J. Ryan Stinnett: Debugging Tabs with Firefox for Android

Mozilla planet - di, 28/10/2014 - 22:08

For quite a while, it has been possible to debug tabs on Firefox for Android devices, but there were many steps involved, including manual port forwarding from the terminal.

As I hinted a few weeks ago, WebIDE would soon support connecting to Firefox for Android via ADB Helper support, and that time is now!

How to Use

You'll need to assemble the following bits and bobs:

  • Firefox 36 (2014-10-25 or later)
  • ADB Helper 0.7.0 or later
  • Firefox for Android 35 or later

Opening WebIDE for the first time should install ADB Helper if you don't already have it, but double-check it is the right version in the add-on manager.

[Screenshot: Firefox for Android runtime appears]

Inside WebIDE, you'll see an entry for Firefox for Android in the Runtime menu.

[Screenshot: Firefox for Android tab list]

Once you select the runtime, tabs from Firefox for Android will be available in the (now poorly labelled) apps menu on the left.

[Screenshot: Inspecting a tab in WebIDE]

Choosing a tab will open up the DevTools toolbox for that tab. You can also toggle the toolbox via the "Pause" icon in the top toolbar.

If you would like to debug Firefox for Android's system-level / chrome code, instead of a specific tab, you can do that with the "Main Process" option.

What's Next

We have even more connection UX improvements on the way, so I hope to have more to share soon!

If there are features you'd like to see added, file bugs or contact the team via various channels.

Categorieën: Mozilla-nl planet

Christian Heilmann: Speaking at the Trondheim Developer Conference – good show!

Mozilla planet - di, 28/10/2014 - 21:56

TL;DR: The Trondheim Developer Conference 2014 was incredible. Well worth my time and a breath of fresh air in terms of organisation.

[Image: Trondheim Developer Conference]

I am right now on the plane back from Oslo to London – a good chance to put together a few thoughts on the conference I just spoke at. The Trondheim Developer Conference was – one might be amazed to learn – a conference for developers in Trondheim, Norway. All of the money that is left over after the organisers covered the cost goes to supporting other local events and developer programs. In stark contrast to other not-for-profit events this one shines with a classy veneer that is hard to find and would normally demand a mid-3 digit price for the tickets.

This is all the more surprising seeing that Norway is a ridiculously expensive place where I tend not to breathe in too much as I am not sure if they charge for air or not.

[Images: Clarion Hotel Trondheim - outside and inside]

The location of the one day conference was the Clarion Hotel & Congress Trondheim, a high-class location with great connectivity and excellent catering. Before I wax poetic about the event here, let’s just give you a quick list:

  • TDC treats their speakers really well. I had full travel and accommodation coverage with airport pick-ups and public transport bringing me to the venue. I got a very simple list with all the information I needed and there was no back and forth about what I wanted – anything I could think of had already been anticipated. The speaker lounge was functional and easily accessible. The pre-conference speaker dinner was lavish.
  • Everything about the event happened in the same building. This meant it was easy to go back to your room to get things or have undisturbed preparation or phone time. It also meant that attendees didn’t get lost on the way to other venues.
  • Superb catering. Coffee, cookies and fruit available throughout the day.
  • Great lunch organisation that should be copied by others. It wasn’t an affair where you had to queue up for ages trying to get the good bits of a buffet. Instead the food was already on the tables and all you had to do was pick a seat, start a chat and dig in. That way the one hour break was one hour of nourishment and conversation, not pushing and trying to find a spot to eat.
  • Wireless was strong and bountiful. I was able to upload my screencasts and cover the event on social media without a hitch. There was no need to sign up or get vouchers or whatever else is in between us and online bliss – simply a wireless name and a password.
  • Big rooms with great sound and AV setup. The organisers had a big box of cable connectors in case you brought exotic computers. We had enough microphones and the rooms had enough space.
  • Audience feedback was simple. When entering a session, attendees got a roulette chip and when leaving the session they dropped them in provided baskets stating “awesome” or “meh”. There was also an email directly after the event asking people to provide feedback.
  • Non-pushy exhibitors. There was a mix of commercial partners and supported not-for-profit organisations with booths and stands. Each of them had something nice to show (Oculus Rift probably was the overall winner) and none of them had booth babes or sales weasels. All the people I talked to had good info and were not pushy but helpful instead.
  • A clever time table. Whilst I am not a big fan of multi-track conferences, TDC had 5 tracks but limited the talks to 30 minutes. This meant there were 15 minute breaks in between tracks to have a coffee and go to the other room. I loved that. It meant speakers need to cut to the chase faster.
  • Multilingual presentations. Whilst my knowledge of Norwegian amounts to guessing at the German-sounding words and wondering why everything is written so differently from Swedish, I think sticking to their mother tongue gave a lot of local presenters a better chance to reach the local audience. The talks were spread evenly between languages, so I could go to the one or two English talks in each time slot. With the talks being short it was no biggie if one slot didn’t have something that excited you.
  • A nice after party with a band and just the right amount of drinks. Make no mistake – alcohol costs an arm and a leg in Norway (and I think the main organiser ended up with a peg leg) but the party was well-behaved with a nice band and lots of space to have chats without having to shout at one another.
  • Good diversity of speakers and audience. There was a healthy mix, and Scandinavian countries are known to be very much about equality.
  • It started and ended with science and blowing things up. I was mesmerised by Selda Ekiz who started and wrapped up the event by showing some physics experiments of the explosive kind. She is a local celebrity and TV presenter who runs a children’s show explaining physics. Think Mythbusters but with incredible charm and a lot less ad breaks. If you have an event, consider getting her – I loved every second.

[Image: Selda Ekiz on stage]

I was overwhelmed how much fun and how relaxing the whole event was. There was no rush, no queues, no confusion as to what goes where. If you want a conference to check out next October, TDC is a great choice.

My own contributions to the event were two sessions (as I filled in for one that didn’t work out). The first one was about allowing HTML5 to graduate, or – in other words – not being afraid of using it.

You can watch the screencast of me talking about how HTML5 missed its graduation on YouTube.

The HTML5 graduation slides are on Slideshare.

[Embedded slides: How HTML5 missed its graduation – #TrondheimDC, from Christian Heilmann]

The other session was about the need to create offline apps for the now and coming market. Marketing of products keeps telling us that we’re always connected but this couldn’t be further from the truth. It is up to us as developers to condition our users to trust the web to work even when the pesky wireless is acting up again.

You can watch the screencast of the offline talk on YouTube.

The Working connected to create offline slides are on Slideshare.

[Embedded slides: Working connected to create offline – Trondheim Developer Conference 2014, from Christian Heilmann]

I had a blast and I hope to meet many of the people I met at TDC again soon.

Categorieën: Mozilla-nl planet

Marco Zehe: Apps, the web, and productivity

Mozilla planet - di, 28/10/2014 - 19:28

Inspired by this public discussion on Asa Dotzler’s Facebook wall, I reflected on my own current use cases of web applications, native mobile apps, and desktop clients. I also thought about my post from 2012 where I asked the question whether web apps are accessible enough to replace desktop clients any time soon.

During my 30 days with Android experiment this summer, I also used Gmail on the web most of the time and hardly used my mail clients for desktop and mobile, except for the Gmail client on Android. The only exception was my Mozilla e-mail which I continued to do in Thunderbird on Windows.

After the experiment ended, I gradually migrated back to using clients of various shapes and sizes on the various platforms I use. And after a few days, I found that the Gmail web client was no longer one of them.

The problem is not one of accessibility in this case, because that has greatly improved over the last two years. So have web apps like Twitter and Facebook, for example. The reason I am still using dedicated clients for the most part are, first and foremost, these:

  1. Less clutter: All web apps I mentioned, and others, too, come with a huge overload of clutter that gets in the way of productivity. Granted, the Gmail keyboard shortcuts, and mostly using the web app like a desktop client with NVDA’s virtual mode turned off, mitigate this somewhat, but it still gets in the way far too often.
  2. Latency. I am on a quite fast VDSL 50 Mbit/s connection on my landline internet provider. Suffice it to say, this is quite fast already. The download of OS X Yosemite, 5.16 GB, takes under 20 minutes if the internet isn’t too busy. But still, managing e-mail, loading conversations, switching labels, collecting tweets in the Twitter web app over time, and browsing Facebook, especially when catching up with the over-night news feed, take quite some noticeable time to load, refresh, or fetch new stuff. First the new data is pulled from servers, second it is processed in the browser, which has to integrate it into the overloaded web applications it already has (see above), and third, all the changes need to be communicated to the screen reader I happen to be using at the time. On a single page load, this may not add up to much. But on a news feed, 50 or so e-mail threads, or various fetches of tweets, this adds up. I don’t even want to imagine how this would feel on a much slower connection that others have to cope with on a daily basis!

Yes, some of the above could probably be mitigated by using the mobile web offerings instead. But a) some web sites don’t allow a desktop browser to fetch their mobile site without the desktop browser faking a mobile one, and b) those are nowadays often so touch-optimized that keyboard or simulated mouse interaction often fails or is as cumbersome as waiting for the latent loads of the desktop version.

So whether it’s e-mail, Twitter, or Facebook, I found that dedicated clients still do a much better job at allowing me to be productive. The amount of data they seem to fetch is much smaller, or it at least feels that way. The way this new data is integrated feels faster even on last year’s mobile device, and the whole interface is so geared to the task at hand, without any clutter getting in the way, that one simply gets things done much faster over-all.

What many web applications for the desktop have not learned to do a good job at is to give users only what they currently need. For example, as I write this in my WordPress installation backend, besides the editor, I have all the stuff that allows me to create new articles, pages, categories, go into the WordPress settings, install new plugins etc. I have to navigate past this to the main section to start editing my article. This, for example, is made quick by the quick navigation feature of my screen reader, but even the fact that this whole baggage is there to begin with proves the point. I want to write an article. Why offer me all those distractions? Yes, for quick access and quick ways of switching tasks, some would say. But if I write an article, I write an article. Thanks to the WordPress apps for iOS and Android, which, when I write an article, don't put all the other available options in my face at the same time!

Or take Twitter or Facebook. All the baggage that those web apps carry around while one just wants to browse tweets is daunting! My wife recently described to me what the FB web site looks like to her in a browser, and the fact is that the point where the action is happening, the news feed, takes up only a roughly estimated 10 or 15 percent of the whole screen real estate. All the rest is either ads, or links to all kinds of things that Facebook has to offer besides the news feed. Zillions of groups, recommended friends, apps, games nobody plays, etc., etc., etc.

Same with Twitter. It shoves trending topics, other recommendations, and a huge menu of other stuff one would probably only need once a year down one’s throat. OK, desktop screens are big nowadays. But offering so many bells, whistles and other distractions constantly and all around the place cannot seriously be considered a good user experience, can it?

I realize this is purely written from the perspective of a blind guy who has never seen a user interface. I only know them from descriptions by others. But I find myself always applauding the focused, concise, and clean user interfaces much more than those that shove every available option down my throat on first launch. And that goes for all operating systems and platforms I use.

And if the web doesn’t learn to give me better, and in this case that means, more focused user interfaces where I don’t have to dig for the UI of the task I want to accomplish, I will continue to use mobile and desktop clients for e-mail, Twitter and others over the similar web offerings, even when those are technically accessible to my browser and screen reader.

So, to cut a long story short, I think many mainstream web applications are still not ready, at least for me, for productive use, despite their advancements in technical accessibility. And the reason is the usability of things for one, and the latency of fetching all that stuff over the internet even on fast connections, on the other hand.

Categorieën: Mozilla-nl planet

Pascal Finette: The Open Source Disruption

Mozilla planet - di, 28/10/2014 - 16:52

Yesterday I gave a talk at Singularity University’s Executive Program on Open Source Disruption - it’s (somewhat) new content I developed; here’s the abstract of my talk:

The Open Source movement has upended the software world: Democratizing access, bringing billion dollar industries to their knees, toppling giants and simultaneously creating vast opportunities for the brave and unconventional. After decades in the making, the Open Source ideology, being kindled by ever cheaper and better technologies, is spreading like wildfire - and has the potential to disrupt many industries.

In his talk, Pascal will take you on a journey from the humble beginnings to the end of software as we knew it. He will make a case for why Open Source is an unstoppable force and present you with strategies and tactics to thrive in this brave new world.

And here’s the deck.

Categorieën: Mozilla-nl planet

Mozilla: Spidermonkey ATE Apple's JavaScriptCore, THRASHED Google V8 - Register

Nieuws verzameld via Google - di, 28/10/2014 - 16:42

Mozilla: Spidermonkey ATE Apple's JavaScriptCore, THRASHED Google V8
Register
Mozilla Distinguished Engineer Robert O'Callahan reports that the Spidermonkey JavaScript engine, used by the Firefox web browser, has surpassed the performance of Google's V8 engine (used by Chrome) and Apple's JavaScript Core (used by Safari) on ...

Google News
Categorieën: Mozilla-nl planet

Yunier José Sosa Vázquez: Add-on SDK 1.17 now available

Mozilla planet - di, 28/10/2014 - 13:10

A new version of the tool created by Mozilla for developing add-ons has been released and is available from our website.

Download Add-on SDK 1.17.

This release does not bring big news or new features to the Add-on SDK; its main goals are updating the cfx command and keeping extensions compatible with the new Firefox versions (32+).

The biggest change to the Add-on SDK will arrive in the next version, when cfx is dropped in favour of JPM (Jetpack Manager), a Node.js module. According to Mozilla's developers, bundling the dependencies into each add-on with cfx was very complex, and JPM is simpler because it removes some of the tasks that cfx used to perform.

JPM will also let add-on developers create and use npm modules as dependencies in their add-ons. In this article published on the Developers site you can learn how to work with JPM and which changes you need to make to your add-on.
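
As an illustration of the new workflow (assuming jpm has been installed from npm; the commands below are taken from the jpm documentation and may change before the switch is final):

    npm install -g jpm           # install the Jetpack Manager from npm
    jpm init                     # scaffold a new add-on in the current directory
    jpm run -b /path/to/firefox  # run Firefox with the add-on installed
    jpm xpi                      # package the add-on as an .xpi file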

If you are interested in creating add-ons for Firefox, you can visit our Developers site and explore further. There you will find presentations, workshops and articles on the subject.

Before downloading Add-on SDK 1.17, remember that you can contribute to improving it by reporting bugs, reviewing the code and contributing your own fixes, or simply sharing your impressions of this new version.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Input: Removing the frontpage chart

Mozilla planet - di, 28/10/2014 - 11:00

I've been working on Input for a while now. One of the things I've actively disliked was the chart on the front page. This blog post talks about why I loathe it and then what's happening this week.

First, here's the front page dashboard as it is today:

[Screenshot: Input front page dashboard (October 2014)]

When I started, Input gathered feedback solely on the Firefox desktop web browser. It was a one-product feedback gathering site. Because it was gathering feedback for a single product, the front page dashboard was entirely about that single product. All the feedback talked about that product. The happy/sad chart was about that product. Today, Input gathers feedback for a variety of products.

When I started, it was nice to have a general happy/sad chart on the front page because no one really looked at it and the people who did look at it understood why the chart slants so negatively. So the people who did look at it understood the heavy negative bias and could view the chart as such. Today, Input is viewed by a variety of people who have no idea how feedback on Input works or why it's so negatively biased.

When I started, Input didn't expose the data in helpful ways allowing people to build their own charts and dashboards to answer their specific questions. Thus there was a need for a dashboard to expose information from the data Input was gathering. I contend that the front page dashboard did this exceedingly poorly--what do the happy/sad lines actually mean? If they dip, what does that mean? If they spike, what does that mean? There's not enough information in the chart to draw any helpful conclusions. Today, Input has an API allowing anyone to fetch data from Input in JSON format and generate their own dashboards, of which there are several out there.
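
As a minimal sketch of what pulling data from that API can look like in Python (the endpoint URL and the response field names here are assumptions from memory, so check the current Input API documentation before relying on them):

    import json
    import urllib.request

    # Hypothetical endpoint; verify against the current Input API documentation.
    URL = "https://input.mozilla.org/api/v1/feedback/?product=Firefox"

    with urllib.request.urlopen(URL) as resp:
        data = json.load(resp)

    # Count happy vs. sad responses in the most recent page of feedback.
    results = data.get("results", [])
    happy = sum(1 for item in results if item.get("happy"))
    print(happy, "happy out of", len(results))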

When I started, Input received some spam/abuse feedback, but the noise was far outweighed by the signal. Today, we get a ton of spam/abuse feedback. We still have no good way of categorizing spam/abuse as such and removing it from the system. That's something I want to work on more, but haven't had time to address. In the meantime, the front page dashboard chart has a lot of spammy noise in it. Thus the happy/sad lines aren't accurate.

Thus I argue we've far outlived the usefulness of the chart on the front page and it's time for it to go away.

So, what happens now? Bug 1080816 covers removing the front page dashboard chart. It covers some other changes to the front page, but I think I'm going to push those off until later since they're all pretty "up in the air".

If you depend on the front page dashboard chart, toss me an email. Depending on how many people depend on the front page chart and what the precise needs are, maybe we'll look into writing a better one.

Categorieën: Mozilla-nl planet

Mozilla Firefox 33.0.1 'Enhanced' Web Browser Now Available for Official ... - International Business Times UK

Nieuws verzameld via Google - di, 28/10/2014 - 08:41

Mozilla Firefox 33.0.1 'Enhanced' Web Browser Now Available for Official ...
International Business Times UK
Firefox 33.0.1 lets users connect to HTTP proxy, over HTTPS. Finally, users in Azerbaijan get to use Firefox in their native Azerbaijani language. Mozilla's latest Firefox instalment also features developer-specific enhancements, along with ...

Google News
Categorieën: Mozilla-nl planet

Byron Jones: happy bmo push day!

Mozilla planet - di, 28/10/2014 - 07:00

the following changes have been pushed to bugzilla.mozilla.org:

  • [1083790] Default version does not take into account is_active
  • [1078314] Missing links and broken unicode characters in some bugmail
  • [1072662] decrease the number of messages a jobqueue worker will process before terminating
  • [1086912] Fix BugUserLastVisit->get
  • [1062940] Please increase bmo’s alias length to match bugzilla 5.0 (40 chars instead of 20)
  • [1082113] The ComponentWatching extension should create a default watch user with a new database installation
  • [1082106] $dbh->bz_add_columns creates a foreign key constraint causing failure in checksetup.pl when it tries to re-add it later
  • [1084052] Only show “Add bounty tracking attachment” links to people who actually might do that (not everyone in core-security)
  • [1075281] bugmail filtering using “field name contains” doesn’t work correctly with flags
  • [1088711] New bugzilla users are unable to use bug templates
  • [1076746] Mentor field is missing in the email when a bug gets created
  • [1087525] fix movecomponents.pl creating duplicate rows in flag*clusions

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categorieën: Mozilla-nl planet

Karl Dubost: List Of Google Web Compatibility Bugs On Firefox

Mozilla planet - di, 28/10/2014 - 06:17

As of today, there are 706 Firefox Mobile bugs and 205 Firefox Desktop bugs in the Mozilla Web Compatibility activity. These are OPEN bugs for many different companies (not only Google). We could add to that the 237 any-browser bugs already collected on Webcompat.com. Help is welcome.

On these, we have a certain number of bugs related to Google Web properties.

Google and Mozilla

Let's make it very clear. We have an ongoing, open discussion channel with Google about these bugs. I will not cite names of the people at Google helping to deal with them, for their own sake and privacy, but they do the best they can to get us a resolution. That is not the case with every company. They know who they are, and I want to thank them for this.

That said, there are long-standing bugs where Firefox can't properly access the services developed by Google. The nature of the bugs is diverse. It can be wrong user agent sniffing, -webkit- CSS or JS, or a codepath in JS using a very specific feature of Chrome. The most frustrating ones for the Web Compatibility people (and in turn for users) are the ones where you can see that there's a bit of CSS here and there breaking the user experience but in the end, it seems it could work.
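
To make the user agent sniffing case concrete, here is a minimal, entirely hypothetical sketch of the kind of server-side check that produces these bugs; it is not taken from any actual Google code:

    def pick_experience(user_agent):
        # Broken approach: an allow-list of engine names that was never
        # updated, so Gecko-based browsers like Firefox fall through to
        # the degraded page even though they support the needed features.
        if "Chrome" in user_agent or "Safari" in user_agent:
            return "full-featured site"
        return "basic HTML fallback"

The robust alternative is to detect the needed features at runtime in the client rather than guessing capabilities from the user agent string.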

The issue is often with the "it seems it could work". At first sight we may have the impression that it will work, and then there is a hidden feature which has not been completely tested.

Also, Firefox is not the only browser with issues on Google services. The Opera browser, even after its switch to Blink, still has issues with some Google services.

List Of Google Bugs For Firefox Browsers

Here is a non-exhaustive list of bugs which are still open on Bugzilla. If you find bugs which are no longer valid or are resolved, please let us know. We do our best to find out what is resolved, but we might miss some. The more you test, the more it helps all users. With detailed bug reports and analysis of the issues we have better and more useful communication with Google engineers for eventually fixing the bugs.

You may find similar bugs on webcompat.com. We haven't yet made the full transition.

You can help us.

Otsukare.

Categorieën: Mozilla-nl planet

Allison Naaktgeboren: Applying Privacy Series: The 1st meeting

Mozilla planet - di, 28/10/2014 - 01:05

The 1st Meeting

Product Manager: People, this could be a game changer! Think of the paid content we could open up to non-english speakers in those markets. How fast can we get it into trunk?

Engineer: First we have to figure out what *it* is.

Product Manager: I want users to be able to click a button in the main dropdown menu and translate all their text.

Engineering Manager: Shouldn’t we verify with the user which language they want? Many in the EU speak multiple languages. Also do we want translation per page?

Product Manager:  Worry about translation per page later. Yeah, verify with user is fine as long as we only do it once.

Engineering Manager: It doesn’t quite work like that. If you want translation per page later, we’ll need to architect this so it can support that in the future.

Product Manager: …Fine

Engineer: What about pages that fail translation? What would we display in that case?

Product Manager: Throw an error bar at the top and show the original page. That’ll cover languages the service can’t handle too. Use the standard error template from UX.

Engineering Manager: What device actually does the translation? The phone?

Product Manager: No, make the server do it, bandwidth on phones is precious and spotty. When they start up the phone next, it should download the content already translated to our app.

Engineer: Ok, well if there’s a server involved, we need to talk to the Ops folks.

Engineering Manager: and the DBAs. We’ll also need to find who is the expert on user data handling. We could be handling a lot of that before this is out.

Project Manager: Next UI release is in 6 weeks. I’ll see about scheduling some time with Ops and the database team.

Product Manager: Can you guys pull it off?

Engineer: Depends on the server folks’ schedule.

Who brought up user data safety & privacy concerns in this conversation?

The Engineering Manager.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Are We Fast Yet? Yes We Are!

Mozilla planet - ma, 27/10/2014 - 23:58

Spidermonkey has passed V8 on Octane performance on arewefastyet, and is now leading V8 and JSC on Octane, Sunspider and Kraken.

Does this matter? Yes and no. On one hand, it's just a few JS benchmarks, real-world performance is much more complicated, and it's entirely possible that V8 (or even Chakra) could take the lead again in the future. On the other hand, beating your competitors on their own benchmarks is much more impressive than beating your competitors on benchmarks which you co-designed along with your engine to win on, which is the story behind most JS benchmarking to date.

This puts us in a position of strength, so we can say "these benchmarks are not very interesting; let's talk about other benchmarks (e.g. asm.js-related) and language features" without being accused of being sore losers.

Congratulations to the Spidermonkey team; great job!

Categorieën: Mozilla-nl planet

Gregory Szorc: Implications of Using Bugzilla for Firefox Patch Development

Mozilla planet - ma, 27/10/2014 - 23:25

Mozilla is very close to rolling out a new code review tool based on Review Board. When I became involved in the project, I viewed it as an opportunity to start from a clean slate and design the ideal code development workflow for the average Firefox developer. When the design of the code review experience was discussed, I would push for decisions that were compatible with my utopian end state.

As part of formulating the ideal workflows and design of the new tool, I needed to investigate why we do things the way we do, whether they are optimal, and whether they are necessary. As part of that, I spent a lot of time thinking about Bugzilla's role in shaping the code that goes into Firefox. This post is a summary of my findings.

The primary goal of this post is to dissect the practices that Bugzilla influences and to prepare the reader for the potential to reassemble the pieces - to change the workflows - in the future, primarily around Mozilla's new code review tool. By showing that Bugzilla has influenced the popularization of what I consider non-optimal practices, it is my hope that readers start to question the existing processes and open up their mind to change.

Since the impetus for this post is the near deployment of Mozilla's new code review tool, many of my points will focus on code review.

Before I go into my findings, I'd like to explicitly state that while many of the things I'm about to say may come across as negativity towards Bugzilla, my intentions are not to put down Bugzilla or the people who maintain it. Yes, there are limitations in Bugzilla. But I don't think it is correct to point fingers and blame Bugzilla or its maintainers for these limitations. I think we got where we are following years of very gradual shifts. I don't think you can blame Bugzilla for the circumstances that led us here. Furthermore, Bugzilla maintainers are quick to admit the faults and limitations of Bugzilla. And, they are adamant about and instrumental in rolling out the new code review tool, which shifts code review out of Bugzilla. Again, my intent is not to put down Bugzilla. So please don't direct ire that way yourself.

So, let's drill down into some of the implications of using Bugzilla.

Difficult to Separate Contexts

The stream of changes on a bug in Bugzilla (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear, singular topic flow. However, real bug activity does not happen this way. All but the most trivial bugs usually involve multiple points of discussion. You typically have discussion about what the bug is. When a patch comes along, reviewer feedback comes in both high-level and low-level forms. Each item in each group is its own logical discussion thread. When patches land, you typically have points of discussion tracking the state of this patch. Has it been tested, does it need uplift, etc.

Bugzilla has things like keywords, flags, comment tags, and the whiteboard to enable some isolation of these various contexts. However, you still have a flat, linear list of plain text comments that contain the meat of the activity. It can be extremely difficult to follow these many interleaved logical threads.

In the context of code review, lumping all review comments into the same linear list adds overhead and undermines the process of landing the highest-quality patch possible.

Review feedback consists of both high-level and low-level comments. High-level would be things like architecture discussions. Low-level would be comments on the code itself. When these two classes of comments are lumped together in the same text field, I believe it is easy to lose track of the high-level comments and focus on the low-level. After all, you may have a short paragraph of high-level feedback right next to a mountain of low-level comments. Your eyes and brain tend to gravitate towards the larger set of more concrete low-level comments because you sub-consciously want to fix your problems and that large mass of text represents more problems, easier problems to solve than the shorter and often more abstract high-level summary. You want instant gratification and the pile of low-level comments is just too tempting to pass up. We have to train ourselves to temporarily ignore the low-level comments and focus on the high-level feedback. This is very difficult for some people. It is not an ideal disposition. Benjamin Smedberg's recent post on code review indirectly talks about some of this by describing his rational approach of tackling the high-level feedback first.

As review iterations occur, the bug devolves into a mix of comments related to high and low-level comments. It thus becomes harder and harder to track the current high-level state of the feedback, as they must be picked out from the mountain of low-level comments. If you've ever inherited someone else's half-finished bug, you know what I'm talking about.

I believe that Bugzilla's threadless and contextless comment flow disposes us towards focusing on low-level details instead of the high-level. I believe that important high-level discussions aren't occurring at the rate they need and that technical debt increases as a result.

Difficulty Tracking Individual Items of Feedback

Code review feedback consists of multiple items of feedback. Each one is related to the review at hand. But oftentimes each item can be considered independent from others, relevant only to a single line or section of code. Style feedback is one such example.

I find it helps to model code review as a tree. You start with one thing you want to do. That's the root node. You split that thing into multiple commits. That's a new layer on your tree. Finally, each comment on those commits and the comments on those comments represent new layers to the tree. Code review thus consists of many related, but independent branches, all flowing back to the same central concept or goal. There is a one to many relationship at nearly every level of the tree.

Again, Bugzilla lumps all these individual items of feedback into a linear series of flat text blobs. When you are commenting on code, you do get some code context printed out. But everything is plain text.

The result of this is that tracking the progress on individual items of feedback - individual branches in our conceptual tree - is difficult. Code authors must pore through text comments and manually keep an inventory of their progress towards addressing the comments. Some people copy the review comment into another text box or text editor and delete items once they've fixed them locally! And, when it comes time to review the new patch version, reviewers must go through the same exercise in order to verify that all their original points of feedback have been adequately addressed! You've now redundantly duplicated the feedback tracking mechanism among at least two people. That's wasteful in and of itself.

Another consequence of this unstructured feedback tracking mechanism is that points of feedback tend to get lost. On complex reviews, you may be sorting through dozens of individual points of feedback. It is extremely easy to lose track of something. This could have disastrous consequences, such as the accidental creation of a 0day bug in Firefox. OK, that's a worst case scenario. But I know from experience that review comments can and do get lost. This results in new bugs being filed, author and reviewer double checking to see if other comments were not acted upon, and possibly severe bugs with user impacting behavior. In other words, this unstructured tracking of review feedback tends to lessen code quality and is thus a contributor to technical debt.

Fewer, Larger Patches

Bugzilla's user interface encourages the writing of fewer, larger patches. (The opposite would be many, smaller patches - sometimes referred to as micro commits.)

This result is achieved by a user interface that handles multiple patches so poorly that it effectively discourages that approach, driving people to create larger patches.

The stream of changes on a bug (including review comments) is a flat, linear list of plain text comments. This works great when the activity of a bug follows a nice, linear flow. However, reviewing multiple patches doesn't work in a linear model. If you attach multiple patches to a bug, the review comments and their replies for all the patches will be interleaved in the same linear comment list. This flies in the face of the reality that each patch/review is logically its own thread that deserves to be followed on its own. The end result is that it is extremely difficult to track what's going on in each patch's review. Again, we have different contexts - different branches of a tree - all living in the same flat list.

Because conducting review on separate patches is so painful, people are effectively left with two choices: 1) write a single, monolithic patch 2) create a new bug. Both options suck.

Larger, monolithic patches are harder and slower to review. Larger patches require much more cognitive load to review, as the reviewer needs to capture the entire context in order to make a review determination. This takes more time. The increased surface area of the patch also increases the likelihood that the reviewer will find something wrong and will require a re-review. The added complexity of a larger patch also means the chances of a bug creeping in are higher, leading to more bugs being filed and more reviews later. The more review cycles the patch goes through, the greater the chances it will suffer from bit rot and will need updating before it lands, possibly incurring yet more rounds of review. And, since we measure progress in terms of code landing, the delay to get a large patch through many rounds of review makes us feel lethargic and demotivates us. Large patches have intrinsic properties that lead to compounding problems and increased development cost.

As bad as large patches are, they are roughly in the same badness range as the alternative: creating more bugs.

When you create a new bug to hold the context for the review of an individual commit, you are doing a lot of things, very few of them helpful. First, you must create a new bug. There's overhead to do that. You need to type in a summary, set up the bug dependencies, CC the proper people, update the commit message in your patch, upload your patch/attachment to the new bug, mark the attachment on the old bug obsolete, etc. This is arguably tolerable, especially with tools that can automate the steps (although I don't believe there is a single tool that does all of what I mentioned automatically). But the badness of multiple bugs doesn't stop there.

Creating multiple bugs fragments the knowledge and history of your change and diminishes the purpose of a bug. You got in the situation of creating multiple bugs because you were working on a single logical change. It just so happened that you needed/wanted multiple commits/patches/reviews to represent that singular change. That initial change was likely tracked by a single bug. And now, because of Bugzilla's poor user interface around multiple patch reviews, you find yourself creating yet another bug. Now you have two bug numbers - two identifiers that look identical, only varying by their numeric value - referring to the same logical thing. We started with a single bug number referring to your logical change and created what are effectively sub-issues, but allocated them in the same namespace as normal bugs. We've diminished the importance of the average bug. We've introduced confusion as to where one should go to learn about this single, logical change. Should I go to bug X or bug Y? Sure, you can likely go to one and ultimately find what you were looking for. But that takes more effort.

Creating separate bugs for separate reviews also makes refactoring harder. If you are going the micro commit route, chances are you do a lot of history rewriting. Commits are combined. Commits are split. Commits are reordered. And if those commits are all mapping to individual bugs, you potentially find yourself in a huge mess. Combining commits might mean resolving bugs as duplicates of each other. Splitting commits means creating yet another bug. And let's not forget about managing bug dependencies. Do you set up your dependencies so you have a linear, waterfall dependency corresponding to commit order? That logically makes sense, but it is hard to keep in sync. Or, do you just make all the review bugs depend on a single parent bug? If you do that, how do you communicate the order of the patches to the reviewer? Manually? That's yet more overhead. History rewriting - an operation that modern version control tools like Git and Mercurial have enabled to be a lightweight operation and users love because it doesn't constrain them to pre-defined workflows - thus becomes much more costly. The cost may even be so high that some people forego rewriting completely, trading their effort for some poor reviewer who has to inherit a series of patches that isn't organized as logically as it could be. Like larger patches, this increases the cognitive load required to perform reviews and increases development costs.

As you can see, reviewing multiple, smaller patches with Bugzilla often leads to a horrible user experience. So, we find ourselves writing larger, monolithic patches and living with their numerous deficiencies. At least with monolithic patches we have a predictable outcome for how interaction with Bugzilla will play out!

I have little doubt that large patches (whose existence is influenced by the UI of Bugzilla) slows down the development velocity of Firefox.

Commit Message Formatting

The heavy involvement of Bugzilla in our code development lifecycle has influenced how we write commit messages. Let's start with the obvious example. Here is our standard commit message format for Firefox:

Bug 1234 - Fix some feature foo; r=gps

The bug is right there at the front of the commit message. That prominent placement is effectively saying the bug number is the most important detail about this commit - everything else is ancillary.

Now, I'm sure some of you are saying, but Greg, the short description of the change is obviously more important than the bug number. You are right. But we've allowed ourselves to make the bug and the content therein more important than the commit.

Supporting my theory is the commit message content following the first/summary line. That data is almost always - wait for it - nothing: we generally don't write commit messages that contain more than a single summary line. My repository forensics show that less than 20% of commit messages to Firefox in 2014 contain multiple lines (this excludes merge and backout commits). (We are doing better than 2013 - the rate was less than 15% then).

Our commit messages are basically saying, here's a highly-abbreviated summary of the change and a pointer (a bug number) to where you can find out more. And of course loading the bug typically reveals a mass of interleaved comments on various topics, hardly the high-level summary you were hoping was captured in the commit message.

Before I go on, in case you are on the fence as to the benefit of detailed commit messages, please read Phabricator's recommendations on revision control and writing reviewable code. I think both write-ups are terrific and are excellent templates that apply to nearly everyone, especially a project as large and complex as Firefox.
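
As a made-up illustration of what those write-ups suggest (the bug number, reviewer, and details are invented, echoing the example above):

    Bug 1234 - Fix some feature foo; r=gps

    foo was recomputing its cache on every call, which regressed page load
    times. Move the cache invalidation into the preference observer so it
    only runs when the underlying pref actually changes.

    The new behaviour is covered by an added unit test.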

Anyway, there are many reasons why we don't capture a detailed, multi-line commit message. For starters, you aren't immediately rewarded for doing it: writing a good commit message doesn't really improve much in the short term (unless someone yells at you for not doing it). This is a generic problem applicable to all organizations and tools. This is a problem that culture must ultimately rectify. But our tools shouldn't reinforce the disposition towards laziness: they should reward best practices.

I don't think Bugzilla and our interactions with it do an adequate job of rewarding good commit message writing. Chances are your mechanism for posting reviews to Bugzilla, or for announcing the landing of a commit (pasting the URL in the simple case), brings up a text box for you to type review notes, a patch description, or extra context for the landing. These should be going in the commit message, as they are the type of high-level context and summarizations of choices or actions that people crave when discerning the history of a repository. But because that text box is there, taunting you with its presence, we write content there instead of in the commit message. Even where tools like bzexport exist to upload patches to Bugzilla, potentially nipping this practice in the bud, they still engage in frustrating behavior like reposting the same long commit message on every patch upload, producing unwanted bug spam. Even a tool that is pretty sensibly designed has an implementation detail that undermines a good practice.

Machine Processing of Patches is Difficult

I have a challenge for you: identify all patches currently under consideration for incorporation in the Firefox source tree, run static analysis on them, and tell me if they meet our code style policies.

This should be a solved problem and deployed system at Mozilla. It isn't. Part of the problem is because we're using Bugzilla for conducting review and doing patch management. That may sound counter-intuitive at first: Bugzilla is a centralized service - surely we can poll it to discover patches and then do stuff with those patches. We can. In theory. Things break down very quickly if you try this.
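To be concrete about the "in theory" part: the polling side really is easy. Something like the following sketch - which assumes the stock Bugzilla REST attachment endpoint and its is_patch/is_obsolete/data fields, and ignores authentication and pagination - gets you the raw patch files:

# Sketch: given some bug IDs, pull down every non-obsolete patch
# attachment so it can be fed to analysis tooling. Assumes the stock
# Bugzilla REST endpoint /rest/bug/<id>/attachment and the fields it
# returns; auth, pagination, and error handling are ignored.
import base64
import json
from urllib.request import urlopen

BZ = 'https://bugzilla.mozilla.org/rest'

def patches_for_bug(bug_id):
    with urlopen('%s/bug/%d/attachment' % (BZ, bug_id)) as resp:
        data = json.loads(resp.read().decode('utf-8'))
    for att in data['bugs'][str(bug_id)]:
        if att['is_patch'] and not att['is_obsolete']:
            yield att['file_name'], base64.b64decode(att['data'])

if __name__ == '__main__':
    for bug in (1234567,):  # bug IDs would come from a search query
        for name, patch in patches_for_bug(bug):
            print(bug, name, len(patch), 'bytes')

Everything after that point is where the trouble starts.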

We are uploading patch files to Bugzilla. Patch files are representations of commits that live outside a repository. In order to get the full context - the result of the patch file - you need all the content leading up to that patch file - the repository data. When a naked patch file is uploaded to Bugzilla, you don't always have this context.

For starters, you don't know with certainty which repository the patch belongs to because that isn't part of the standard patch format produced by Mercurial or Git. There are patches for various repositories floating around in Bugzilla. So now you need a way to identify which repository a patch belongs to. It is a solvable problem (aggregate data for all repositories and match patches based on file paths, referenced commits, etc), albeit one Mozilla has not yet solved (but should).
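A crude sketch of that aggregate-and-match idea follows; the candidate repository list and the scoring heuristic here are purely illustrative:

# Sketch: guess which repository a naked patch belongs to by checking
# how many of the paths it touches actually exist in each candidate
# repository. Repo list and scoring heuristic are illustrative only.
import re
import subprocess

REPOS = {
    'mozilla-central': '/repos/mozilla-central',   # hypothetical local clones
    'comm-central': '/repos/comm-central',
}

def touched_paths(patch_text):
    # Works for hg/git style diffs that use "+++ b/<path>" headers.
    return set(re.findall(r'^\+\+\+ b/(\S+)', patch_text, re.M))

def repo_files(repo_path):
    out = subprocess.check_output(['hg', '-R', repo_path, 'manifest'])
    return set(out.decode().splitlines())

def guess_repo(patch_text):
    paths = touched_paths(patch_text)
    if not paths:
        return None
    scores = {name: len(paths & repo_files(path))
              for name, path in REPOS.items()}
    return max(scores, key=scores.get)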

Assuming you can identify the repository a patch belongs to, you need to know the parent commit so you can apply this patch. Some patches list their parent commits. Others do not. Even those that do may lie about it. Patches in MQ don't update their parent field when they are pushed, only after they are refreshed. You could be testing and uploading a patch with a different parent commit than what's listed in the patch file! Even if you do identify the parent commit, this commit could belong to another patch under consideration that's also on Bugzilla! So now you need to assemble a directed graph with all the patches known from Bugzilla applied. Hopefully they all fit in nicely.
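For reference, the metadata in question lives in the "# Node ID" and "# Parent" header lines of an hg export-style patch, so extracting it - when it is present and truthful - is the easy part. A quick sketch:

# Sketch: pull the changeset and parent node IDs out of an "hg export"
# style patch header, when the patch has one at all. Naked diffs and
# git-formatted patches simply yield an empty result.
import re

HEADER_RE = re.compile(r'^# (Node ID|Parent)\s+([0-9a-f]{12,40})', re.M)

def hg_patch_metadata(patch_text):
    """Return e.g. {'Node ID': '...', 'Parent': '...'} (possibly empty)."""
    meta = {}
    for key, value in HEADER_RE.findall(patch_text):
        meta.setdefault(key, value)   # keep only the first parent
    return meta

example = """\
# HG changeset patch
# User Some Hacker <hacker@example.com>
# Node ID 3b8f0d9a1c2e4f5a6b7c8d9e0f1a2b3c4d5e6f7a
# Parent  0123456789abcdef0123456789abcdef01234567
Bug 1234 - Fix some feature foo; r=gps
"""

print(hg_patch_metadata(example))

The hard part, as described above, is deciding whether you can trust what you just extracted.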

Of course, some patches don't have any metadata at all: they are just naked diffs or are malformed commits produced by tools that e.g. attempt to convert Git commits to Mercurial commits (Git users: you should be using hg-git to produce proper Mercurial commits for Firefox patches).

Because Bugzilla is talking in terms of patch files, we often lose much of the context needed to build nice tools, preventing numerous potential workflow optimizations through automation. There are many things machines could be doing for us (such as looking for coding style violations). Instead, humans are doing this work and costing Mozilla a lot of time and lost developer productivity in the process. (A human costs ~$100/hr. A machine on EC2 is pennies per hour and should do the job with lower latency. In other words, you can operate over 300 machines 24 hours a day for what you pay an engineer to work an 8 hour shift.)
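As a flavor of the grunt work machines could absorb, here is a sketch that applies a candidate patch to a throwaway checkout and runs a style checker over only the files it touches. flake8 is a stand-in here; a real system would run Mozilla's actual style and analysis tooling:

# Sketch: apply a candidate patch to a clean checkout and run a style
# checker over just the Python files it touches. flake8 is a stand-in
# for whatever analysis tooling a real system would run.
import subprocess
import sys

def check_patch(repo, patch_path):
    hg = ['hg', '-R', repo]
    subprocess.check_call(hg + ['update', '--clean'])
    subprocess.check_call(hg + ['import', '--no-commit', patch_path])
    # Ask Mercurial which files the patch actually touched.
    out = subprocess.check_output(hg + ['status', '-ma', '--no-status'])
    files = [f for f in out.decode().splitlines() if f.endswith('.py')]
    if not files:
        return 0
    return subprocess.call(['flake8'] + ['%s/%s' % (repo, f) for f in files])

if __name__ == '__main__':
    sys.exit(check_patch(sys.argv[1], sys.argv[2]))

None of this is hard once you have commits with full context; it is the patch-file-in-a-bug model that makes it painful.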

Conclusion

I have outlined a few of the side-effects of using Bugzilla as part of our day-to-day development, review, and landing of changes to Firefox.

There are several takeaways.

First, one cannot argue with the fact that Firefox development is bug(zilla) centric. Nearly every important milestone in the lifecycle of a patch involves Bugzilla in some way. This has its benefits and drawbacks. This article has identified many of the drawbacks. But before you start crying to expunge Bugzilla from the loop completely, consider the benefits, such as a place anyone can go to add metadata or comments on something. That's huge. There is a larger discussion to be had here. But I don't want to invite it quite yet.

A common thread between many of the points above is Bugzilla's unstructured and generic handling of code and metadata attached to it (patches, review comments, and landing information). Patches are attachments, which can be anything under the sun. Review comments are plain text comments with simple author, date, and tag metadata. Landings are also communicated by plain text review comments (at least initially - keywords and flags are used in some scenarios).

By being a generic tool, Bugzilla throws away a lot of the rich metadata that we produce. That data is still technically there in many scenarios. But it becomes extremely difficult if not practically impossible for both humans and machines to access efficiently. We lose important context and feedback by normalizing all this data to Bugzilla. This data loss creates overhead and technical debt. It slows Mozilla down.

Fortunately, the solutions to these problems and shortcomings are conceptually simple (and generally applicable): preserve rich context. In the context of patch distribution, push commits to a repository and tell someone to pull those commits. In the context of code review, create sub-reviews for different commits and allow tracking and easy-to-follow (likely threaded) discussions on found issues. Design workflows to be code first, not tool or bug first. Optimize workflows to minimize people time. Lean heavily on machines to do grunt work. Integrate issue tracking and code review, but not too tightly (loosely coupled, highly cohesive). Let different tools specialize in the handling of different forms of data: let code review handle code review. Let Bugzilla handle issue tracking. Let a landing tool handle tracking the state of landings. Use middleware to make them appear as one logical service if they aren't designed to be one from the start (as is Mozilla's case with Bugzilla).

Another solution that's generally applicable is to refine and optimize the whole process to land a finished commit. Your product is based on software. So anything that adds overhead or loss of quality in the process of developing that software is fundamentally a product problem and should be treated as such. Any time and brain cycles lost to development friction or bugs that arise from things like inadequate code review tools degrade the quality of your product and take away from the user experience. This should be plain to see. Attaching a cost to this to convince the business-minded folks that it is worth addressing is a harder matter. I find management with empathy and shared understanding of what amazing tools can do helps a lot.

If I had to sum up the solution in one sentence, it would be: invest in tools and developer happiness.

I hope to soon publish a post on how Mozilla's new code review tool addresses many of the workflow deficiencies present today. Stay tuned.

Categorieën: Mozilla-nl planet

Mozilla preps Firefox OS for the Raspberry Pi - LinuxGizmos

Nieuws verzameld via Google - ma, 27/10/2014 - 23:19

LinuxGizmos

Mozilla preps Firefox OS for the Raspberry Pi
LinuxGizmos
At the Mozilla Festival (MozFest) in the UK, held Oct. 24-26, Mozilla revealed a version of Firefox OS for the Raspberry Pi single board computer, and said it would “prepare and maintain a way for current Raspberry Pi board owners to download and flash ...
Mozilla to make Firefox OS a tasty filling for a Raspberry Pi - Inquirer
Mozilla hopes to challenge Raspbian as RPi OS of choice - Register
Mozilla Wants Firefox OS to Have a Feed on Raspberry Pi - OS News

all 5 news articles »
Categorieën: Mozilla-nl planet

Soledad Penades: MozFest 2014, day 2

Mozilla planet - ma, 27/10/2014 - 23:05

As I was typing the final paragraphs of my previous post, hundreds of Flame devices were being handed to MozFest attendees that had got involved on sessions the day before.

When I arrived (late, because I felt like a lazy slug), there was a queue in the flashing station, which was, essentially, a table with a bunch of awesome Mozilla employees and volunteers from all over the world, working in shifts to make sure all those people with phones using Firefox OS 1.3 were upgraded to 2.1. I don’t have the exact numbers, but I believe the amount was close to 1000 phones. One thousand phones. BAM. Super amazing work, friends. **HATS OFF**

Flamemania at the MEGABOOTH #mozfest pic.twitter.com/Q6RRDXBIss

— solendid (@supersole) October 26, 2014

Potch was also improving the Flame starter guide. It had been renamed to Flame On, so go grab that one if you got a Flame and want to know what you can do now. If you want to contribute to the guide, here’s the code.

New Flame? Start here http://t.co/i5uA0lU0Er #mozfest pic.twitter.com/NJbyWeHqXb

— solendid (@supersole) October 26, 2014

I (figuratively) rolled up my sleeves (I was wearing a t-shirt), and joined Potch’s table in their effort to enable ADB+DevTools in the newly unboxed phones, so that then the flashing table could jump straight to that part of the process. Not everybody knew about that and they went directly to the other queue, so at some point Marcia went person by person and enabled ADB+DevTools in those phones. Which I found when I tried to help by making sure everybody had that done… and turns out that had already happened. Too late, Sole!

"FLASHING MEANS UPGRADING" #mozfest #flame pic.twitter.com/7PBQmWPhoV

— solendid (@supersole) October 26, 2014

They called us for “the most iconic clipart in Mozilla” i.e. the group photo. After we posed seriously (“interview picture”), smiling and scaring, we went upstairs again to deal with the flow of new Flame owners.

I helped a bunch of people setting up WebIDE and explained to them how it could be used to quickly get started in developing apps, install the simulators, try their app in them and in the phones, etc. But (cue dramatic voice) I saw versions of Firefox I hadn’t seen for years, had to reminisce about things I hadn’t done in even longer (installing udev rules) and also did things that looked like they were straight out of a nightmare (like installing an unsigned driver in Windows 8). Basically, getting this up and running is super easy in Mac, less so in Linux, and quite tedious in Windows. The good news is: we’re working on making the connection part easier!

My favourite part of helping someone set this environment up was then asking, and learning, about how they planned to use it, and how’s tech like in their countries. As I said, MozFest has people from all the places, and that gave me the chance to understand how they use our tools. For example they might just have intermittent internet access which is also metered, BUT they have pretty decent local networks in schools or unis, so it’s feasible to get just one person to get the data (e.g. an updated Firefox) and then everyone else can go to the uni with your laptop to copy all that data. We also had a chance to discuss what sort of apps they are looking to build, and hopefully we will keep in touch so that I can help empower and teach them and then they can spread that knowledge to more people locally! Yay collaboration!

At some point I went out to get some food and get some quiet time. I was drinking water constantly so that was good for my throat but I was feeling little stings of pain occasionally.

On the way back, I grabbed some coffee nearby, and when I entered the college I stumbled upon Rosana, Krupa and Amy who were having some interesting discussions on the lobby. We left with a great life lesson from Amy: if someone is acting like a jerk, perhaps they have a terrible shitty job.

Upstairs to the 6th floor again, I stumble upon Bobby this time and we run a quick postmortem-so-far: mostly good experience, but I feel there’s too much noise for bigger groups, I get distracted and it’s terrible. I also should not let supertechnical people hijack conversations that scare less tech-savvy people away, and I should also explicitly ask each person for questions, not leave it up to them to ask (because they might be afraid of taking the initiative). I should know better, but I don’t facilitate sessions every day so I’m a bit out of my element. I’ll get better! I let Bobby eat his lunch (while standing), and go back to the MEGABOOTH. It’s still a hive of activity. I help where I can. WebIDE questions, Firefox OS questions, you name it.

I also had a chance to chat with Ioana, Flaki, Bebe and other members of the Romania community. We had interacted via Twitter before but never met! They’re supercool and I’m going to be visiting their country next month so we’re all super excited! Yay!

As the evening approaches the area starts to calm down. At some point we can see the flashing station volunteers again, once the queue is gone. They are still in one piece! I start to think they’re superhuman.

BIG SHOUT OUT to the flashing champions who've basically updated 1000 phones today – THANKS #mozfest pic.twitter.com/C7B45gyKZp

— solendid (@supersole) October 26, 2014

It’s starting to be demo-time again! I move downstairs to the 4th floor where people are installing screens and laptops for their demos, but before I know it someone comes nearby and entices me to go back to the Art Room where they are starting the party already. How can I say no?

I go there; they’ve turned off the lights for better atmosphere and so we can see the projected works in all their glory. It feels like being in the Tate Tanks, i.e. great and definitely atmospheric!

Ended up in the art room again… #mozfest pic.twitter.com/G12wuf2Hyd

— solendid (@supersole) October 26, 2014

Forrest from NoFlo is there; he’s used Mirobot, a Logo-like robot kit, connected to NoFlo to program it (and I think it used the webcam as input too):

Mirobot + Noflo #mozfest pic.twitter.com/J9M7UbGCfM

— solendid (@supersole) October 26, 2014

When I come back to the 4th they’re having some announcements and wrap-up speeches, thanking everyone who’s put their efforts into making the festival a success. There’s a mention for pretty much everyone, so our hands hurt! Also, Dees revealed his true inner self:

.@cyberdees revealing his true inner self #mozfest pic.twitter.com/OJIyoNfqUX

— solendid (@supersole) October 26, 2014

I realised Bobby had changed to wear even more GOLD. Space wranglers could be identified because they were wearing a sort of golden pashmina, but Bobby took it further. Be afraid, Mr. Tom “Gold pants” Dale!

MEGABOOTH space wrangler @secretrobotron with the best outfit ever #mozfest pic.twitter.com/O9dlXVYKdF

— solendid (@supersole) October 26, 2014

And then on to the demos, there was a table full of locks and I didn’t know what it was about:

No idea what this is but retro paper is always a #win #mozfest pic.twitter.com/RJGN458yn0

— solendid (@supersole) October 26, 2014

until someone explained to me that those locks came in various levels of difficulty and were there to learn how to pick locks! Which I started doing:

So I'm learning to pick locks o_O #mozfest pic.twitter.com/wC9CK2zDss

— solendid (@supersole) October 26, 2014

Now I cannot stop thinking about cylinders and feeling the mechanism each time I open doors… the harm has been done!

Chris Lord had told me they would be playing at MozFest but I thought I had missed them the night before. No! they played on Sunday! Everyone, please welcome The Vanguards:

Did I mention the grassroots band? /cc @cwiiis #mozfest pic.twitter.com/Db5lfyb8xM

— solendid (@supersole) October 26, 2014

And it wasn’t too long until the party would be over and was time to go home! Exhausted, but exhilarated!

I said goodbye to the Mozilla Reps gathering in front of the college, and thanks, and wished them a happy safe journey back home. Not sure why, since they were just going to have dinner, but I was loaded with good vibes and that felt like the right thing to do.

And so that was MozFest 2014 for me. A chaos like usual and I hardly had time to see anything outside from the MEGABOOTH. I’m so sorry I missed so many interesting sessions, but I’m glad I helped so many people too, so there’s that!


Categorieën: Mozilla-nl planet

Kim Moir: Mozilla pushes - September 2014

Mozilla planet - ma, 27/10/2014 - 22:11
Here's September 2014's monthly analysis of the pushes to our Mozilla development trees.
You can load the data as an HTML page or as a JSON file.
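For readers who want to poke at the numbers themselves, a small sketch along these lines should work. The exact JSON layout is assumed here (a mapping of dates to per-tree push counts), so adjust the keys to match the real file:

# Sketch: recompute totals from the published JSON push data. The
# layout assumed here (date string -> per-tree push counts) is a
# guess; adjust the parsing to the real structure of the file.
import json

with open('2014-09-pushes.json') as f:   # hypothetical local copy
    pushes = json.load(f)

daily = {day: sum(trees.values()) for day, trees in pushes.items()}
total = sum(daily.values())
print('total pushes:', total)
print('average pushes/day:', round(total / len(daily), 1))
print('busiest day:', max(daily, key=daily.get))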


Trends
Surprise! No records were broken this month.

Highlights
12267 pushes
409 pushes/day (average)
Highest number of pushes/day: 646 pushes on September 10, 2014
22.6 pushes/hour (average)

General Remarks
Try has around 36% of pushes and Gaia-Try comprises about 32%. The three integration repositories (fx-team, mozilla-inbound and b2g-inbound) account for around 22% of all the pushes.

Records
August 2014 was the month with most pushes (13,090  pushes)
August 2014 has the highest pushes/day average with 620 pushes/day
July 2014 has the highest average of "pushes-per-hour" with 23.51 pushes/hour
August 20, 2014 had the highest number of pushes in one day with 690 pushes

Categorieën: Mozilla-nl planet

Yunier José Sosa Vázquez: Firefox and Thunderbird channels updated

Mozilla planet - ma, 27/10/2014 - 21:17

Updates are available for Firefox and Thunderbird. This includes version 15 of the Adobe Flash Player plugin and the Android versions of Firefox.

Release: Firefox 33.0.1, Thunderbird 31.2.0, Firefox Mobile 33.0

Beta: Firefox 34

Aurora: Firefox 35

Nightly: Firefox 36 (with separate processes thanks to Electrolysis) and Thunderbird 36

Go to Downloads

Categorieën: Mozilla-nl planet

Pagina's