
Mozilla Nederland logo
The Dutch Mozilla community

Subscribe to the Mozilla planet feed
Planet Mozilla - http://planet.mozilla.org/
Updated: 14 hours 34 min ago

Frédéric Harper: The state of Open Source in 2014 at the Salon du Logiciel Libre et des technologies ouvertes du Québec

Wed, 01/10/2014 - 20:44

Fred@S2LQ

Two weeks ago, I headed to Quebec City to present at the Salon du Logiciel Libre et des technologies ouvertes du Québec (S2LQ). When I was approached to speak, I wasn't sure what to talk about: the target audience was less technical than the developers I usually present to. I thought about talking about Mozilla, but its structure is so different from that of other companies built on open technologies that I held back. I considered presenting Firefox OS, as I often do, but I would inevitably have drifted too often into a technical talk. So I decided to go back to the basics of why Open Source exists, where it comes from, and where it is heading: a higher-level presentation meant to raise awareness among the attendees, which, I must say, was very well received.

L'état de l'Open Source en 2014 – Salon du Logiciel Libre et des technologies ouvertes du Québec (S2LQ) – 2014-09-17 from Frédéric Harper

Once again, Christian Aubry was there with Savoir Faire Linux, so I got a high-quality recording of my presentation.

In the end, my goal was of course to raise awareness of Open Source, but also to show a less aggressive approach to rallying people to the cause. Despite my short stay in Quebec City, I really enjoyed my day at the event, and I particularly appreciated Colonel Guimard's keynote on the migration to Open Source at the French gendarmerie: a useful, pragmatic model that our government here should envy and copy!


--
L’état de l’Open Source en 2014 au Salon du Logiciel Libre et des technologies ouvertes du Québec is a post on Out of Comfort Zone from Frédéric Harper

Related posts:

  1. Microsoft & l'Open Source, la guerre des étoiles version techno: In a galaxy far away, Microsoft waged war on Open...
  2. Firefox OS à la semaine des technologies EPITA: This morning, I had the pleasure of presenting about...
  3. Entrevue Firefox OS de Savoir Faire Linux à Pycon 2014: I am not a Python developer, but I...
Categories: Mozilla-nl planet

Henrik Skupin: Firefox Automation report – week 31/32 2014

Wed, 01/10/2014 - 18:16

In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 31 and 32. It's a bit less than usual, mainly because many of us were on vacation.

Highlights

The biggest improvement during week 32 was the set of fixes for the TPS tests. Cosmin spent some time investigating the remaining underlying issues and got them fixed. Since then we have had constantly green test runs, which is fantastic.

While development for the new TPS continuous integration system continued, we were blocked for a couple of days by the outage of restmail.net due to a domain move. After the DNS entries got fixed, everything was working fine again for Jenkins and Mozilla Pulse based TPS tests.

For Mozmill CI we agreed that the Endurance tests we run across all branches are not that useful, and only take a lot of time to execute – about 2 hours per test run! New features land first on Nightly, so that is where these tests have the most impact. Henrik therefore came up with a patch to run those tests only for en-US Nightly builds.
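The branch/locale gate that patch introduces can be sketched as a simple predicate. This is a hypothetical illustration, not the actual Mozmill CI code; the function and parameter names are invented here.

```python
# Hypothetical sketch: decide whether the ~2h endurance suite should be
# scheduled for a given build. Nightly builds come from mozilla-central,
# and only the en-US localization is considered.
def should_run_endurance(branch, locale):
    return branch == "mozilla-central" and locale == "en-US"

print(should_run_endurance("mozilla-central", "en-US"))  # → True
print(should_run_endurance("mozilla-aurora", "en-US"))   # → False
```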

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 31 and week 32.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 32. There was no meeting in week 31.


Daniel Stenberg: Good bye Rockbox

Wed, 01/10/2014 - 10:53

I’m officially not taking part in anything related to Rockbox anymore. I’ve unsubscribed and I’m out.

In the fall of 2001, my friend Linus and my brother Björn had both bought the portable Archos Player, a harddrive-based mp3 player, and, slightly underwhelmed by its firmware, they decided they would have a go at trying to improve it. All three of us had been working with embedded systems for many years already, and I was immediately attracted to the idea of reverse engineering this kind of device and trying to improve it. It sounded like a blast to me.

In December 2001 we had the first test program actually running on the device and flashing an LED: the first little step of what would become a rather big effort. We wrote a GPLed mp3 player firmware replacement, entirely from scratch, without re-using any original parts. A full home-grown tiny multitasking operating system with a UI.

Fast-forwarding through history: we managed to get a really good firmware done for the early Archos players, and we moved on to follow-up mp3 players too. After a decade or so, we supported well over 60 different mp3 player models, we played every music format known to man, and we usually had better battery life than the original firmwares. We could run Doom, and we had a video player, a plugin system, and a system full of crazy things.

We gathered large amounts of skilled and intelligent hackers from all over the world who contributed to make this possible. We had yearly meetups, or developer conferences, and we hung out on IRC every day of the week. I still hang out on our off-topic IRC channel!

Over time, smart phones emerged as the preferred devices people would use to play music while on the go. We ported Rockbox over to Android as an app, but our pixel-based UI was never really suitable for the flexible Android world and I also think that most contributors were more interested in hacking devices than writing Android apps. The app never really attracted many users or developers so while functional it never “took off”.

mp3 players are now already a thing of the past and will soon fall into the cave of forgotten old things our children will never even know or care about.

Developers and users of Rockbox have mostly moved on to other ventures. I too stopped actually contributing to the project several years ago but I was running build clients for a long while and I’ve kept being subscribed to the development mailing list. Until now. I’m now finally cutting off the last rope. Good bye Rockbox, it was fun while it lasted. I had a massive amount of great fun and I learned a lot while in the project.

Rockbox


Robert O'Callahan: Upcoming rr Talk

Wed, 01/10/2014 - 03:51

Currently I'm in the middle of a 3-week visit to North America. Last week I was at a Mozilla graphics-team work week in Toronto. This week I'm mostly on vacation, but I'm scheduled to give a talk at MIT this Thursday about rr. This is a talk about the design of rr and how it compares to other approaches. I'll make the content of that talk available on the Web in some form as well. Next week I'm also mostly on vacation but will be in Mountain View for a couple of days for a planning meeting. Fun times!


Soledad Penades: Using a Flame as my main phone, day 1

Tue, 30/09/2014 - 23:32

So here's my last tweet from an Android phone. Moving to using a @FirefoxOSFeed Flame full time! pic.twitter.com/FZLdY1sL1I

— ǝlosɹǝdns (@supersole) September 30, 2014

Today I finally got a Flame to use as my main phone (what they call dogfooding, but that word sounds atrocious to me). I had been using a Flame for testing since June or so, but I kept flashing nightly builds, and let me tell you… that is risky, to say the least.

Sadly I was busy attending to other matters (namely the DevTools meetup happening this week at the London office), so I didn't have much of a chance to experiment with the phone.

My main goal was basically to flash it with an updated version of the operating system, since the Flame comes with 1.3 and I wanted to use 2.x. Then I took my SIM card out of my Android Nexus 5 and put it into the Flame. Bam, it works. Including data! No need to tinker with GPRS and APN settings and whatnot. Sweet! I even already got a spam call advising me on how to claim compensation for that accident I never had. Yay!

I also imported some of my contacts from my Google account. The importer lets you connect to GMail and then loads the contacts, and you can go through the list to choose which ones to import. Good time for some pruning of old contacts I haven’t spoken to in a while :-P
There were some weirdnesses in the rendering, but I haven't filed a bug yet, as I want to compare with the other phone and a freshly flashed version to see whether the weirdnesses have been fixed or not.

I can also confirm that the Twitter “app” (it’s actually more like a glorified bookmark for m.twitter.com) for FxOS is as terrible as usual. I keep internally whispering to myself: OAuth, Oauth, tokens, rate limits each time I try to use the Twitter app and get frustrated by how badly it works on every single mobile browser, so as to scare myself and avoid writing my own client with support for offline and push notifications.

Now I have to find out how to configure the alarm clock. If it doesn’t work I’ll be late to the office tomorrow—it won’t be my fault! :P

Oh, and before you ask: no one at Mozilla is forcing us to use this or that phone. I do this of my own volition, because other platforms keep creeping me out and I'd rather contribute to something I can trust.

PS I don’t actually have any grand plan for writing a long series of posts on my experiences on using the Flame as my main phone so don’t get too excited, teehee!



Mozilla Release Management Team: Firefox 33 beta7 to beta8

Tue, 30/09/2014 - 20:44

  • 46 changesets
  • 110 files changed
  • 1976 insertions
  • 805 deletions

Extension (occurrences): cpp (34), h (14), html (13), js (11), jsm (6), css (4), xul (3), xml (3), ini (3), cc (3), c (2), xhtml (1), webidl (1), svg (1), py (1), properties (1), nsh (1), list (1), in (1), idl (1), dtd (1)

Module (occurrences): dom (18), gfx (16), browser (14), layout (12), toolkit (9), security (6), js (6), content (6), netwerk (5), mobile (3), media (3), widget (2), xpfe (1), xpcom (1), modules (1), ipc (1), embedding (1)

List of changesets:

  • David Keeler: Bug 1057123 - mozilla::pkix: Certificates with Key Usage asserting the keyCertSign bit may act as end-entities. r=briansmith, a=sledru - 599ae9ec1b9c
  • Robert Strong: Bug 1070988 - Windows installer should remove leftover chrome.manifest on pave over install to prevent startup crash with Firefox 32 and above with unpacked omni.ja. r=tabraldes, a=sledru - 9286fb781568
  • Bobby Holley: Bug 1072174 - Handle all the cases XrayWrapper.cpp. r=peterv, a=abillings - bb4423c0da47
  • Brian Nicholson: Bug 1067429 - Alphabetize theme styles. r=lucasr, a=sledru - f29b8812b6d0
  • Brian Nicholson: Bug 1067429 - Create GeckoAppBase as the parent for Gecko.App. r=lucasr, a=sledru - 112a9fe148d2
  • Brian Nicholson: Bug 1067429 - Add values-v14, removing v14-only styles from values-v11. r=lucasr, a=sledru - 89d93cece9fd
  • David Keeler: Bug 1060929 - mozilla::pkix: Allow explicit encodings of default-valued BOOLEANs because lol standards. r=briansmith, a=sledru - 008eb429e655
  • Tim Taubert: Bug 1067173 - Bail out early if _resizeGrid() is called before the page has loaded. f=Mardak, r=adw, a=sledru - c043fec932a6
  • Markus Stange: Bug 1011166 - Improve the workarounds cairo does when rendering large gradients with pixman. r=roc, r=jrmuizel, a=sledru - a703ff0c7861
  • Edwin Flores: Bug 976023 - Fix crash in AppleMP3Reader. r=rillian, a=sledru - f2933e32b654
  • Nicolas Silva: Bug 1066139 - Put stereo video behind a pref (off by default). r=Bas, a=sledru - e60e089a7904
  • Nicholas Nethercote: Bug 1070251 - Anonymize non-chrome inProcessTabChildGlobal URLs in memory reports when necessary. r=khuey, a=sledru - 09dcf9d94d33
  • Andrea Marchesini: Bug 1060621 - WorkerScope should CC mLocation and mNavigator. r=bz, a=sledru - 32d5ee00c3ab
  • Andrea Marchesini: Bug 1062920 - WorkerNavigator strings should honor general.*.override prefs. r=khuey, a=sledru - 6d53cfba12f0
  • Andrea Marchesini: Bug 1069401 - UserAgent cannot be changed for specific websites in workers. r=khuey, r=bz, a=sledru - e178848e43d1
  • Gijs Kruitbosch: Bug 1065998 - Empty-check Windows8WindowFrameColor's customizationColor in case its registry value is gone. r=jaws, a=sledru - 12a5b8d685b2
  • Richard Barnes: Bug 1045973 - sec_error_extension_value_invalid: mozilla::pkix does not accept certificates with x509v3 extensions in x509v1 or x509v2 certificates. r=keeler, a=sledru - a4697303afa6
  • Branislav Rankov: Bug 1058024 - IonMonkey: (ARM) Fix jsapi-tests/testJitMoveEmitterCycles. r=mjrosenb, a=sledru - 371e802df4dc
  • Rik Cabanier: Bug 1072100 - mix-blend-mode doesn't work when set in JS. r=dbaron, a=sledru - badc5be25cc1
  • Jim Chen: Bug 1067018 - Make sure calloc/malloc/free usages match in Tools.h. r=jwatt, a=sledru - cf8866bd741f
  • Bill McCloskey: Bug 1071003 - Fix null crash in XULDocument::ExecuteScript. r=smaug, a=sledru - b57f0af03f78
  • Felipe Gomes: Bug 1063848 - Disable e10s in safe mode. r=bsmedberg, r=ally, a=sledru, ba=jorgev - 2b061899d368
  • Gijs Kruitbosch: Bug 1069300 - strings for panic/privacy/forget-button for beta. r=jaws,shorlander, a=dolske, l10n=pike, DONTBUILD=strings-only - 16e19b9cec72
  • Valentin Gosu: Bug 1011354 - Use a mutex to guard access to nsHttpTransaction::mConnection. r=mcmanus, r=honzab, a=abillings - ac926de428c3
  • Terrence Cole: Bug 1064346 - JSFunction's extended attributes expect POD-style initialization. r=billm, a=abillings - fd4720dd6a46
  • Marty Rosenberg: Bug 1073771 - Add namespaces and whatnot to make JitMoveEmitterCycles compile. r=dougc, a=test-only - 97feda79279e
  • Ed Lee: Bug 1058971 - [Legal]: text for sponsored tiles needs to be localized for Firefox 33 [r=adw a=sylvestre] - deaa75a553ac
  • Ed Lee: Bug 1064515 - update learn more link for sponsored tiles overlay [r=adw a=sylvestre] - b58a231c328c
  • Ed Lee: Bug 1071822 - update the learn more link in the tiles intro popup [r=adw a=sylvestre] - 0217719f20c5
  • Ed Lee: Bug 1059591 - Incorrectly formatted remotely hosted links causes new tab to be empty [r=adw a=sylvestre] - d34488e06177
  • Ed Lee: Bug 1070022 - Improve Contrast of Text on New Tab Page [r=adw a=sylvestre] - 8dd30191477e
  • Ed Lee: Bug 1068181 - NEW Indicator for Pinned Tiles on New Tab Page [r=ttaubert a=sylvestre] - 02da3cf36508
  • Ed Lee: Bug 1062256 - Improve the design of the »What is this« bubble on about:newtab [r=adw a=sylvestre] - 2a8947c986ed
  • Bas Schouten: Bug 1072404 - Firefox may crash when the D3D device is removed while rendering. r=mattwoodrow a=sylvestre - 3d41bbe16481
  • Bas Schouten: Bug 1074045 - Turn OMTC back on on beta. r=nical a=sylvestre - b9e8ce2a141b
  • Jim Mathies: Bug 1068189 - Force disable browser.tabs.remote.autostart in non-nightly builds. r=felipe, a=sledru - d41af0c7fdaf
  • Randell Jesup: Bug 1033066 - Never let AudioSegments underflow mDuration and cause OOM allocation. r=karlt, a=sledru - 82f4086ba2c7
  • Georg Fritzsche: Bug 1070036 - Catch NS_ERROR_NOT_AVAILABLE during OpenH264Provider startup. r=irving, a=sledru - b6985e15046b
  • Nicolas Silva: Bug 1061712 - Don't crash in DrawTargetDual::CreateSimilar if allocation fails. r=Bas, a=sledru - 69047a750833
  • Nicolas Silva: Bug 1061699 - Only crash debug builds if BorrowDrawTarget is called on an unlocked TextureClient. r=Bas, a=sledru - 4020480a6741
  • Aaron Klotz: Bug 1072752 - Make Chromium UI message loops for Windows call into WinUtils::WaitForMessage. r=jimm, a=sledru - 737fbc0e3df4
  • Florian Quèze: Bug 1067367 - Tapping the icon of a second doorhanger reopens the first doorhanger if it was already open. r=Enn, a=sledru - 3ff9831143fd
  • Robert Longson: Bug 1073924 - Hovering over links in SVG does not cause cursor to change. r=jwatt, a=sledru - 19338c25065c
  • Ryan VanderMeulen: Backed out changeset d41af0c7fdaf (Bug 1068189) for reftest-ipc crashes/failures. - dabbfa2c0eac
  • Randell Jesup: Bug 1069646 - Scale frame rate initialization in webrtc media_opimization. r=gcp, a=sledru - bc5451d18901
  • David Keeler: Bug 1053565 - Update minimum system NSS requirement in configure.in (it is now 3.17.1). r=glandium, a=sledru - 0780dce35e25


Mozilla Reps Community: ReMo Camp 2014: Impact through action

Tue, 30/09/2014 - 19:27

For the last 3 years the council, peers and mentors of the Mozilla Reps program have been meeting annually at ReMo Camp, a 3-day meetup to check the temperature of the program and plan for the next 12 months. This year’s Camp was particularly special because for the first time, Mitchell Baker, Mark Surman and Mary Ellen Muckerman participated in it. With such a great mix of leadership both at the program level and at the organization, it was clear this ReMo Camp would be our most interesting and productive one.
The meeting spanned 3 days:
Day 1:
The Council and Peers got together to add the finishing touches and tweaks to the program content and schedule but also to discuss the program’s governance structure. Council and Peers defined the different roles in the program that allow the Reps to keep each leadership body accountable and made sure there was general alignment. We will post a separate blog post on governance explaining the exact functions of the module owner, the peers, the council, mentors and Reps.
Day 2
The second day was very exciting: it was coined the "challenges" day, where Mitchell, Mark and Mary Ellen joined the Reps to work on 6 "contribution challenges". These challenges are designed to be concrete initiatives that aim to have quick and concrete impact on Mozilla's product goals through large-scale volunteer participation. Mozillians around the globe work tirelessly to push the Mozilla mission forward, and one of the most powerful ways of doing so is by improving our products. We worked on 6 specific areas to have an impact and identify the next steps. There's a lot of excitement already, and the Reps program will play a central role as a platform to mobilize and empower local communities participating in these challenges. More on this shortly…
Day 3
The last day of the event was entirely dedicated to the Reps program. We had so many things to talk about and so many ideas, but alas the day only has so many hours, so we focused on three thematic pillars: impact, mentorship training and getting stuff done. The council and peers had spent Friday setting those priorities, the rationale being that the Mozilla Reps leadership is very good at identifying what needs to get done, and not as good with follow-through. The sessions on "impact" were prioritized over others, as we wanted to figure out how to best enable and empower Reps to have an impact and follow up on all the great plans we make. Impact was broken down into three thematic buckets:

Accountability: how do we keep Reps accountable for what they have signed up for?

Impact measurement: how do we measure the impact of all the wonderful things we do?

Recognition: how do we recognize, in a more systematic and fair way, our volunteers who go out of their way?

After the impact discussion, we changed gears and moved to the mentorship training. During the preparations leading up to ReMo Camp, most of the mentors asked for training. Our mentors are really committed to helping Reps on the ground do a great job, so the council and the peers facilitated a mentorship training divided into 5 different stations. We got a lot of great feedback, and we'll be producing videos with the materials from the training so that any mentor (or interested Rep) has access to this content. We will also be rolling out Q&A sessions for each mentorship station. Stay tuned if you want to learn more about mentorship and the Reps program in general.

The third part of Day 3 was "getting stuff done", a session where we identified 10 concrete tasks (most of them pending since the last ReMo Camp) that we could actually get done by the end of the day.

The overall take-away from this Camp was that instead of designing grand, ambitious plans, we need to be more agile and sometimes more realistic about what work we can get accomplished. Ultimately, this will help us get more stuff done more quickly. That spirit of urgency and agility permeated the entire weekend, and we hope to be able to transmit this feeling to each and every Rep.

There wasn't enough time, but we spent it in the best possible way. Having the Mozilla leadership with us was incredibly empowering and inspiring. The Reps have organized themselves and created this powerful platform; now it's time to focus our efforts. The weekend in Berlin proved that the Reps are a cohesive group of volunteer leaders with a lot of experience: the eyes and ears of Mozilla in every corner of the world. Now let's get together and commit to doing everything we set for ourselves before ReMo Camp 2015.


Roberto A. Vitillo: Telemetry meets Clojure.

Tue, 30/09/2014 - 17:48

tl;dr: Data-related telemetry alerts (e.g. histograms or main-thread I/O) are now aggregated by medusa, which allows devs to post, view and filter alerts. The dashboard lets users subscribe to search criteria or individual metrics.

As mentioned in my previous post, we recently switched to a dashboard generator “iacomus” to visualize the data produced by some of our periodic map-reduce jobs. Given that each dashboard has some metadata that describes the datasets it handles, it became possible to write a regression detection algorithm for all our dashboards that use the iacomus data-format.

The algorithm generates a time-series for each possible combination of a dashboard's filtering and sorting criteria, compares the latest data point to the distribution of the previous N, and generates an alert if it detects an outlier. Stats 101.
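That outlier check can be sketched in a few lines (a simplified illustration, not medusa's actual code; the 2.5 sigma threshold here is an arbitrary choice):

```python
import statistics

def is_outlier(history, latest, threshold=2.5):
    """Flag `latest` if it falls more than `threshold` standard
    deviations away from the mean of the previous data points."""
    if len(history) < 2:
        return False  # not enough data to estimate a distribution
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# A stable series followed by a sudden jump triggers an alert.
series = [100, 102, 98, 101, 99, 100, 97, 103]
print(is_outlier(series, 101))  # → False (in line with history)
print(is_outlier(series, 250))  # → True (clear regression)
```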

Alerts are collected by medusa, which provides a RESTful API to submit them and exposes a dashboard that allows users to view and filter alerts using regular expressions, and to subscribe to alerts.

Coding the aggregator and regression detector in Clojure[script] has been a lot of fun. I found it particularly attractive that Clojure doesn't have a big web framework à la Ruby or Python that forces you into one specific mindset. Instead you can roll your own from a wide set of libraries, like:

  • HTTP-Kit, an event-driven HTTP client/server
  • Compojure, a routing library
  • Korma, a SQL DSL
  • Liberator, RESTful resource handlers
  • om, React.js interface for Clojurescript
  • secretary, a client-side routing library

The ability to easily compose functionality from different libraries is exceptionally well expressed by Alan Perlis: "It is better to have 100 functions operate on one data structure than 10 functions on 10 data structures". And so, as it happens, instead of each library having its own set of independent abstractions and data structures, Clojure libraries tend to use mostly just lists, vectors, sets and maps, which greatly simplifies interoperability.

Lisp gets criticized for its syntax, or lack thereof, but I don't feel that's fair. Using any editor that inserts and balances parentheses for you does the trick. I also feel like I didn't have to run a background thread in my mind to check whether what I was writing would please the compiler, unlike in Scala for instance. Not to mention the ability to use macros, which allows one to easily extend the compiler with user-defined code. The expressiveness of Clojure also means that more thought is required per LOC, but that might just be a side effect of not being a full-time functional programmer.

What I do miss in the Clojure ecosystem is a strong set of tools for statistics and machine learning. Incanter is a wonderful library, but coming from an R and Python/SciPy background there is still a lot of catching up to do.



David Rajchenbach Teller: What David Did During Q3

Tue, 30/09/2014 - 17:14

September is ending, and with it Q3 of 2014. It’s time for a brief report, so here is what happened during the summer.

Session Restore

After ~18 months working on Session Restore, I am progressively switching away from that topic. Most of the main performance issues that we set out to solve have been solved already, we have considerably improved safety, cleaned up lots of the code, and added plenty of measurements.

During this quarter, I have been working on various attempts to optimize both loading speed and saving speed. Unfortunately, both ongoing works were delayed by external factors and postponed to a yet undetermined date. I have also been hard at work on trying to pin down performance regressions (which turned out to be external to Session Restore) and safety bugs (which were eventually found and fixed by Tim Taubert).

In the next quarter, I plan to work on Session Restore only in a support role, for the purpose of reviewing and mentoring.

Also, a rant. The work on Session Restore has relied heavily on collaboration between the Perf team and the FxTeam. Unfortunately, the resources were not always available to make this collaboration work. I imagine that the FxTeam is spread too thin across too many tasks, with too many fires to fight. Regardless, the symptom I experienced is that during the course of this work, low-priority, high-priority and even safety-critical patches alike were left to rot without reviews, despite my repeated requests, for 6, 8 or 10 weeks, much to the dismay of everyone involved. This means man·months of work thrown to /dev/null, along with quarterly objectives, morale, opportunities, contributors and good ideas.

I will try and blog about this eventually. But please, in the future, everyone: remember that in the long run, the priority of getting reviews done (or explaining that you're not going to) is quite a bit higher than the priority of writing code.

Async Tooling

Many improvements to Async Tooling landed during Q3. We now have PromiseWorker, which considerably simplifies interaction between the main thread and workers, for both Firefox and add-on developers. I hear that the first add-on to make use of this new feature is currently being developed. New features, bug fixes and optimizations landed for OS.File. We have also landed the ability to watch for changes in a directory (under Windows only, for the time being).

Sadly, my work on interactions between Promise and the Test Suite is currently blocked until the DevTools team manages to get all the uncaught asynchronous errors under control. It’s hard work, and I can understand that it is not a high priority for them, so in Q4, I will try to find a way to land my work and activate it only for a subset of the mochitest suites.

Places

I have recently joined the newly restarted effort to improve the performance of Places, the subsystem that handles our bookmarks, history, etc. For the moment, I am still getting warmed up, but I expect that most of my work during Q4 will be related to Places.

Shutdown

Most of my effort during Q3 was spent improving the shutdown of Firefox. Where we already had support for asynchronously shutting down JavaScript services/consumers, we now also have support for native services and consumers. Also, I am in the process of landing Telemetry that will let us find out the duration of the various stages of shutdown, information that we could not access until now.

As it turns out, we had many crashes during asynchronous shutdown, a few of them safety-critical. At the time, we did not have the tools needed to prioritize our efforts or to find out whether our patches had effectively fixed bugs, so I built a dashboard to extract and display the relevant information on such crashes. This proved a wise investment, as we spent plenty of time fighting AsyncShutdown-related fires using this dashboard.

In addition to the “clean shutdown” mechanism provided by AsyncShutdown, we also now have the Shutdown Terminator. This is a watchdog subsystem, launched during shutdown, and it ensures that, no matter what, Firefox always eventually shuts down. I am waiting for data from our Crash Scene Investigators to tell us how often we need this watchdog in practice.
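As a rough illustration of the watchdog idea: arm a timer when shutdown begins, and force-exit the process if clean shutdown does not finish in time. This is a hypothetical Python sketch, not the actual Shutdown Terminator, which is native Firefox code; all names here are invented.

```python
import os
import threading
import time

# Hypothetical sketch of a shutdown watchdog: once shutdown starts, a
# timer is armed that force-kills the process if clean shutdown has not
# completed within the deadline. By default the kill action is os._exit,
# which terminates immediately without running cleanup.
class ShutdownWatchdog:
    def __init__(self, deadline_seconds, kill=lambda: os._exit(1)):
        self._timer = threading.Timer(deadline_seconds, kill)
        self._timer.daemon = True  # never keep the process alive itself

    def arm(self):     # call when shutdown begins
        self._timer.start()

    def disarm(self):  # call when clean shutdown completes in time
        self._timer.cancel()

# Demo with a harmless kill action instead of os._exit:
fired = []
dog = ShutdownWatchdog(0.05, kill=lambda: fired.append(True))
dog.arm()
time.sleep(0.2)  # simulate a hang longer than the deadline
print(fired)  # → [True]
```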

Community

I lost track of how many code contributors I interacted with during the quarter, but that represents hundreds of e-mails, as well as countless hours on IRC and Bugzilla, and a few hours on ask.mozilla.org. This year’s mozEdu teaching is also looking good.

We also launched FirefoxOS in France, with big success. I found myself in a supermarket, presenting the ZTE Open C and the activities of Mozilla to the crowds, and this was a pleasing experience.

For Q4, expect more mozEdu, more mentoring, and more sleepless hours helping contributors debug their patches :)



Andrew Halberstadt: How many tests are disabled?

Tue, 30/09/2014 - 16:16
tl;dr Look for [reports like this][0] in the near future!

At Mozilla, platform developers are culturally bound to [tbpl][1]. We spend a lot of time staring at those bright little letters, and their colour can mean the difference between hours, days or even weeks of work. With so many people performing over [420 pushes per day][2], all watching, praying, rejoicing and cursing, it's paramount that the whole process operates like a well oiled machine.

So when a test starts intermittently failing, and there aren't any obvious changesets to blame, it'll often get disabled in the name of keeping things running. A bug will be filed, some people will be cc'ed, and more often than not, it will languish. People who really care about tests know this. They have an innate and deep fear that there are tests out there that would catch major and breaking regressions, but for the fact that they are disabled. Unfortunately, there was never a good way to see, at a high level, which tests were disabled for a given platform. So these people who care so much have to go about their jobs with a vague sense of impending doom. Until now.

A Concrete Sense of Impending Doom
----------------------------------

[Test Informant][3] is a new service which aims to bring some visibility into the state of tests for a variety of suites and platforms. It listens to [pulse messages][4] from mozilla-central for a variety of build configurations, downloads the associated tests bundle, parses as many manifests as it can and saves the results to a mongo database. There is a script that queries the database and can generate reports (e.g. [like this][0]), including how many tests have been enabled or disabled over a given period of time. This means instead of a vague sense of impending doom, you can tell at a glance exactly how doomed we are.

There are still a few manual steps required to generate and post the reports, but I intend to fully automate the process (including a weekly digest link posted to dev.platform).

Over the Hill and Far Away
--------------------------

There are a number of improvements that can be made to this system. We may or may not implement them based on the initial feedback we get from these reports. Possible improvements include:

* Support for additional suites and platforms.
* A web dashboard with graphs and other visualizations.
* Email notifications when tests are enabled/disabled on a per-module basis.
* Exposing the database publicly so other tools can use it (e.g. a mach command).

There are also some known limitations:

* No data for b2g or android platforms (blocked by bugs [1071642][5] and [1066735][6] respectively).
* No data for suite \*. At the moment, only suites that live in the tests bundle and that have manifestparser based manifests (the .ini format) are supported. We may extend the tool to other formats at a later date.
* Run-time filters not taken into account. Because the tool doesn't actually run any tests, it doesn't know about any filters added by the test harness at run-time. Because all of reftest's filtering happens at runtime, it's unlikely reftest will be supported anytime soon.

If you would like to contribute, or just take a look at the source, it's [all on github][7]. As always, let me know if you have any questions!

[0]: http://people.mozilla.org/~ahalberstadt/informant-reports/daily/2014-09-29.informant-report.html
[1]: http://tbpl.mozilla.org
[2]: http://relengofthenerds.blogspot.ca/2014/09/mozilla-pushes-august-2014.html
[3]: https://wiki.mozilla.org/Auto-tools/Projects/Test-Informant
[4]: https://wiki.mozilla.org/Auto-tools/Projects/Pulse
[5]: https://bugzilla.mozilla.org/show_bug.cgi?id=1071642
[6]: https://bugzilla.mozilla.org/show_bug.cgi?id=1066735
[7]: https://github.com/ahal/test-informant
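To illustrate the kind of manifest counting involved, here is a toy sketch (not Test Informant's actual code): in manifestparser-style .ini manifests, each section names a test, and a `disabled` key records why it was turned off. The manifest contents below are invented for the example.

```python
import configparser

# A made-up manifestparser-style .ini manifest: each [section] is a
# test, and a `disabled` key marks it skipped with a reason.
MANIFEST = """
[test_session_restore.js]
[test_tabs.js]
disabled = Bug 1234567, intermittent timeouts
[test_history.js]
"""

def count_tests(manifest_text):
    parser = configparser.ConfigParser()
    parser.read_string(manifest_text)
    enabled = [s for s in parser.sections() if "disabled" not in parser[s]]
    disabled = [s for s in parser.sections() if "disabled" in parser[s]]
    return len(enabled), len(disabled)

print(count_tests(MANIFEST))  # → (2, 1)
```

Real manifestparser manifests support includes, defaults and run-time conditions, which is exactly why the tool parses them with the real library rather than a plain .ini reader.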
Categorieën: Mozilla-nl planet

Niko Matsakis: Multi- and conditional dispatch in traits

di, 30/09/2014 - 15:45

I’ve been working on a branch that implements both multidispatch (selecting the impl for a trait based on more than one input type) and conditional dispatch (selecting the impl for a trait based on where clauses). I wound up taking a direction that is slightly different from what is described in the trait reform RFC, and I wanted to take a chance to explain what I did and why. The main difference is that in the branch we move away from the crate concatenability property in exchange for better inference and less complexity.

The various kinds of dispatch

The first thing to explain is what the difference is between these various kinds of dispatch.

Single dispatch. Let’s imagine that we have a conversion trait:

trait Convert<Target> {
    fn convert(&self) -> Target;
}

This trait just has one method. It’s about as simple as it gets. It converts from the (implicit) Self type to the Target type. If we wanted to permit conversion between int and uint, we might implement Convert like so:

impl Convert<uint> for int { ... }  // int -> uint
impl Convert<int> for uint { ... }  // uint -> int

Now, in the background here, Rust has this check we call coherence. The idea is (at least as implemented in the master branch at the moment) to guarantee that, for any given Self type, there is at most one impl that applies. In the case of these two impls, that’s satisfied. The first impl has a Self of int, and the second has a Self of uint. So whether we have a Self of int or uint, there is at most one impl we can use (and if we don’t have a Self of int or uint, there are zero impls, that’s fine too).

Multidispatch. Now imagine we wanted to go further and allow int to be converted to some other type MyInt. We might try writing an impl like this:

struct MyInt { i: int }
impl Convert<MyInt> for int { ... } // int -> MyInt

Unfortunately, now we have a problem. If Self is int, we now have two applicable conversions: one to uint and one to MyInt. In a purely single dispatch world, this is a coherence violation.

The idea of multidispatch is to say that it’s ok to have multiple impls with the same Self type as long as at least one of their other type parameters are different. So this second impl is ok, because the Target type parameter is MyInt and not uint.
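As a concrete sketch of that rule, here is a runnable example in modern Rust (which ultimately adopted multidispatch; i32/u32 stand in for the post's pre-1.0 int/uint). Two impls share the same Self type, and the Target type parameter selects between them:

```rust
// Assumption: modern Rust, with i32/u32 replacing the post's int/uint.
// Both impls have Self = i32; the Target parameter picks which applies.
trait Convert<Target> {
    fn convert(&self) -> Target;
}

struct MyInt { i: i32 }

impl Convert<u32> for i32 {
    fn convert(&self) -> u32 { *self as u32 }
}

impl Convert<MyInt> for i32 {
    fn convert(&self) -> MyInt { MyInt { i: *self } }
}

fn main() {
    let x: i32 = 5;
    let a: u32 = x.convert();   // selects impl Convert<u32> for i32
    let b: MyInt = x.convert(); // selects impl Convert<MyInt> for i32
    assert_eq!(a, 5);
    assert_eq!(b.i, 5);
}
```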

Conditional dispatch. So far we have dealt only in concrete types like int and MyInt. But sometimes we want to have impls that apply to a category of types. For example, we might want to have a conversion from any type T into a uint, as long as that type supports a MyGet trait:

trait MyGet {
    fn get(&self) -> MyInt;
}

impl<T> Convert<MyInt> for T
    where T:MyGet
{
    fn convert(&self) -> MyInt {
        self.get()
    }
}

We call impls like this, which apply to a broad group of types, blanket impls. So how do blanket impls interact with the coherence rules? In particular, does the conversion from T to MyInt conflict with the impl we saw before that converted from int to MyInt? In my branch, the answer is “only if int implements the MyGet trait”. This seems obvious but turns out to have a surprising amount of subtlety to it.
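For illustration, here is a hedged sketch of a blanket impl in modern Rust (Celsius is an invented example type, not from the post): any type implementing MyGet picks up Convert<MyInt> for free.

```rust
// Sketch of a blanket impl in modern Rust syntax; Celsius is a
// hypothetical type added purely for illustration.
trait Convert<Target> {
    fn convert(&self) -> Target;
}

struct MyInt { i: i32 }

trait MyGet {
    fn get(&self) -> MyInt;
}

// The blanket impl: a single impl covering every T that implements MyGet.
impl<T: MyGet> Convert<MyInt> for T {
    fn convert(&self) -> MyInt { self.get() }
}

struct Celsius(i32);

impl MyGet for Celsius {
    fn get(&self) -> MyInt { MyInt { i: self.0 } }
}

fn main() {
    let c = Celsius(21);
    assert_eq!(c.convert().i, 21); // resolved through the blanket impl
}
```

Note that today's rustc is more conservative than the branch described here: writing impl Convert<MyInt> for i32 alongside this blanket impl is rejected by coherence even though i32 does not implement MyGet, because the compiler avoids that kind of negative reasoning.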

Crate concatenability and inference

In the trait reform RFC, I mentioned a desire to support crate concatenability, which basically means that you could take two crates (Rust compilation units), concatenate them into one crate, and everything would keep building. It turns out that the coherence rules already basically guarantee this without any further thought – except when it comes to inference. That’s where things get interesting.

To see what I mean, let’s look at a small example. Here we’ll use the same Convert trait as we saw before, but with just the original set of impls that convert between int and uint. Now imagine that I have some code which starts with an int and tries to call convert() on it:

trait Convert<T> { fn convert(&self) -> T; }
impl Convert<uint> for int { ... }
impl Convert<int> for uint { ... }

...

let x: int = ...;
let y = x.convert();

What can we say about the type of y here? Clearly the user did not specify it and hence the compiler must infer it. If we look at the set of impls, you might think that we can infer that y is of type uint, since the only thing you can convert an int into is a uint. And that is true – at least as far as this particular crate goes.

However, if we consider beyond a single crate, then it is possible that some other crate comes along and adds more impls. For example, perhaps another crate adds the conversion to the MyInt type that we saw before:

struct MyInt { i: int }
impl Convert<MyInt> for int { ... } // int -> MyInt

Now, if we were to concatenate those two crates together, then this type inference step wouldn’t work anymore, because int can now be converted to either uint or MyInt. This means that the snippet of code we saw before would probably require a type annotation to clarify what the user wanted:

let x: int = ...;
let y: uint = x.convert();

Crate concatenation and conditional impls

I just showed that the crate concatenability principle interferes with inference in the case of multidispatch, but that is not necessarily bad. It may not seem so harmful to clarify both the type you are converting from and the type you are converting to, even if there is only one type you could legally choose. Also, multidispatch is fairly rare; most traits have a single type that decides on the impl, and then all other types are uniquely determined. Moreover, with the associated types RFC, there is even a syntactic way to express this.

However, when you start trying to implement conditional dispatch, that is, dispatch predicated on where clauses, crate concatenability becomes a real problem. To see why, let’s look at a different trait called Push. The purpose of the Push trait is to describe collection types that can be appended to. It has one associated type Elem that describes the element type of the collection:

trait Push {
    type Elem;
    fn push(&mut self, elem: Elem);
}

We might implement Push for a vector like so:

impl<T> Push for Vec<T> {
    type Elem = T;
    fn push(&mut self, elem: T) { ... }
}

(This is not how the actual standard library works, since push is an inherent method, but the principles are all the same and I didn’t want to go into inherent methods at the moment.) OK, now imagine I have some code that is trying to construct a vector of char:

let mut v = Vec::new();
v.push('a');
v.push('b');
v.push('c');

The question is, can the compiler resolve the calls to push() here? That is, can it figure out which impl is being invoked? (At least in the current system, we must be able to resolve a method call to a specific impl or type bound at the point of the call – this is a consequence of having type-based dispatch.) Somewhat surprisingly, if we’re strict about crate concatenability, the answer is no.

The reason has to do with DST. The impl for Push that we saw before in fact has an implicit where clause:

impl<T> Push for Vec<T>
    where T : Sized
{ ... }

This implies that some other crate could come along and implement Push for an unsized type:

impl<T> Push for Vec<[T]> { ... }

Now, when we consider a call like v.push('a'), the compiler must pick the impl based solely on the type of the receiver v. At the point of calling push, all we know is that the type of v is a vector, but we don’t know what it’s a vector of – to infer the element type, we must first resolve the very call to push that we are looking at right now.

Clearly, not being able to call push without specifying the type of elements in the vector is very limiting. There are a couple of ways to resolve this problem. I’m not going to go into detail on these solutions, because they are not what I ultimately opted to do. But briefly:

  • We could introduce some new syntax for distinguishing conditional dispatch vs other where clauses (basically the input/output distinction that we use for type parameters vs associated types). Perhaps a when clause, used to select the impl, versus a where clause, used to indicate conditions that must hold once the impl is selected, but which are not checked beforehand. Hard to understand the difference? Yeah, I know, I know.
  • We could use an ad-hoc rule to distinguish the input/output clauses. For example, all predicates applied to type parameters that are directly used as an input type. Limiting, though, and non-obvious.
  • We could create a much more involved reasoning system (e.g., in this case, Vec::new() in fact yields a vector whose types are known to be sized, but we don’t take this into account when resolving the call to push()). Very complicated, unclear how well it will work and what the surprising edge cases will be.

Or… we could just abandon crate concatenability. But wait, you ask, isn’t it important?

Limits of crate concatenability

So we’ve seen that crate concatenability conflicts with inference and it also interacts negatively with conditional dispatch. I now want to call into question just how valuable it is in the first place. Another way to phrase crate concatenability is to say that it allows you to always add new impls without disturbing existing code using that trait. This is actually a fairly limited guarantee. It is still possible for adding impls to break downstream code across two different traits, for example. Consider the following example:

struct Player { ... }

trait Cowboy {
    // draw your gun!
    fn draw(&self);
}

impl Cowboy for Player { ... }

struct Polygon { ... }

trait Image {
    // draw yourself (onto a canvas...?)
    fn draw(&self);
}

impl Image for Polygon { ... }

Here you have two traits with the same method name (draw). However, the first trait is implemented only on Player and the other on Polygon. So the two never actually come into conflict. In particular, if I have a Player value player and I write player.draw(), it could only be referring to the draw method of the Cowboy trait.

But what happens if I add another impl for Image?

impl Image for Player { ... }

Now suddenly a call to player.draw() is ambiguous, and we need to use so-called “UFCS” notation to disambiguate (e.g., Cowboy::draw(&player)).
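A runnable sketch of that ambiguity and the UFCS fix (the draw methods return strings here instead of the post's unit type, purely so the results are observable):

```rust
// Two traits with a colliding method name, both implemented for Player.
struct Player;

trait Cowboy {
    fn draw(&self) -> &'static str; // draw your gun!
}

trait Image {
    fn draw(&self) -> &'static str; // draw yourself
}

impl Cowboy for Player {
    fn draw(&self) -> &'static str { "bang!" }
}

impl Image for Player {
    fn draw(&self) -> &'static str { "a sketch of a player" }
}

fn main() {
    let player = Player;
    // player.draw() would be rejected as ambiguous here; UFCS names the
    // trait explicitly to pick the method.
    assert_eq!(Cowboy::draw(&player), "bang!");
    assert_eq!(<Player as Image>::draw(&player), "a sketch of a player");
}
```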

(Incidentally, this ability to have type-based dispatch is a great strength of the Rust design, in my opinion. It’s useful to be able to define method names that overlap and where the meaning is determined by the type of the receiver.)

Conclusion: drop crate concatenability

So I’ve been turning these problems over for a while. After some discussions with others, aturon in particular, I feel the best fix is to abandon crate concatenability. This means that the algorithm for picking an impl can be summarized as:

  1. Search the impls in scope and determine those whose types can be unified with the current types in question and hence could possibly apply.
  2. If there is more than one impl in that set, start evaluating where clauses to narrow it down.

This is different from the current master in two ways. First of all, to decide whether an impl is applicable, we use simple unification rather than a one-way match. Basically this means that we allow impl matching to affect inference, so if there is at most one impl that can match the types, it’s ok for the compiler to take that into account. This covers the let y = x.convert() case. Second, we don’t consider the where clauses unless they are needed to remove ambiguity.
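As a rough illustration of the two-step algorithm, here is a toy model only, with types as strings and where clauses reduced to booleans; none of this mirrors how rustc's trait selection is actually implemented:

```rust
// Toy model: each candidate impl records its input types plus whether its
// where clauses hold for the query. Nothing here mirrors rustc internals.
struct Candidate {
    name: &'static str,
    self_ty: &'static str,   // "_" stands for a blanket impl's type variable
    target_ty: &'static str,
    where_holds: bool,
}

fn select(impls: &[Candidate], self_ty: &str, target_ty: &str) -> Vec<&'static str> {
    // Step 1: keep impls whose types can unify with the types in question.
    let mut found: Vec<&Candidate> = impls
        .iter()
        .filter(|c| (c.self_ty == self_ty || c.self_ty == "_") && c.target_ty == target_ty)
        .collect();
    // Step 2: only if more than one impl remains, evaluate where clauses
    // to narrow the set down.
    if found.len() > 1 {
        found.retain(|c| c.where_holds);
    }
    found.iter().map(|c| c.name).collect()
}

fn main() {
    let impls = [
        Candidate { name: "impl Convert<MyInt> for int",
                    self_ty: "int", target_ty: "MyInt", where_holds: true },
        // blanket impl<T: MyGet> Convert<MyInt> for T; int does not
        // implement MyGet, so its where clause fails for this query
        Candidate { name: "impl<T: MyGet> Convert<MyInt> for T",
                    self_ty: "_", target_ty: "MyInt", where_holds: false },
    ];
    assert_eq!(select(&impls, "int", "MyInt"),
               vec!["impl Convert<MyInt> for int"]);
}
```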

I feel pretty good about this design. It is somewhat less pure, in that it blends the role of inputs and outputs in the impl selection process, but it seems very usable. Basically it is guided only by the ambiguities that really exist, not those that could theoretically exist in the future, when selecting types. This avoids forcing the user to classify everything, and in particular avoids the classification of where clauses according to when they are evaluated in the impl selection process. Moreover I don’t believe it introduces any significant compatibility hazards that were not already present in some form or another.

Categorieën: Mozilla-nl planet

Gregory Szorc: Mozilla Mercurial Statistics

di, 30/09/2014 - 15:17

I recently gained SSH access to Mozilla's Mercurial servers. This allows me to run some custom queries directly against the data. I was interested in some high-level numbers and thought I'd share the results.

hg.mozilla.org hosts a total of 3,445 repositories. Of these, there are 1,223 distinct root commits (i.e. distinct graphs). Altogether, there are 32,123,211 commits. Of those, there are 865,594 distinct commits (not double counting commits that appear in multiple repositories).

We have a high ratio of total commits to distinct commits (about 37:1). This means we have high duplication of data on disk. This basically means a lot of repos are clones/forks of existing ones. No big surprise there.

What is surprising to me is the low number of total distinct commits. I was expecting the number to run into the millions. (Firefox itself accounts for ~240,000 commits.) Perhaps a lot of the data is sitting in Git, Bitbucket, and GitHub. Sounds like a good data mining expedition...

Categorieën: Mozilla-nl planet

Pascal Chevrel: My Q2-2014 report

di, 30/09/2014 - 12:19
Summary of what I did last quarter (regular l10n-drivers work such as patch reviews, pushes to production, meetings and past projects maintenance excluded).
Australis release

At the end of April, we shipped Firefox 29, which was our first major redesign of the Firefox user interface since Firefox 4 (released in 2011). The code name for that was Australis, and it meant replacing a lot of content on mozilla.org to introduce this new UI and the new features that go with it. That also meant that we were able to delete a lot of old content that had become really obsolete or that was now duplicated on our support site.

Since this was a major UI change, we decided to show an interactive tour of the new UI to both new users and existing users upgrading to the new version. That tour was fully localized in a few weeks time in close to 70 languages, which represents 97.5% of our user base. For the last locales not ready on time, we either decided to show them a partially translated site (some locales had translated almost everything or some of the non-translated strings were not very visible to most users, such as alternative content to images for screen readers) or to let the page fall back to the best language available (like Occitan falling back to French for example).

Mozilla.org was also updated with 6 new product pages replacing a lot of old content as well as updates to several existing pages. The whole site was fully ready for the launch with 60 languages 100% ready and 20 partially ready, all that done in a bit less than 4 weeks, parallel to the webdev integration work.

I am happy to say that thanks to our webdev team, our amazing l10n community, and the help of my colleagues Francesco Lodolo (also an Italian localizer) and my intern Théo Chevalier (also a French localizer), we were able not only to offer a great upgrade experience to the quasi-totality of our user base, but also to clean up a lot of old content, fix many bugs and prepare the site from an l10n perspective for the upcoming releases of our products.

Today, for a big locale spanning all of our products and activities, mozilla.org is about 2,000 strings to translate and maintain (+500 since Q1), for a smaller locale, this is about 800 strings (+200 since Q1). This quarter was a significant bump in terms of strings added across all locales but this was closely related to the Australis launch, we shouldn't have such a rise in strings impacting all locales in the next quarters.
Transvision releases

Last quarter we did 2 releases of Transvision with several features targeting our 3 audiences: localizers, localization tool authors, and current and potential Transvision developers.

For our localizers, I worked on a couple of features, one is quick filtering of search results per component for Desktop repositories (you search for 'home' and with one click, you can filter the results for the browser, for mail or for calendar for example). The other one is providing search suggestions when your search yields no results with the best similar matches ("your search for 'lookmark' yielded no result, maybe you were searching for 'Bookmark'?").

For the localization tools community (software or web apps like Pontoon, Mozilla translator, Babelzilla, OmegaT plugins...), I rewrote our old JSON API entirely and extended it to provide more services. Our old API was initially created for our own purposes and basically just offered our search results as a JSON feed on our most popular views. Tools started using it a couple of years ago and we also got requests for API changes from those tool makers, so it was time to rewrite it entirely to make it scalable. Since we don't want to break anybody's workflow, we now redirect all the old API calls to the new ones. One of the significant new services in the API is a translation memory query that gives you results and a quality index based on the Levenshtein distance to the searched terms. You can get more information on the new API in our documentation.
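The distance-based quality index can be sketched like this (a hedged illustration in Rust rather than Transvision's PHP; the exact scoring formula Transvision uses is not specified here, so the quality function below is an assumption):

```rust
// Classic two-row Levenshtein distance over characters.
fn levenshtein(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    let mut prev: Vec<usize> = (0..=b.len()).collect();
    for (i, ca) in a.iter().enumerate() {
        let mut cur = vec![i + 1];
        for (j, cb) in b.iter().enumerate() {
            let cost = if ca == cb { 0 } else { 1 };
            let best = (prev[j] + cost)   // substitution
                .min(prev[j + 1] + 1)     // deletion
                .min(cur[j] + 1);         // insertion
            cur.push(best);
        }
        prev = cur;
    }
    prev[b.len()]
}

// Hypothetical quality index: 1.0 for an exact match, falling toward 0.0
// as the edit distance approaches the longer string's length.
fn quality(a: &str, b: &str) -> f64 {
    let max = a.chars().count().max(b.chars().count());
    if max == 0 {
        return 1.0;
    }
    1.0 - levenshtein(a, b) as f64 / max as f64
}

fn main() {
    assert_eq!(levenshtein("lookmark", "Bookmark"), 1); // one substitution
    assert_eq!(levenshtein("kitten", "sitting"), 3);
    assert!(quality("lookmark", "Bookmark") > 0.8);     // 1 - 1/8 = 0.875
}
```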

I also worked on improving our internal workflow and making it easier for potential developers wanting to hack on Transvision to install and run it locally. That means that we now do continuous integration with Travis CI (all of our unit tests are run on each commit and pull request, in PHP 5.4 and 5.5 environments), we have made a lot of improvements to our unit test suite and coverage, we expose peak memory usage and time per request on all views to developers so as to catch performance problems early, and we now have a "dev" mode that gets Transvision installed and running on the PHP development server in a matter of minutes instead of hours for a real production mode. One of the blockers for new developers was the time required to install Transvision locally: since it is a spidering tool looking for localized strings in Mozilla source repositories, it first needed to clone all the repositories it indexes (mercurial/git/svn), which is about 20GB of data and takes hours even with a fast connection. We now provide a snapshot of the final extracted data (still 400MB ;)) every 6 hours that is used by the dev install mode.

Check the release notes for 3.3 and 3.4 to see what other features were added by the team (e.g. on-demand TMX generation or the dynamic Gaia comparison view added by Théo, my intern).
Web dashboard / Langchecker

The main improvement I brought to the web dashboard this quarter is probably the deadline field added to all of our .lang files, which allows us to better communicate the urgency of projects and gives localizers an extra parameter with which to prioritize their work.

Théo's first project for his internship was to build a 'project' view on the web dashboard that we can use to get an overview of the translation of a set of pages/files. This was used for the Australis release (ex: http://l10n.mozilla-community.org/webdashboard/?project=australis_all) but can be used for any other project we want to define. Here is an example for the localization of two Android add-ons for the World Cup that we did and tracked with .lang files.

We brought other improvements to our maintenance scripts, for example the ability to "bulk activate" a page for all the locales that are ready; we improved our locamotion import scripts, started adding unit tests, etc. Generally speaking, the web dashboard keeps improving regularly since I rewrote it last quarter, and we regularly experiment with using it for more projects, especially projects that don't fit the usual web/product categories but also need tracking. I am also pretty happy that I now co-own the dashboard with Francesco, who brings his own ideas and code to streamline our processes.
Théo's internship

I mentioned it before: our main French localizer, Théo Chevalier, is doing an internship with me and Delphine Lebédel as mentors. This is the internship that ends his 3rd year of engineering (in a 5-year curriculum). He is based in Mountain View, started in early April and will be with us until late July.

He is basically working on almost all of the projects I, Delphine and Flod work on.

So far, apart from regular work as an l10n-driver, he has worked for me on 3 projects: the Web Dashboard projects view, building TMX files on demand in Transvision, and the Firefox Nightly localized page on mozilla.org. This last project I hadn't talked about yet, and he blogged about it recently. In short, the first page shown to users of localized builds of Firefox Nightly can now be localized, and by localized we don't just mean translated: we have a community block managed by the local community inviting Nightly users to join their local team "on the ground". So far, we have this page in French, Italian, German and Czech. If your locale's workflow is to translate mozilla-central first, this is a good tool for you to reach a potential technical audience and grow your community.
Community

This quarter, I found 7 new localizers (2 French, 1 Marathi, 2 Portuguese (Portugal), 1 Greek, 1 Albanian) to work with me, essentially on mozilla.org content. One of them, Nicolas Delebeque, took the lead on the Australis launch and coordinated the French l10n team, since Théo, our locale leader for French, was starting his internship at Mozilla.

For Transvision, 4 people in the French community (after all, Transvision was created initially by them ;)) expressed interest or sent small patches to the project. Maybe all the efforts we put into making the application easy to install and hack are starting to pay off; we'll probably see in Q3/Q4 :)

I spent some time trying to help rebuild the Portugal community, which is now 5 people (instead of 2 before). We recently resurrected the mozilla.pt domain name to actually point to a server, the MozFR one already hosting the French community and WoMoz (having the French community help the Portuguese one is cool BTW). A mailing list for Portugal was created (accessible also as nntp and via google groups) and the #mozilla-portugal IRC channel was created. This is a start; I hope to have time in Q3 to help launch a real Portugal site and help them grow beyond localization, because I think that communities focused on only one activity have no room to grow or renew themselves (you also need coding, QA, events, marketing...).

I also started looking at Babelzilla's new platform rewrite project, meant to replace the current aging platform (https://github.com/BabelZilla/WTS/), to see if I can help Jürgen, the only Babelzilla dev, build a community around his project. Maybe some of the experience I gained through Transvision will be transferable to Babelzilla (it was a one-man effort; now 4 people commit regularly out of 10 committers). We'll see in the next quarters if I can help somehow; I only had time so far to install the app locally.

In terms of events, this was a quiet quarter: apart from our l10n-drivers work week, the only localization event I attended was the localization sprint over a whole weekend in the Paris office. Clarista, the main organizer, blogged about it in French. Many thanks to her and the whole community that came over; it was very productive, we will definitely do it again and maybe make it a recurring event.
Summary

This quarter was a good balance between shipping, tooling and community building. The beginning of the quarter was really focused on shipping Australis and as usual with big releases, we created scripts and tools that will help us ship better and faster in the future. Tooling and in particular Transvision work which is probably now my main project, took most of my time in the second part of the quarter.

Community building was, as usual, a constant in my work. The one thing I find more difficult now in this area is finding time for it in the evenings and on weekends (when most potential volunteers are available for synchronous communication), basically because it conflicts with my family life a bit. I am trying to recruit more efficiently using asynchronous communication tools (email, forums…), but as long as I can get 5 to 10 additional people per quarter to work with me, it should be fine for scaling our projects.

Categorieën: Mozilla-nl planet

Erik Vold: Jetpack Pro Tip - JPM --prefs

di, 30/09/2014 - 02:00

JPM allows you to dynamically set preferences to be used when an add-on developer runs jpm run or jpm test. I added this new --prefs feature yesterday, because Firefox DevTools requested it; it is available in JPM 0.0.16.

With --prefs you can point to a JSON file, which should contain an object with a key for each pref that you want set, where each key's value is the desired value for that pref. Here is an example JSON file, used with jpm test --prefs ~/firefox-prefs.json:

{ "extensions.test.pref": true }

This is the static way to set prefs; if you want dynamic prefs, you can use a CommonJS file instead, with jpm test --prefs ~/firefox-prefs.js, where ~/firefox-prefs.js looks something like this:

var prefs = {};
prefs["extensions.test.time"] = Date.now();
module.exports = prefs;
Categorieën: Mozilla-nl planet

Erik Vold: Processing Jetpack

di, 30/09/2014 - 02:00

This is the first post of my new Jetpack Labs series, which is a project that I am working on in my personal time.

I think Processing is a great language because it is very simple and good at what it was meant to do. I’ve only had a little time to try hacking on some Processing art projects, and it’s been a lot of fun (I will post those scripts when they are done). However, using the Java Processing client was not such a pleasant experience, and I thought that making a Firefox add-on using Processing-js with the same features would not be hard. This is partly what led me to write about my Art Tools for Firefox idea in February.

This week I found some time to hack a prototype together, and it’s working pretty well now, you can find the source on Github at jetpack-labs/processing-jetpack.

At the moment this add-on is using Scratchpad as an editor, but in the future I want to use the WebIDE. Also at the moment I’ve only added a “Processing Start” menuitem; there should also be pause and stop menuitems, and there should be corresponding buttons for these actions. All of this and more are features that need to be added, and on top of that I would like to integrate this add-on with openprocessing.org. So if you’re interested in the project, this is my request for contributors :)

There is a lot of work to do here still.

Categorieën: Mozilla-nl planet

Daniel Stenberg: A day in the curl project

ma, 29/09/2014 - 22:59

I maintain curl and lead the development there. This is how I spend my time an ordinary day in the project. Maybe I don’t do all of these things every single day, but sometimes I do and sometimes I just do a subset of them. I just want to give you a look into what I do and why I don’t add new stuff more often or faster… I spend about one to three hours on the project every day. Let me also stress that curl is a tiny little project in comparison with many other open source projects. I’m certainly not saying otherwise.

the new bug

Someone submits a new bug in the bug tracker or on one of the mailing lists. Most initial bug reports lack sufficient details, so the first thing I do is ask for more info and possibly ask the submitter to try a recent version, as very often we get bugs reported on very old versions. Many bug reports take several demands for more info before the necessary details have been provided. I don’t really start to investigate a problem until I feel I have a sufficient amount of details. We’re a very small core team that acts on other people’s bugs.

the question by a newbie in the project

A new person shows up with a question. The question is usually similar to a FAQ entry or an example but not exactly. It deserves a proper response. This kind of question can often be answered by anyone, but also most people involved in the project don’t feel the need or “familiarity” to respond to such questions and therefore remain quiet.

the old mail I haven’t responded to yet

I want every serious email that reaches the mailing lists to get a response, so all mails that neither I nor anyone else responds to I keep around in my inbox and when I have idle time over I go back and catch up on old mails. Some of them can then of course result in a new bug or patch or whatever. Occasionally I have to resort to simply saving away the old mail without responding in order to catch up, just to cut the list of outstanding things to do a little.

the TODO list for my own sake, things I’d like to get working on

There are always things I really want to see done in the project, and I work on them far too little really. But every once in a while I ignore everything else in my life for a couple of hours and spend them on adding a new feature or fixing something I’ve been missing. Actual development of new features is a very small fraction of all time I spend on this project.

the list of open bug reports

I regularly revisit this list to see what I can do to push the open ones forward. Follow-up questions, deep dives into source code and specifications or just the sad realization that a particular issue won’t be fixed within the nearest time (year?) so that I close it as “future” and add the problem to our KNOWN_BUGS document. I strive to keep the bug list clean and only keep relevant bugs open. Those issues that are not reproducible, are left without the proper attention from the reporter or otherwise stall will get closed. In general I feel quite lonely as responder in the bug tracker…

the mailing list threads that are sort of dying but I do want some progress or feedback on

In my primary email inbox I usually keep ongoing threads around. Lots of discussions just silently stop getting more posts and thus slowly wither away further up the list to become forgotten and ignored. With some interval I go back to see if the posters are still around, if there’s any more feedback or whatever in order to figure out how to proceed with the subject. Very often this makes me get nothing at all back and instead I just save away the entire conversation thread, forget about it and move on.

the blog post I want to do about a recent change or fix I did I’d like to highlight

I try to explain some changes to the world in blog posts. Not all changes, but the ones that are somehow noteworthy as they perhaps change the way things have been or introduce new fun features perhaps not that easily spotted. Of course all features are always documented etc, but sometimes I feel I need to put some extra focus on things in a more free-form style. Or I just write about meta stuff, like this very posting.

the reviewing and merging of patches

One of the most important tasks I have is to review patches. I’m basically the only person in the project who volunteers to review patches against any angle or corner of the project. When people have spent time and effort and gallantly send the results of their labor our way in the best possible format (a patch!), the submitter deserves a good review and proper feedback. Also, paving the road for more patches is one of the best way to scale the project. Helping newcomers become productive is important.

Patches are preferably posted on the mailing lists but there’s also some coming in via pull requests on github and while I strongly discourage that (due to them not getting the same attention and possible scrutiny on the list like the others) I sometimes let them through anyway just to be smooth.

When the patch looks good (or sometimes good enough and I just edit some minor detail), I merge it.

the non-disclosed discussions about a potential security problem

We’re a small project with a wide reach, and security problems can potentially have grave impact on users. We take security seriously, and we very often have at least one non-public discussion going on about a problem in curl that may have security implications. We then work on phrasing security advisories, working out exactly which versions are vulnerable, producing patches for at least the most recent of the affected versions, and so on.
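Working out exactly which versions are vulnerable usually comes down to bracketing the flaw between the release that introduced it and the release that fixes it. A minimal sketch of that bookkeeping, with purely illustrative version numbers (not taken from any real advisory):

```python
def vulnerable(versions, introduced, fixed):
    """Given a chronologically ordered list of releases, return those
    affected by a flaw introduced in `introduced` and repaired in
    `fixed` (the fixing release itself is not vulnerable)."""
    start = versions.index(introduced)
    end = versions.index(fixed)
    return versions[start:end]
```

A real advisory also has to handle flaws older than the oldest listed release and version strings that don’t sort lexically, which this sketch deliberately ignores.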

tame stackoverflow

stackoverflow.com has become almost a Wikipedia for source code and programming issues (although it isn’t a wiki), and the site is one of the primary referrers to curl’s web site these days. I tend to glance over the curl and libcurl related questions and offer my answers at times. If nothing else, it helps keep the amount of disinformation low.

I strongly disapprove of people filing bug reports in such places, or asking very detailed (lib)curl core questions there that should have been asked on the curl-library list.

there are idle times too

Yeah. Not very often, but sometimes I just need a day off from all this. Sometimes I don’t find the motivation or energy to dig into that terrible, seldom-occurring bug on a platform I’ve never seen in person. A project like this never ends. The same day we ship a new release, we reset our clocks and are back to improving curl, fixing bugs and cleaning things up for the next release. Forever and ever, until the end of time.


Categorieën: Mozilla-nl planet

Lukas Blakk: New to Bugzilla

ma, 29/09/2014 - 20:24

I believe it was a few years ago, possibly more, when someone (was it Josh Matthews? David Eaves?) added a feature to Bugzilla that indicates when a person is “New to Bugzilla”. It is a visual cue next to their username, and its purpose is to help others remember that not everyone in the Bugzilla soup is a veteran accustomed to our jargon, customs, and best practices. This visual cue came in handy three weeks ago when I encouraged 20 new contributors to sign up for Bugzilla: 20 people who have only recently begun their journey towards becoming Mozilla contributors and open source mavens. In setting them loose upon our bug tracker I’ve observed two things:

ONE: The “New to Bugzilla” flag does not stay up long enough. I’ll file a bug on this and look into how long it currently does stay up, and recommend that if possible we should have it stay up until the following criteria are met:
* The person has made at least 10 comments
* The person has put up at least one attachment
* The person has either reported, resolved, been assigned to, or verified at least one bug
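The proposed rule above could be sketched as a simple predicate (hypothetical code; the real Bugzilla extension would pull these counts from the database, and the function name is made up):

```python
def still_new_to_bugzilla(comments, attachments, bugs_touched):
    """The 'New to Bugzilla' badge stays visible until ALL three
    criteria are met. `bugs_touched` counts bugs the user has
    reported, resolved, been assigned to, or verified."""
    experienced = (comments >= 10
                   and attachments >= 1
                   and bugs_touched >= 1)
    return not experienced
```

Note that the three criteria are conjunctive: a user with a dozen comments but no attachment would still carry the badge.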

TWO: This one is a little harder – it involves more social engineering. Sometimes people might be immune to the “New to Bugzilla” cue or overlook it, and in some cases this has resulted in responses to bugs filed by my cohort of Ascenders where the commenter was neither helpful nor moving the issue forward. I’ve been fortunate to be in person with the Ascend folks and can tell them to let me know if this happens, but I can’t fight everyone’s fights for them over the long haul. So instead we should build into the system a way to make sure that when someone who is not new to Bugzilla replies immediately after a “New to Bugzilla” user, there is a reminder in the comment field – something along the lines of “You’re about to respond to someone who’s new around here, so please remember to be helpful”. Off to file the bugs!
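The trigger for that reminder is itself a one-line check (a hypothetical sketch; the flag names are invented for illustration, not Bugzilla internals):

```python
def needs_courtesy_reminder(previous_commenter_is_new, replier_is_new):
    """Show the 'be helpful, they're new here' banner only when a
    veteran is about to reply directly after a 'New to Bugzilla'
    user; two newcomers talking to each other need no banner."""
    return previous_commenter_is_new and not replier_is_new
```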


Jordan Lund: This Week In Releng - Sept 21st, 2014

ma, 29/09/2014 - 20:08

Major Highlights:

  • shipped 10 products in less than one day

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):


Jordan Lund: This Week In Releng - Sept 7th, 2014

ma, 29/09/2014 - 19:44

Major Highlights

  • big time saving in releases thanks to:
    • Bug 807289 - Use hardlinks when pushing to mirrors to speed it up
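The idea behind Bug 807289 is that a hardlink makes a new directory entry point at the same file data, so “copying” a release into a mirror staging area on the same filesystem moves no bytes at all. A small, self-contained Python illustration of the technique (not the actual release tooling, and the file names are made up):

```python
import os
import tempfile

def push_via_hardlink(src, dst):
    """Stage `dst` as a hardlink to `src`: same inode, no data copied,
    so the 'push' completes in constant time regardless of file size."""
    if os.path.exists(dst):
        os.remove(dst)
    os.link(src, dst)

workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "firefox.tar.bz2")
with open(src, "wb") as f:
    f.write(b"release bits")
dst = os.path.join(workdir, "mirror-copy.tar.bz2")
push_via_hardlink(src, dst)
```

The catch is that hardlinks only work within one filesystem, which is why this optimization applies to local mirror staging rather than network transfers.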

Completed work (resolution is 'FIXED'):


In progress work (unresolved and not assigned to nobody):


Curtis Koenig: The Curtis Report 2014-09-26

ma, 29/09/2014 - 19:38

So my last report failed to mention something important: there is a lot I do that is not in this report. It only covers noteworthy items outside of run-the-business (RTB) activities. I do a good deal of bug handling, input, triage and routing to get things to the right people, and I remove bad, invalid or mis-tagged items. I answer emails on projects and other items – just general work stuff. Last week had lots of vendor work (as noted below), and while that is kind of RTB, it’s usually not this heavy, and we had two rush reviews, so I felt they were worthy of note.

What I did this week
  • kit herder community stuff
  • [vendor redacted] communications
  • [vendor redacted] review followup
  • [vendor 2 redacted] rush review started
  • Tribe pre-planning for next month
  • [vendor redacted] follow ups
  • triage security bugs
  • DerbyCon prep / registration
  • bitcoin vendor prep work
  • SeaSponge mentoring
Meetings Attended Mon
  • impromptu [vendor redacted] review discussion
  • status meeting for [vendor redacted] security testing
  • Monday meeting
Tue
  • cloud services team (sort of)
Wed
  • impromptu [vendor redacted] standup
  • MWoS SeaSponge Weekly team meeting
  • Cloud Services Show & Tell
  • Mozillians Town Hall – Brand Initiatives (Mozilla + Firefox)
  • Web Bug Triage
Thu
  • security open mic
Fri-Sun Non Work
  • deal with deer damage to car

