Mozilla Nederland
The Dutch Mozilla community

Subscribe to the Mozilla planet feed
Planet Mozilla - http://planet.mozilla.org/
Updated: 50 min 55 sec ago

Leo McArdle: Letter to my MP on DRIP

zo, 13/07/2014 - 16:25

What follows is a copy of the email I just sent to my MP about the Data Retention and Investigatory Powers Bill (DRIP). I urge you to send a similar email right now.

Dear Robin Walker,

I have no doubt that by now you will have heard of the Data Retention and Investigatory Powers Bill (DRIP) which your Government and the Opposition will try to rail-road through Parliament next week. I also have no doubt that you will have heard of the great deal of criticism surrounding this bill, both from your colleagues within Westminster hailing from all parties, such as David Davis MP and Tom Watson MP, and those outside of Westminster, such as Jim Killock of the Open Rights Group.

In April the European Court of Justice (ECJ) ruled that the Data Retention Directive (DRD) was incompatible with the Charter of Fundamental Rights of the European Union and therefore that the 2006 act enabling the DRD in the UK was a breach of Human Rights. This means what was, and still is, the status quo when it comes to forcing companies to store data on their customers is a breach of fundamental Human Rights. This is the same status quo which the Home Secretary has said that DRIP merely retains. I think it is clear to see why I, and others, have such a problem with DRIP.

The ECJ ruling outlined some very clear ways in which the DRD could be made compatible with Human Rights law, by saying that this cannot be done on a blanket basis and that someone independent must supervise police access. These fundamental points are missing from DRIP.

Furthermore, DRIP goes far further than just retaining the status quo. It makes sweeping amendments to the Regulation of Investigatory Powers Act (RIPA) including the expansion of what a communications service provider is, the extension of these powers to outside the UK and an open door to allow the Government to make new regulations about data retention at will, without the need to debate them fully in Parliament. I am sure you agree that such huge amendments to RIPA need to be subject to full Parliamentary scrutiny.

It is perfectly clear to everybody, including you, I am sure, Mr Walker, that the Government is using the ECJ ruling as a pretext to force through, at great speed, legislation which affects Human Rights, without proper scrutiny or deliberation. The ECJ ruling was in April, and many warned as far back as 2006 that the DRD was flawed. The UK Government has had years to prepare for the DRD being struck down. There is no reason for this emergency legislation, other than to try and sneak sweeping changes under the noses of MPs who have been allowed to go on holiday.

Wherever you stand on where the balance should be between State Security and Civil Liberties (and I would not be surprised if we stand on opposite ends of that balance), you must agree that five days is nowhere near enough time to properly debate and represent all the views on this issue.

It is for this reason that I urge you as my elected representative to vote against DRIP, and do everything you can to urge your colleagues to do the same. At the very least, could you please push for a highly amended bill, with all the sections amending RIPA removed, which serves purely as a stopgap, not for a period of two years, but for a maximum of six months. We need to have this debate now, and not pass the buck on to the next Government in 2016, who will surely pass the buck on again.

In 2015 I will get my first opportunity to vote in a General Election, and while I may feel that this Government has done devastating things to this country, you, Mr Walker, may be able to differentiate yourself from a sea of blue if you stand up for Civil Liberties and Human Rights.

Yours sincerely,
Leo McArdle


Nick Cameron: Rust for C++ programmers - part 8: destructuring

zo, 13/07/2014 - 06:13
First an update on progress. You probably noticed this post took quite a while to come out. Fear not, I have not given up (yet). I have been busy with other things, and there is a section on match and borrowing which I found hard to write and it turns out I didn't understand very well. It is complicated and probably deserves a post of its own, so after all the waiting, the interesting bit is going to need more waiting. Sigh.

I've also been considering the motivation of these posts. I really didn't want to write another tutorial for Rust, I don't think that is a valuable use of my time when there are existing tutorials and a new guide in the works. I do think there is something to be said for targeting tutorials at programmers with different backgrounds. My first motivation for this series of posts was that a lot of energy in the tutorial was expended on things like pointers and the intuition of ownership which I understood well from C++, and I wanted a tutorial that concentrated on the things I didn't know. That is hopefully where this has been going, but it is a lot of work, and I haven't really got on to the interesting bits. So I would like to change the format a bit to be less like a tutorial and more like articles aimed at programmers who know Rust to some extent, but know C++ a lot better and would like to bring up their Rust skills to their C++ level. I hope that complements the existing tutorials better and is more interesting for readers. I still have some partially written posts in the old style so they will get mixed in a bit. Let me know what you think of the idea in the comments.

Destructuring
Last time we looked at Rust's data types. Once you have some data structure, you will want to get that data out. For structs, Rust has field access, just like C++. For tuples, tuple structs, and enums you must use destructuring (there are various convenience functions in the library, but they use destructuring internally). Destructuring of data structures doesn't happen in C++, but it might be familiar from languages such as Python or various functional languages. The idea is that just as you can create a data structure by filling out its fields with data from a bunch of local variables, you can fill out a bunch of local variables with data from a data structure. From this simple beginning, destructuring has become one of Rust's most powerful features. To put it another way, destructuring combines pattern matching with assignment into local variables.

Destructuring is done primarily through the let and match statements. The match statement is used when the structure being destructured can have different variants (such as an enum). A let expression pulls the variables out into the current scope, whereas match introduces a new scope. To compare:
fn foo(pair: (int, int)) {
    let (x, y) = pair;
    // we can now use x and y anywhere in foo

    match pair {
        (x, y) => {
            // x and y can only be used in this scope
        }
    }
}
The syntax for patterns (used after `let` and before `=>` in the above example) in both cases is (pretty much) the same. You can also use these patterns in argument position in function declarations:
fn foo((x, y): (int, int)) {
}
(Which is more useful for structs or tuple-structs than tuples).
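
For example, a minimal sketch with a hypothetical `Point` tuple struct (the names are purely illustrative, and the code follows the same pre-1.0 syntax as the rest of this post):
struct Point(int, int);

// Both tuple-struct arguments are destructured straight into x1, y1, x2, y2.
fn add(Point(x1, y1): Point, Point(x2, y2): Point) -> Point {
    Point(x1 + x2, y1 + y2)
}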

Most initialisation expressions can appear in a destructuring pattern and they can be arbitrarily complex. That can include references and primitive literals as well as data structures. For example,
struct St {
    f1: int,
    f2: f32
}

enum En {
    Var1,
    Var2,
    Var3(int),
    Var4(int, St, int)
}

fn foo(x: &En) {
    match x {
        &Var1 => println!("first variant"),
        &Var3(5) => println!("third variant with number 5"),
        &Var3(x) => println!("third variant with number {} (not 5)", x),
        &Var4(3, St{ f1: 3, f2: x }, 45) => {
            println!("destructuring an embedded struct, found {} in f2", x)
        }
        &Var4(_, x, _) => {
            println!("Some other Var4 with {} in f1 and {} in f2", x.f1, x.f2)
        }
        _ => println!("other (Var2)")
    }
}
Note how we destructure through a reference by using `&` in the patterns and how we use a mix of literals (`5`, `3`, `St { ... }`), wildcards (`_`), and variables (`x`).

You can use `_` wherever a variable is expected if you want to ignore a single item in a pattern, so we could have used `&Var3(_)` if we didn't care about the integer. In the first `Var4` arm we destructure the embedded struct (a nested pattern) and in the second `Var4` arm we bind the whole struct to a variable. You can also use `..` to stand in for all fields of a tuple or struct. So if you wanted to do something for each enum variant but don't care about the content of the variants, you could write:
fn foo(x: En) {
    match x {
        Var1 => println!("first variant"),
        Var2 => println!("second variant"),
        Var3(..) => println!("third variant"),
        Var4(..) => println!("fourth variant")
    }
}

When destructuring structs, the fields don't need to be in order and you can use `..` to elide the remaining fields. E.g.,
struct Big {
    field1: int,
    field2: int,
    field3: int,
    field4: int,
    field5: int,
    field6: int,
    field7: int,
    field8: int,
    field9: int,
}

fn foo(b: Big) {
    let Big { field6: x, field3: y, ..} = b;
    println!("pulled out {} and {}", x, y);
}
As a shorthand with structs you can use just the field name which creates a local variable with that name. The let statement in the above example created two new local variables `x` and `y`. Alternatively, you could write
fn foo(b: Big) {
    let Big { field6, field3, ..} = b;
    println!("pulled out {} and {}", field3, field6);
}
Now we create local variables with the same names as the fields, in this case `field3` and `field6`.

There are a few more tricks to Rust's destructuring. Let's say you want a reference to a variable in a pattern. You can't use `&` because that matches a reference, rather than creating one (and thus has the effect of dereferencing the object). For example,
struct Foo {
    field: &'static int
}

fn foo(x: Foo) {
    let Foo { field: &y } = x;
}
Here, `y` has type `int` and is a copy of the field in `x`.

To create a reference to something in a pattern, you use the `ref` keyword. For example,
fn foo(b: Big) {
    let Big { field3: ref x, ref field6, ..} = b;
    println!("pulled out {} and {}", *x, *field6);
}
Here, `x` and `field6` both have type `&int` and are references to the fields in `b`.

One last trick when destructuring is that if you are destructuring a complex object, you might want to name intermediate objects as well as individual fields. Going back to an earlier example, we had the pattern `&Var4(3, St{ f1: 3, f2: x }, 45)`. In that pattern we named one field of the struct, but you might also want to name the whole struct object. You could write `&Var4(3, s, 45)` which would bind the struct object to `s`, but then you would have to use field access for the fields, or if you wanted to only match with a specific value in a field you would have to use a nested match. That is not fun. Rust lets you name parts of a pattern using `@` syntax. For example `&Var4(3, s @ St{ f1: 3, f2: x }, 45)` lets us name both a field (`x`, for `f2`) and the whole struct (`s`).
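
For example, a minimal sketch reusing the `En` and `St` definitions from earlier (the function name is just for illustration, in the same pre-1.0 syntax as the rest of the post):
fn bar(x: &En) {
    match x {
        // `s @ ...` names the whole struct while the nested pattern still
        // matches `f1` against 3 and binds `f2` to `x`.
        &Var4(3, s @ St{ f1: 3, f2: x }, 45) => {
            println!("f2 is {} and s.f1 is {}", x, s.f1)
        }
        _ => println!("something else")
    }
}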

That just about covers your options with Rust pattern matching. There are a few features I haven't covered, such as matching vectors, but hopefully you know how to use `match` and `let` and have seen some of the powerful things you can do. Next time I'll cover some of the subtle interactions between match and borrowing which tripped me up a fair bit when learning Rust.

Anthony Ricaud: Adopting Omnifocus and GTD

za, 12/07/2014 - 17:07

I've tried to adopt the Getting Things Done method a few times already. Every time, it wasn't a success. I wasn't applying most principles and fell back to noting things down on a collection of small papers. This time, I had a huge advantage: at work, I'm sitting next to Étienne, a big proponent of GTD. He inspired me to try again and answered a lot of questions I had during my adoption.

This time, I chose Omnifocus for my GTD experimentation. It's a bit expensive to buy the three flavours but I was committed. I'll be talking about my experiences via Omnifocus but you should not focus too much on the software. You can adopt GTD with paper, with other software, whatever works for you.

Capturing

In January, I started the capture part. That's when you note down in your GTD system everything you'd like to do. You need to create that habit and do it every time something pops into your head. I use three main methods to collect:

  1. When I'm in front of my computer, I use the ^⌥Space shortcut to open the Quick Entry panel
  2. When I'm not in front of my computer, I use the iPod Touch app
  3. When an email requires some action, I send a message to the mail drop address

I got a huge inbox but I was ok with it because I knew collecting was the first part to get right. There is a big relief in knowing that everything you need or want to do is explicitly written somewhere. You're not afraid of forgetting something anymore.

Capturing your thoughts like this also allows you to stay focused on the current task. You don't have to do that new task right now, you don't have to explore that idea yet. Just trust the system to remind you of it later.

You may also want to start with a mind sweep: sit down in front of a piece of paper, no distractions, for half an hour, and write down everything that comes to mind.

Process

Once you have this exhaustive list of things you want to do, you process it into contexts and projects. You also flag some items you deem important and put important dates on those tasks. I only started doing this mid-January. The tricky part for me was creating the projects and contexts.

Contexts

In GTD, Contexts are things you need to achieve a task. It could be a location, a person or an object. I'm not really using the contexts because most of the time, I just need to be in front of my computer to accomplish work related tasks. I may need to tweak this again but for now, I don't feel the need to dive more in that area.

My contexts:

  • Errands: When I'm neither at home nor at work
  • Home: I don't have an office context because I can work from anywhere. I have a few tasks that require me to be in an office (like printing) but not enough to warrant a full context.
  • People: A nested list of some people and also a phone context
  • Technology: This is where you'll find most of my tasks. I have a nested email folder.
  • Waiting: When I'm waiting on something else to happen.
Projects

Let me give you three examples of real projects:

Fixing a bug

I try to do this a lot :) So I have a template project that I copy when I intend to work on a bug. This is a sequential project, meaning I need to achieve a task before the next one is available.

  1. Find a fix: Well that sounds dumb but this is my first step
  2. Write tests: Even though I may write the tests as I fix the problem, I still keep this reminder to make sure I wrote enough tests
  3. Test on a phone: I will certainly have done this while developing but for small fixes that look obvious, I have been bitten by not testing on a real phone. Hence this reminder.
  4. Put in review: Uploading my patch and explaining my fix.
  5. Wait for review: This is in a waiting context so I can forget about this project until I receive an email about that review. If it's not good, I'll add a task for each comment to address.
  6. Wait for green tests: In a waiting context too because you shouldn't land something if the tests are not green.
  7. Land patch and clean branches: When all is good, I can land my work. This is usually where I'll clean the branches I had to create.
  8. Close bug with link to commit: This is the last step so that people can track the work later.
Feedback on Openweb articles

Nicolas Hoffmann, a crazy hard worker, wrote a few articles on modern CSS practices for the OpenWeb group. I told him I wanted to read them and provide some feedback, but I have no idea when I'll get around to doing that. So I created one task per article. It's out of my mind but I know I'll do it one day because I have this reminder.

Birthday ideas

This is not a project per se. But when someone talks about a topic they like, I try to take a note of it. Then during the review process, I mark it as due a few days before the actual birthday.

In addition to these kinds of projects, I have a few projects called "Work :: Miscellaneous" or "Personal :: Miscellaneous". That's just things I need to do that don't really fit in a project.

Flags, deferred dates and due dates

This is how I have things popping up for attention. I try to use due dates as little as possible because otherwise, one day you end up with 10 urgent things to do and you get stuck. So only tasks that have a hard deadline (like filing taxes) get a due date.

I use flags for the tasks that are important but without a real deadline. During my weekly review (see below), I'll flag things that I want to do next week.

The capture phase was really refreshing because I knew everything was stored somewhere. Via the process phase, it's even more relaxing because I know the system will remind me when I need to do something. That completely changed how I think about being overwhelmed. Before, I had this blurry collection of things to do in my head. They were all collapsing and I had no sense of what was important to do or if I was forgetting something that matters to me. Now, when I feel overwhelmed, I know it just means I need to spend a bit of time in front of Omnifocus to process my inbox.

Review

In February, I started doing reviews more often. First every two weeks and now every week. This is another step that gives me a great deal of comfort. This is when I'll decide what I want to do next week and put flags or due dates on things that I consider important for the coming week. I will also delete some items that I don't feel like doing anymore.

Do!

And this is the biggest part of GTD. Actually doing stuff. If you spend all that time in a tool to organise your tasks, it's not for the sake of it. That's why I did it gradually, to not spend too much time finding the perfect workflow.

I'm really happy with my adoption of the GTD method. It's not perfect; I'm still tweaking here and there.

I encourage you to try it. Reach out to me if you'd like to discuss it, I'd be happy to!


Nigel Babu: Jinxed!

za, 12/07/2014 - 14:10

A couple of weeks ago, I requested L3 access as part of my Sheriffing work and my request was granted. I think I’ve totally jinxed things since then ;)

The tree. IT'S BURNING!

The first Sunday afterward, we had a patch that was landed into aurora inadvertently, causing a massive spike in crashes. I saw it myself and suspected that my copy was corrupt, so I downloaded the latest! Of course, to no avail. I finally noticed the right bug, and Kairo was looking for someone to back it out. I backed it out and triggered a rebuild, which fixed the issue.

The next Saturday, we had mobile imaging failures. This one was fun to fix; I talked to Nick Thomas and Chris Cooper on the phone. All it needed was one command, but it took us some time to get there :-) But hey, it got me mentioned under Friends of Mozilla.

Having more access to fix things somehow makes me feel responsible!


Nigel Babu: Training in Tanzania

za, 12/07/2014 - 13:50

On the last Monday of April, I found myself nervously standing in a room of about 15 people from the e-Government Agency and National Bureau of Statistics in Dar es Salaam. They were waiting for me to start training them in Python and CKAN. I've been programming in Python since 2011, but I've never actually trained people in Python. On the first day, I didn't have any slides. All I had was one PDF from Wikibooks which I was using as material. I didn't even cover the whole material. By the end of the day though, I could sense that it was sinking in with the attendees a bit.

It all started with an email from my manager asking me if I was available to do a training in Tanzania in April. After lots of back and forth, we finalized a date and a trainer to assist in the trainings, and I flew in. Dar es Salaam, strangely, reminded me of growing up in Salalah. I got in a day early to prep for the week and settle in. A trainer looking groggy on a Monday does not bode well!

People who train often don't tell you this: trainings are exhausting. You're most likely to be on your feet all day and walking around the room helping people who're lagging behind. Looking back, the training was both fun and exhausting. I enjoyed talking about Python, though I feel like I need more practice to do it well. As for the CKAN training, I was pretty satisfied with the outcome: by the end of the week, the folks from the e-Gov Agency went in and set up a server with CKAN!

Note to self: Write these posts immediately after the trip before I forget :-)


Armen Zambrano: Introducing Http authentication for Mozharness.

vr, 11/07/2014 - 21:42
A while ago, I asked a colleague (you know who you are! :P) of mine how to run a specific type of test job on tbpl on my local machine and he told me with a smirk, "With mozharness!"

I wanted to punch him (HR: nothing to see here! This is not a literal punch, a figurative one), however he was right. He had good reason to say that, and I knew why he was smiling. I had to close my mouth and take it.

Here's why he said that: most jobs running inside of tbpl are driven by Mozharness; however, they're optimized to run within the protected network of Release Engineering. This is good. This is safe. This is sound. However, when we try to reproduce a job outside of the Releng network, it becomes problematic for various reasons.

Many times we have had to guide people who are unfamiliar with mozharness so they can run it locally with success. (Docs: How to run Mozharness as a developer). However, on other occasions, when it comes to binaries stored on private web hosts, it becomes necessary to loan a machine. A loaned machine can reach those files through internal domains since it is hosted within the Releng network.

Today, I have landed a piece of code that does two things:
  • Allows HTTP authentication to download files behind LDAP credentials
  • Changes URLs to point to publicly reachable domains
This change, plus the recently-introduced developer configs for Mozharness, makes it much easier to run mozharness outside of continuous integration infrastructure.
I hope this will help developers have a better experience reproducing the environments used in the tbpl infrastructure. One less reason to loan a machine!

This makes me *very* happy (see below) since I don't have VPN access anymore.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen Zambrano: Using developer configs for Mozharness

vr, 11/07/2014 - 21:15
To help developers run mozharness, I have landed some configs that can be appended to the command appearing on tbpl.
All you have to do is:
  • Find the mozharness script line in a log from tbpl (search for "script/scripts")
  • Look for the --cfg parameter and add it again, but ending with "_dev.py"
    • e.g. --cfg android/androidarm.py --cfg android/androidarm_dev.py
  • Also add the --installer-url and --test-url parameters as explained in the docs
Developer configs have these things in common:
  • They have the same name as the production one but instead end in "_dev.py"
  • They overwrite the "exes" dict with an empty dict
    • This allows using the binaries in your personal $PATH
  • They overwrite the "default_actions" list
    • The main reason is to remove the action called read-buildbot-configs
  • They fix URLs to point to the right publicly reachable domains
Here are the currently available developer configs. You can help by adding more of them!

Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Kent James: The Thunderbird Tree is Green!

vr, 11/07/2014 - 21:05

For the first time in a while, the Thunderbird build tree is all green. That means that all platforms are building, and passing all tests:

The Thunderbird build tree is green!

Many thanks to Joshua Cranmer for all of his hard work to make it so!


Hub Figuière: Github tracks you by email.

vr, 11/07/2014 - 20:45

That's right. Github tracks you by email. Each Github notification email contains a beacon in the HTML part. Beacons are usually one-pixel images with a unique URL, used to know who viewed the email, triggered by the HTML renderer downloading the image to display it.

Two safeguards against that tracking:

  1. don't automatically download images in emails - a lot of clients allow this or default to it.
  2. view email only in plain text: impossible with some email systems or clients, like K9 on Android or GMail. (This is what I do in Thunderbird.)

So I complained over Twitter, and according to Github's Zach Holman:

"It’s a pretty rad feature for a ton of our users; reading a notification in one should mark the web UI as read too. We dig it."*.

Sorry, but there is no opt-out from tracking. Holman also said:

"you can just disable images. It’s the same functionality in the email as on the web, though. We’re not spying on anything."*

and

"[...] It’s just in this case there’s zero additional information trading hands."*.

Note that recent events showed me I couldn't trust Github's ethics anyway, so I'd rather have them not collect the information at all than have them claim it never changes hands.

This wouldn't be important if Mozilla didn't mostly require Github to contribute to certain projects. I filed bug 1031899. While I can understand the feature, I believe user privacy should be paramount; therefore, not being able to disable tracking is a serious ethics issue.


Gervase Markham: Why Do Volunteers Work On Free Software Projects?

vr, 11/07/2014 - 19:04

Why do volunteers work on free software projects?

When asked, many claim they do it because they want to produce good software, or want to be personally involved in fixing the bugs that matter to them. But these reasons are usually not the whole story. After all, could you imagine a volunteer staying with a project even if no one ever said a word in appreciation of his work, or listened to him in discussions? Of course not. Clearly, people spend time on free software for reasons beyond just an abstract desire to produce good code. Understanding volunteers’ true motivations will help you arrange things so as to attract and keep them. The desire to produce good software may be among those motivations, along with the challenge and educational value of working on hard problems. But humans also have a built-in desire to work with other humans, and to give and earn respect through cooperative activities. Groups engaged in cooperative activities must evolve norms of behavior such that status is acquired and kept through actions that help the group’s goals.

– Karl Fogel, Producing Open Source Software


Joel Maher: I invite you to join me in welcoming 3 new tests to Talos

vr, 11/07/2014 - 16:52

Two months ago, we added session restore tests to Talos; today I am pleased to announce 3 new tests:

  • media_tests – only runs on linux64 and is our first test to measure audio processing. Many thanks to :jesup and Suhas Nandaku from Cisco.
  • tp5o_scroll – imagine if tscrollx and tp5 had a child: not only do we load the page, but we scroll the page. Big thanks go to :avih for tackling this project.
  •  glterrain – The first webgl benchmark to show up in Talos.  Credit goes to :avih for driving this and delivering it.  There are others coming, this was the easiest to get going.

 

Stay tuned in the coming weeks as we have 2 others tests in the works:

  • ts_paint_cold
  • mainthreadio detection

 



Bogomil Shopov: It’s live: Usersnap’s JavaScript error- and XHR log-recorder.

vr, 11/07/2014 - 16:28

I am so happy. Today we released our console recorder (JavaScript errors and XHR logs) and now every web developer can fight bugs like a superhero.

It’s so awesome!


Pascal Chevrel: What I did in Q3

vr, 11/07/2014 - 15:06
A quick recap of what I did in Q3, so that people know what kind of work we do in the l10n-drivers team, and because, as a service team to other departments, a lot of what we do is not necessarily visible.

Tools and code

I spent significantly more time on tooling this quarter than in the past. I am also happy to say that Transvision is now a 6-person team and that we will all be in Brussels for the Summit (see my blog post in April). I like this work; I like creating the small tools and scripts that make my life and localizers' lives better.

  • Two releases of Transvision (release notes) + some preparatory work for future l20n compatibility
  • Created a mini-dashboard for our team so as to help us follow FirefoxOS work
  • Wrote the conversion script to convert our Serbian Cyrillic string repository to Serbian Latin (see this blog post)
  • Turned my langchecker scripts (key part of the Web Dashboard) into a github project and worked with Flod on improving our management scripts for mozilla.org and fhr. A recent improvement is that we can now automatically import translations done on Locamotion. You can see a list of the changes in the release notes.
  • Worked on scripts allowing us to query Bugzilla without using the official API (because the data I want is specific to the Mozilla customizations we need for locales); these will probably be part of the Webdashboard soon, so as to be able to extract Web localization bugs from multiple components (gist here). Basically, I had the idea to use the CSV export feature for advanced search in Bugzilla as a public read-only API :)
  • Several python patches to mozilla.org to fix l10n bugs or improve our tools to ship localized pages (Bug 891835, Bug 905165, Bug 904703).
Mozilla.org localization

Since we merged all of our major websites (mozilla.org, mozilla.com, mozilla-europe.org, mozillamessaging.com) under the single mozilla.org domain name two years ago with a new framework based on Django, we have gained in consistency, but localization of several backends under one single domain and a new framework slowed us down for a while. I'd say that we are now mostly back to the old mozilla.com speed of localization; lots of bugs and features were added to Bedrock (nickname of our Django-powered site), we have a very good collaboration with the webdev/webprod teams on the site, and we are more people working on it. I think this quarter localizers felt that a lot more work was asked of them on mozilla.org; I'll try to make sure we don't lose locales on the road. This is a website that hosts content for 90 locales, but we are back to speed with tons of new people!

  • Main Firefox download page (and all the download buttons across the site) finally migrated to Bedrock, our Django instance. Two major updates to that page this quarter (+50 locales), with more to come next quarter; this is part of a bigger effort to simplify our download process, stop maintaining so many different specialized download pages, and improve SEO.
  • Mozilla.org home page is now l10n-friendly and we just shipped it in 28 languages. Depending on your locale, visitors see custom content (news items, calls for contribution or translation...)
  • Several key high-traffic pages (about product updates) are now localized and maintained at large scale (50+ locales)
  • Newsletter center and newsletter subscription process largely migrated to Bedrock and additional locales supported (but there is still work to do there)
  • The plugincheck web application is also largely migrated to Bedrock (61 locales on bedrock, about 30 more to migrate before we can delete the older backend and maintain only one version)
  • The contribute page scaled up to 28 locales with local teams of volunteers behind it answering people that contact us
  • Firefox OS consumer and industry sub-sites released/updated for +10 locales, with some geoIP in addition to locale detection for tailored content!
  • Many small updates to other existing pages and templates
Community growth

This quarter, I tried to spend some time looking for localizers to work on web content, as well as accompanying volunteers that contact us. I know that I am good at finding volunteers that share our values and are achievers; unfortunately, I don't have that much time to spend on that. Hopefully I will be able to spend a few days on it every quarter, because we need to grow and we need to grow with the best open source contributors! :)

  • About 20 people got involved for the following locales: French, Czech, Catalan, Slovenian, Tamil, Bengali, Greek, Spanish (Spain variant), Swedish. Several became key localizers and will be at the Mozilla summit
  • A couple of localizers moved from mozilla.org localization to product localization where their help was more needed; I helped them by finding new people to replace them on web localization and/or empowering existing community members to avoid any burn-out
  • I spent several days in a row specifically helping the Catalan community as it needed help to scale since they now also do all the mobile stuff. I opened a #mozilla-cat IRC channel and found 9 brand new volunteers, some of them professional translators, some of them respected localizers from other open source projects. I'll probably spend some more time to help them next quarter to consolidate this growth. I may keep this strategy every quarter since it seems to be efficient (look for localizers in general and also help one specific team to grow and organize itself to scale up)
Other
  • Significant localization work for Firefox Health Report, both Desktop (shipped) and Mobile versions (soon to be shipped)
  • Lots of meetings for lots of different projects for next quarter :)
  • Two work weeks, one focused on tooling in Berlin, one focused on training my new colleagues Peying and Francesco (but to be honest, Francesco didn't need much of it thanks to his 10 years of involvement in Mozilla as a contributor :) )
  • A lot of work to adjust my processes to work with my new colleague Francesco Lodolo (also an old-timer in the community, he is the Italian Firefox localizer). Kudos to Francesco for helping me with all of the projects! Now I can go on holidays knowing that I have a good backup :)
French community involvement
  • In the new Mozilla Paris office I organized a meeting with the LinuxFR admins, because I think it's important to work with the rest of the Open Source ecosystem
  • With Julien Wajsberg (Gaia developer) we organized a one-day meeting with the Dotclear community, a popular blogging platform alternative to Wordpress in France (purely not-for-profit), because we think it's important to work with projects that build software that allows people to create content on the Web
  • Preparation of more open source events in the Paris office
  • We migrated our server (hosting Transvision, womoz.org, mozfr.org...) to the latest Debian Stable, which finally brings us a decent modern version of PHP (5.4). We grew our admin community with 2 more people, Ludo and Geb :). Our server flies!

In a nutshell, a very busy quarter! If you want to speak about some of it with me, I will be at the Mozilla Summit in Brussels this week :)


Pascal Chevrel: My Q2-2014 report

vr, 11/07/2014 - 15:06
Summary of what I did last quarter (regular l10n-drivers work such as patch reviews, pushes to production, meetings and past projects maintenance excluded) .
Australis release At the end of April, we shipped Firefox 29 which was our first major redesign of the Firefox user interface since Firefox 4 (released in 2011). The code name for that was Australis and that meant replacing a lot of content on mozilla.org to introduce this new UI and the new features that go with it. That also means that we were able to delete a lot of old content that now had become really obsolete or that was now duplicated on our support site.

Since this was a major UI change, we decided to show an interactive tour of the new UI to both new users and existing users upgrading to the new version. That tour was fully localized in a few weeks time in close to 70 languages, which represents 97.5% of our user base. For the last locales not ready on time, we either decided to show them a partially translated site (some locales had translated almost everything or some of the non-translated strings were not very visible to most users, such as alternative content to images for screen readers) or to let the page fall back to the best language available (like Occitan falling back to French for example).

Mozilla.org was also updated with 6 new product pages replacing a lot of old content as well as updates to several existing pages. The whole site was fully ready for the launch with 60 languages 100% ready and 20 partially ready, all that done in a bit less than 4 weeks, parallel to the webdev integration work.

I am happy to say that thanks to our webdev team, our amazing l10n community and with the help of my colleagues Francesco Lodolo (also Italian localizer) and my intern Théo Chevalier (also French localizer), we were able to not only offer a great upgrading experience for the quasi totality of our user base, we were also able to clean up a lot of old content, fix many bugs and prepare the site from an l10n perspective for the upcoming releases of our products.

Today, for a big locale spanning all of our products and activities, mozilla.org is about 2,000 strings to translate and maintain (+500 since Q1), for a smaller locale, this is about 800 strings (+200 since Q1). This quarter was a significant bump in terms of strings added across all locales but this was closely related to the Australis launch, we shouldn't have such a rise in strings impacting all locales in the next quarters.
Transvision releases

Last quarter we did 2 releases of Transvision with several features targeting our 3 audiences: localizers, localization tools, and current and potential Transvision developers.

For our localizers, I worked on a couple of features, one is quick filtering of search results per component for Desktop repositories (you search for 'home' and with one click, you can filter the results for the browser, for mail or for calendar for example). The other one is providing search suggestions when your search yields no results with the best similar matches ("your search for 'lookmark' yielded no result, maybe you were searching for 'Bookmark'?").

For the localization tools community (software or web apps like Pontoon, Mozilla translator, Babelzilla, OmegaT plugins...), I rewrote entirely our old Json API and extended it to provide more services. Our old API was initially created for our own purposes and basically was just giving the possibility to get our search results as a Json feed on our most popular views. Tools started using it a couple of years ago and we also got requests for API changes from those tool makers, therefore it was time to rewrite it entirely to make it scalable. Since we don't want to break anybody's workflow, we now redirect all the old API calls to the new API ones. One of the significant new service to the API is a translation memory query that gives you results and a quality index based on the Levenshtein distance with the searched terms. You can get more information on the new API in our documentation.

I also worked on improving our internal workflow and make it easier for potential developers wanting to hack on Transvision to install and run it locally. That meant that now we do continuous integration with Travis CI (all of our unit tests are ran on each commit and pull request on PHP 5.4 and 5.5 environments), we have made a lot of improvements to our unit tests suite and coverage, we expose to developers peak memory usage and time per request on all views so as to catch performance problems early, and we also now have a "dev" mode that allows getting Transvision installed and running on the PHP development server in a matter of minutes instead of hours for a real production mode. One of the blockers for new developers was the time required to install Transvision locally. Since it is a spidering tool looking for localized strings in Mozilla source repositories, it needed to first clone all the repositories it indexes (mercurial/git/svn) which is about 20GB of data and takes hours even with a fast connection. We are now providing a snapshot of the final extracted data (still 400MB ;)) every 6 hours that is used by the dev install mode.

Check the release notes for 3.3 and 3.4 to see what other features were added by the team (/ex: on demand TMX generation or dynamic Gaia comparison view added by Théo, my intern).
Web dashboard / Langchecker

The main improvement I brought to the web dashboard this quarter is probably the deadline field added to all of our .lang files, which allows us to better communicate the urgency of projects and gives localizers an extra parameter to prioritize their work.

Theo's first project for his internship was to build a 'project' view on the web dashboard that we can use to get an overview of the translation of a set of pages/files. This was used for the Australis release (ex: http://l10n.mozilla-community.org/webdashboard/?project=australis_all) but can be used for any other project we want to define. Here is an example for the localization of two Android add-ons I did for the World Cup, which we tracked with .lang files.

We brought other improvements to our maintenance scripts for example to be able to "bulk activate" a page for all the locales ready, we improved our locamotion import scripts, started adding unit tests etc. Generally speaking, the Web dashboard keeps improving regularly since I rewrote it last quarter and we regularly experiment using it for more projects, especially for projects which don't fit in the usual web/product categories and that also need tracking. I am pretty happy too that now I co-own the dashboard with Francesco who brings his own ideas and code to streamline our processes.
Théo's internship

I mentioned it before: our main French localizer, Théo Chevalier, is doing an internship with me and Delphine Lebédel as mentors. This is the internship that ends his 3rd year of engineering (in a 5-year curriculum). He is based in Mountain View, started early April and will be with us until late July.

He is basically working on almost all of the projects I, Delphine and Flod work on.

So far, apart from regular work as an l10n-driver, he has worked for me on 3 projects: the Web Dashboard projects view, building TMX files on demand on Transvision, and the Firefox Nightly localized page on mozilla.org. This last project I haven't talked about yet, and he blogged about it recently. In short, the first page that is shown to users of localized builds of Firefox Nightly can now be localized, and by localized we don't just mean translated; we mean that we have a community block managed by the local community inviting Nightly users to join their local team "on the ground". So far, we have this page in French, Italian, German and Czech. If your locale workflow is to translate mozilla-central first, this is a good tool for you to reach a potential technical audience to grow your community.
Community

This quarter, I found 7 new localizers (2 French, 1 Marathi, 2 Portuguese/Portugal, 1 Greek, 1 Albanian) to work with me essentially on mozilla.org content. One of them, Nicolas Delebeque, took the lead on the Australis launch and coordinated the French l10n team since Théo, our locale leader for French, was starting his internship at Mozilla.

For Transvision, 4 people in the French community (after all, Transvision was created initially by them ;)) expressed interest or sent small patches to the project. Maybe all the efforts we made to make the application easy to install and hack are starting to pay off; we'll probably see in Q3/Q4 :)

I spent some time trying to help rebuild the Portugal community which is now 5 people (instead of 2 before), we recently resurrected the mozilla.pt domain name to actually point to a server, the MozFR one already hosting the French community and WoMoz (having the French community help the Portuguese one is cool BTW). A mailing list for Portugal was created (accessible also as nntp and via google groups) and the #mozilla-portugal IRC channel was created. This is a start, I hope to have time in Q3 to help launch a real Portugal site and help them grow beyond localization because I think that communities focused on only one activity have no room to grow or renew themselves (you also need coding, QA, events, marketing...).

I also started looking at Babelzilla's new platform rewrite project to replace the current aging platform (https://github.com/BabelZilla/WTS/) to see if I can help Jürgen, the only Babelzilla dev, with building a community around his project. Maybe some of the experience I gained through Transvision will be transferable to Babelzilla (it was a one-man effort; now 4 people commit regularly out of 10 committers). We'll see in the next quarters if I can help somehow; I only had time so far to install the app locally.

In terms of events, this was a quiet quarter, apart from our l10n-drivers work week, the only localization event I was in was the localization sprint over a whole weekend in the Paris office. Clarista, the main organizer blogged about it in French, many thanks to her and the whole community that came over, it was very productive, we will definitely do it again and maybe make it a recurring event.
Summary

This quarter was a good balance between shipping, tooling and community building. The beginning of the quarter was really focused on shipping Australis and as usual with big releases, we created scripts and tools that will help us ship better and faster in the future. Tooling and in particular Transvision work which is probably now my main project, took most of my time in the second part of the quarter.

Community building was as usual a constant in my work. The one thing that I find more difficult now in this area is finding time for it in the evening/weekend (when most potential volunteers are available for synchronous communication), basically because it conflicts with my family life a bit. I am trying to be more efficient at recruiting using asynchronous communication tools (email, forums…), but as long as I can get 5 to 10 additional people per quarter to work with me, it should be fine for scaling our projects.


Pascal Chevrel: My 2009 yearly report

vr, 11/07/2014 - 15:05

I am not great at blogging in English and communicating about my work, so I thought that publishing my yearly report would compensate for that ;)

All in all, it has been a busy year, nobody in the localization drivers team and among our localization teams had time to get bored, lots of product releases, lots of pages, lots of travel and events too. I am listing below what I have been directly leading and/or participating in, not some other projects where I was just giving a minor help (usually to my colleagues Stas and Delphine).

Products:
  • 2 major releases: Firefox 3.5 and Thunderbird 3 (with a new multilingual Mozilla Messaging website)
  • 26 other releases (maintenance, beta and RC releases)
Mozilla Europe website(s):
  • 3 new locales: Serbian, Bulgarian, Swedish, our geographic coverage of Europe is now almost complete
  • New content for 3.5 release, minor releases and many side projects
  • major cleanups of content and code for easier maintenance (especially maintenance releases) and more features (html5 support, per locale menu navigation, visits now with referrer hints for better locale detection...)
  • Site now sharing same metrics application as other mozilla sites
  • More per country news items than previous years (events, new community sites, community meetings...)
  • 46 blog posts written by our European community on blogs.mozilla-europe.org
  • Our events management web application was used for 10 European events (I created it back in summer 2008)
Mozilla.com website
  • We now have a localized landing page for our 74 locales on top of up to date in-product pages
  • Geolocation page for all locales
  • 3.0 and 3.5 major updates offered for all locales
  • Localized beta download pages to encourage beta-testing of non-English versions of Firefox
  • Better code for our localized pages (better right-to-left, language switching, simpler templates...)
  • Whatsnew/Firstrun pages now warn the user in his language if his Flash plugin is outdated (for better security and stability)
  • Lots of content, css, graphics updates all along the year, everywhere
  • Firefox 3.6 in-product pages (firstrun, whatsnew, major update) localization underway, plugincheck page localization almost technically ready for localization
  • Fennec pages being localized for 1.0 final
Marketing Sites made multilingual

Mozilla Education:
  • Gave a lecture at the Madrid university about opensource, the mozilla project and community management.
  • MMTC Madrid one week sprint in July, gave Mozilla classes with Paul Rouget and Vivien Nicolas to 30 students (evaluation TBD)
  • Organized CoMeTe project at Evry university, France,  in October with Fabien Cazenave and Laurent Jouanneau as teachers
Community work
  • Found new localizers for a dozen locales, helped some create blogs, community sites and local events
  • Many community meetings, IRC or IRL
  • Participated in Firefox 3.5 party in Madrid
  • I am since May on twitter, communicating about my work and Mozilla in Europe
  • Organized a theming contest in collaboration with the Dotclear project for our community blog, won by Marie Alhomme
  • Created with Julia a Mozilla Planet for French Speakers
  • Lots of Web l10n QA with Delphine plus some personal QA work on 3.6 looking for Linux specific Firefox bugs
  • Went to 21 events (7 of them internal Mozilla events) like Fosdem, MozCamps Chile + Prague, Ubuntu parties, Solutions Linux, W3C event, Firefox 5 year anniversary, Firefox 3.5 party Madrid, JDLL, Geneva Community meetup... Lots of time abroad and travelling.
  • Blogging in French about the Mozilla project and its place in the FLOSS ecosystem, current focus on Mozilla QA and how to get involved in the project.
Other
  • Some documentation work (mostly on QA of localized pages)
  • Many updates to the webdashboard
  • Helped Delphine set up the Womoz website and gave general advice on community building
  • Several press interviews for Spain as well as conferences given about the Mozilla project
  • Started this week with Stas and Patrick the localization work needed for the Browser Choice Screen in Windows for Europe
  • Lots of technical self teaching while building projects, I even did my first Jetpack extension this week, Yay!
  • A new espresso machine :)

Happy new year 2010 to all mozillians and FOSS lovers in the world :)

Pascal Chevrel: Transliterating Serbian Cyrillic to Serbian Latin on Linux with PHP

vr, 11/07/2014 - 15:05

Mozilla has been shipping Firefox in Serbian for many years, and we ship it in Cyrillic script; that means that our software, our sites, and our documentation are all in Cyrillic for Serbian.

You may not know it (especially if you are not European), but Serbian can be written in both Cyrillic and Latin scripts; people live with the two writing systems, a phenomenon called synchronic digraphia.

I was wondering if it would be easy to create a version of Firefox or Firefox OS in Latin script, and since our l10n community server just got an upgrade and now has PHP 5.4, I played a bit with the recent Transliterator class in that version, which uses the ICU library.

Basically, it works, and it works well. With one caveat though: I found out that the ICU library shipped with Linux distros is old and exposes a bug in Serbian transliteration that was fixed in more recent ICU libraries.

How does it work? Here is a code example:

$source = 'Завирите у будућност';
$t = Transliterator::create('Serbian-Latin/BGN');
print "Serbian (Cyrillic): $source <br>";
print "Serbian (Latin): {$t->transliterate($source)}";

And here is the output:

Cyrillic: Завирите у будућност
Latin: Zavirite u budućnost

The bug I mentioned earlier is that the Cyrillic letter j is systematically converted to an uppercase J even if the letter is inside a word and should be lowercase.

Example: This string : Најгледанији сајтови
Should be transliterated to: Najgledaniji sajtovi
But my script transliterated it to: NaJgledaniJi saJtovi

I filed a bug in the PHP ticket system and got an immediate response that my test script actually works on Windows. After some investigation by the PHP dev, it turns out that there is no bug on the PHP side; the bug is in the ICU library that ships with the OS, which happens to be version 48.x on Linux distros, while Windows enjoys a more recent version 50 and the ICU project itself is at version 51.2.

Unfortunately, I couldn't find any .deb package or ppa for Ubuntu that would propose a more recent ICU library version, Chris Coulson from Canonical pointed me to this ticket in Launchpad: [request] upgrade to icu 50, but this was an unassigned one.

As a consequence, I had to compile the newer ICU library myself to make it work. Fortunately, I could follow almost all the steps indicated in this post for a CentOS distro; I only had to adjust the php.ini locations (and also update the php.ini file for the development server) and restart Apache :)

So now, I can easily transliterate a full repository from Cyrillic to Latin. I put a gist file online with the full script doing the conversion of a repo if you want to use it.


Soledad Penades: Speaking at OneShotLondon NodeConf

vr, 11/07/2014 - 13:06

“Just turn it into a node module,” and other mantras Edna taught me

The story of leaving behind a random mix of Python + php + bash + makefile + Scons scripts to totally embrace using Node, modules, standard callbacks, browserify, and friends to build toys that bleep and bloop with MIDI, WebGL and Web Audio.

As you can maybe deduce, this might not be your average super expert node.js talk, but a story of learning with a non-appointed mentor and a spontaneous community, and improving and making the most out of node.js—and how it informed and shaped the rest of my coding philosophy, both inside and outside of Mozilla.

I must confess that I’m really humbled and flattered to be amongst this amazing line up of true node experts.

UUUUUUUHHH THE EXPECTATIONS!—feeling a bit like an impostor now.

Next next Saturday 19th of July. See you there? :-)



Ludovic Hirlimann: Thunderbird 31 coming soon to you and needs testing love

vr, 11/07/2014 - 12:39

We just released the second beta of Thunderbird 31. Please help us improve Thunderbird quality by uncovering bugs now in Thunderbird 31 beta so that developers have time to fix them.

There are two ways you can help

- Use Thunderbird 31 beta in your daily activities. For problems that you find, file a bug report that blocks our tracking bug 1008543.

- Use Thunderbird 31 beta to do formal testing. Use the moztrap testing system to run tests: choose "run tests", find the Thunderbird product and choose the 31 test run.

Visit https://etherpad.mozilla.org/tbird31testing for additional information, and to post your testing questions and results.

Thanks for contributing and helping!

Ludo for the QA team

Updated links


Henrik Skupin: Firefox Automation report – week 23/24 2014

vr, 11/07/2014 - 10:59

In this post you can find an overview about the work happened in the Firefox Automation team during week 23 and 24.

Highlights

To continue the training for Mozilla related test frameworks, we had the 3rd automation training day on June 4th. This time fewer people attended, but we were still able to get a couple of tasks done on oneanddone.

Something which had bothered us for a while is that no push_printurl hook was set up for our mozmill-tests repository. As a result, the landed changeset URL does not get printed to the console during landing. Henrik fixed that on bug 1010563, which allows easier copy & paste of the link into our bugs.

Our team started to work on the new continuous integration system for TPS tests. To be able to manage all the upcoming work ourselves, Henrik asked Jonathan Griffin to move the Coversheet repository from his own account to the Mozilla account. That was promptly done.

In week 24, specifically on June 11th, we had our last automation training day for quarter 2 of 2014. Given the low attendance, we might have to make some changes for future training days. One change might be to have the training on another day of the week. Andreea will probably post updates on that soon.

Henrik was also working on getting some big updates out for Mozmill-CI. One of the most important blockers for us was the upgrade of Jenkins to the latest LTS release. With that a couple of issues got fixed, including the long delays in sending out emails for failed jobs. For more details see the full list of changes.

Individual Updates

For more granular updates of each individual team member please visit our weekly team etherpad for week 23 and week 24.

Meeting Details

If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meetings of week 23 and week 24.

