Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 21 hours 50 min ago

Mozilla Open Policy & Advocacy Blog: Mozilla Submits Comments on FCC Net Neutrality Proposal

Tue, 15/07/2014 - 16:32

Today, Mozilla is filing comments in response to the first of two major deadlines set out by the U.S. Federal Communications Commission (FCC) for its latest net neutrality proposal. The FCC describes these rules as a means of “protecting and promoting the open Internet,” and we are encouraging the FCC to stay true to that ideal.

The FCC’s initial proposal offers weak rules, based on fragile “Title I” authority. The proposal represents a significant departure from current law and precedent in this space, expanding into a new area of authority without establishing clear limits. This approach makes it likely that the rules will be overturned on appeal.

Our comments, like our earlier Petition, urge the FCC to change course from its proposed path, and instead use its “Title II” authority as a basis for real net neutrality protections. We recommended that the FCC modernize the agency’s approach to how Internet Service Providers (ISPs) provide Internet access service. Specifically, we asked the agency to define ISPs’ offerings to edge providers – companies like Dropbox and Netflix that offer valuable services to Internet users – as a separate service. We explained why such a service would need to fall under “Title II” authority, and how in using that basis, the FCC can adopt effective and enforceable rules prohibiting blocking, discrimination, and paid prioritization online, to protect all users, both wired and wireless.

In addition to reiterating support for Title II remote delivery classification, today’s comments address some questions that arose about our initial proposal over the past two months, such as:

• How the Mozilla petition addresses interconnection,
• How forbearance would work,
• How the services we describe can be “services” without direct payment, and
• How the FCC can prohibit paid prioritization under Title II.

Our comments also articulate our views on net neutrality rules:

• A clean rule prohibiting blocking is the most workable and sustainable approach, rather than complex level-of-service standards;
• Prohibiting unreasonable discrimination is more effective than weaker alternatives such as “commercially unreasonable practices”;
• Paid prioritization inherently degrades the open Internet; and
• Mobile access services should have the same protections as fixed.

Mozilla will continue engaging closely with policymakers and stakeholders on this issue, and we encourage you to make your voice heard as well, before the next deadline for reply comments on September 10th. Here are some easy ways to contact the FCC and members of Congress and tell them to take the necessary steps to protect net neutrality and all Internet users and developers.

Categories: Mozilla-nl planet

Rizky Ariestiyansyah: Webmaker Party Starts today! Hai Indonesia

Tue, 15/07/2014 - 15:45
Maker Party starts today! FYI, Maker Party is Mozilla’s global campaign to teach the web. Through thousands of community-run events around the world, Maker Party unites educators, organizations and enthusiastic web users with hands-on...
Categories: Mozilla-nl planet

Byron Jones: happy bmo push day!

Tue, 15/07/2014 - 08:40

the following changes have been pushed to bugzilla.mozilla.org:

  • [1029500] bug.attachments shouldn’t include attachment data by default
  • [1032323] canonicalise_query() should omit parameters with empty values so generated URLs are shorter
  • [1027114] When sending error to Sentry for webservice failures, we need to first scrub the username/login/password from the query string
  • [1026586] Using Fira as default font in Bugzilla
  • [1027182] merge-users.pl – SQL to remove bug_user_last_visit not correct
  • [1036268] REST webservice should return http/404 for invalid methods
  • [1027025] comment.creator has no real_name
  • [1036795] comment.raw_text is returned by the bzapi compatibility extension
  • [1036225] Return a link to the REST documentation in “method not found” errors
  • [1036301] change the description of the “bug id” field on bugmail filtering preferences tab to “new bug”
  • [1028269] Firefox OS Pre-load App Info Request Form
  • [1036303] add a list of tracking/project/etc tracking flags to the bugmail filtering prefs page

discuss these changes on mozilla.tools.bmo.


Filed under: bmo, mozilla
Categories: Mozilla-nl planet

Nicholas Nethercote: Poor battery life on the Flame device?

Tue, 15/07/2014 - 01:28

The new Firefox OS reference phone is called the Flame. I have one that I’m using as my day-to-day phone, and as soon as I got it I found that the battery life was terrible — I use my phone very lightly, but the battery was draining in only 24 hours or so.

It turns out this was because a kernel daemon called ksmd (kernel samepage merging daemon) was constantly running and using about 3–7% of CPU. I detected this by running the following command, which prints CPU usage stats for all running processes every five seconds.

adb shell top -m 5

ksmd doesn’t seem very useful for a device such as the Flame, and Alexandre Lissy kindly built me a new kernel with it disabled, which improved my battery life by at least 3x. Details are in the relevant bug.

It seems that plenty of other Flame users are not having problems with ksmd, but if your Flame’s battery life is poor it would be worth checking if this is the cause.

Categories: Mozilla-nl planet

Amy Tsay: The AMO Reviewer Community Turns 10

Mon, 14/07/2014 - 20:07

A decade ago, Firefox introduced the world to a customizable web browser. For the first time, you could use add-ons to personalize your entire browsing experience—from the look and feel of buttons, to tab behaviors, to content filtering. Anyone with coding skills could create an add-on and submit it to addons.mozilla.org (AMO) for others to use. The idea that you could experience the web on your own terms was a powerful one, and today, add-ons have been downloaded close to 4 billion times.

Each add-on listed on AMO is thoroughly reviewed to ensure its privacy and safety, and volunteer reviewers have shouldered much of this effort. To properly inspect an add-on, a reviewer has to dig into the code—a taxing and often thankless chore. Nobody notices when an add-on works as expected, but everybody notices when an add-on with a security flaw gets through. These reviewers are truly unsung heroes.

From the beginning, volunteers recognized the importance of reviewing add-ons, and self-organized on wiki pages. As add-ons grew in popularity, it became necessary to hire a few people out of this community to keep it organized and nurtured. Ten years later, volunteers are still responsible for about half of all add-on reviews (about 150 per week). Our top volunteer reviewer is approaching 9,000 reviews.

As a community manager working with volunteer reviewers, I’m sometimes asked what the secret is behind this enduring and resilient community. The secret is there isn’t just one thing. Anyone who’s ever tried giving away free food and booze as their primary community-building strategy has learned how quickly the law of diminishing returns kicks in.

What’s In It For Me?

To understand why people get involved with reviewing add-ons, and why they stay involved, you only have to understand human nature. Altruism tells just part of the story. People are often surprised when I tell them that many reviewers began volunteering for selfish reasons. They are add-on developers themselves, and wanted their add-ons to be reviewed faster.

Some of these developers authored add-ons that are used by tens of thousands, sometimes millions of people, so it’s important to be able to push out updates quickly. Since reviewers are not allowed to review their own add-ons, the only way to speed things up is to help burn down the queue. (Reviewers can also request expedited reviews of their add-ons.) Also, they can learn how other people make add-ons, which in turn helps them improve their own.

Intrinsic Motivation

People who create add-ons are people who write code, so the code itself can be interesting and intrinsically motivating. In Drive: The Surprising Truth About What Motivates Us, Daniel Pink writes that self-motivated work tends to be creative, challenging, and non-routine, and add-on reviewing has it all: every piece of code is different (creative), security flaws can be cleverly concealed (challenging), and reviewers contribute at their own pace (non-routine).

Not Just Carrots and Sticks

A few years ago, we began awarding points for add-on reviews and introduced a leaderboard that lets reviewers see their progress against other reviewers. The points could also be redeemed for swag as part of an incentive program.

While this is admittedly a carrot-and-stick approach to engaging contributors, it serves a larger purpose. By devoting time and resources to sending handwritten notes and small tokens, we are also sending the message that reviewers are important and appreciated. When you open your mailbox and there’s a FedEx package containing a special-edition t-shirt in your size, you know your efforts haven’t gone unnoticed.

Community and Responsibility

AMO reviewers know that they play an important role in keeping Firefox extensible, and that their work directly impacts the experience people have installing add-ons. Since about half of the hundreds of millions of Firefox users have add-ons installed, that is no small feat. I’ve heard from reviewers that they stick around because they like being part of a community of awesome people who are responsible for keeping add-ons safe to use in Firefox.

The Magic Formula

Online communities are complex, their fabric woven from a mesh of intrinsic and extrinsic, selfish and altruistic motivations. A healthy, lasting community benefits from a combination of these factors, in varying proportions, some of them driven by the community and some by the attentive community-builders tasked with nurturing it. There isn’t a silver bullet; rather, it’s about finding your own magic formula and knowing that often, the secret ingredient is whatever it is that makes us human.

Happy 10th birthday, AMO reviewers.


Categories: Mozilla-nl planet

Mark Côté: BMO mid-2014 update

Mon, 14/07/2014 - 19:42

Here’s your mid-year report from the offices, basements, and caverns of BMO!

Performance

This year we’re spending a lot of time on performance. As nearly everyone knows, Bugzilla’s an old Perl app from the early days of the Web, written way before all the technologies, processes, and standards of today were even dreamt of. Furthermore, Bugzilla (including BMO) has a very flexible extension framework, which makes broad optimizations difficult, since extensions can modify data at many points during the loading and transforming of data. Finally, Bugzilla has evolved a very fine-grained security system, crucial to an open organization like Mozilla that still has to have a few secrets, at least temporarily (for security and legal reasons, largely). This means lots of security checks when loading or modifying a bug—and, tangentially, it makes the business logic behind the UI pretty complex under the hood.

That said, we’ve made some really good progress, starting with retrofitting Bugzilla to use memcached, and then instrumenting the database and templating code to give us reams of data to analyze. Glob has lots of details in his post on BMO perf work; read it if you’re interested in the challenges of optimizing a legacy web app. The tl;dr is that BMO is faster than last year; our best progress has been on the server side of show_bug (the standard Bug view), which, for authenticated users, is about 15% faster on average than last year, with far fewer spikes.

Bugs updated since last visit

As part of an effort to improve developer productivity, in June we rolled out a feature that gives users a new way to track changes to bugs. BMO now notes when you visit a bug you’re involved in (when you load it in the main Bugzilla UI or otherwise perform actions on it), and any changes to that bug which occur since you last visited it will show up in a table in My Dashboard. Read more.

Bugmail filtering

Another improvement to developer productivity centred around notifications is the new bugmail filtering feature. Bugzilla sends out quite a lot of mail, and the controls for deciding when you want to receive a notification of a change to a bug have been pretty coarse-grained. This new feature is extremely customizable, so you can get only the notifications you really care about.

BzAPI compatibility

There have been several broad posts about this recently, but it’s worth repeating. The original Bugzilla REST API, known as BzAPI, is deprecated in favor of the new native REST API (on BMO at least; it isn’t yet available in any released version of the Bugzilla product). If possible, sites currently using BzAPI should be modified to use the new API (they are largely, but not entirely, compatible), but at a minimum they should be updated to use the new BzAPI compatibility layer, which is hosted directly on BMO and sits atop the new REST API. The compatibility layer should act almost exactly the same as BzAPI (the exceptions being that a few extra fields are returned in a small number of calls). At some point in the not-too-distant future, we’ll be (transparently) redirecting all requests to BzAPI to this layer and shutting down the BzAPI server, so it’s better to try to migrate now while the original BzAPI is still around, in case there are any lingering bugs in the compatibility layer.

More stuff

As usual, you can see our current goals and high-priority items for the quarter on the BMO wiki page.

Categories: Mozilla-nl planet

Doug Belshaw: Web Literacy 'maker' badges

Mon, 14/07/2014 - 18:34

(cross-posted from the Webmaker blog)


Introduction

To help with Maker Party (launching tomorrow!) we’ve been working on a series of Web Literacy ‘maker’ badges. These will be issued to those who can make digital artefacts related to one or more competencies on the Web Literacy Map.

Each Webmaker resources page for a competency (e.g. Navigation) is structured as:

  • Discover
  • Make
  • Teach

We’re not currently badging the ‘Discover’ level, and the ‘Teach’ level is currently covered by the Webmaker Mentor badge. These new ‘Make’ badges are our first badges specifically for web literacy.

How you can help

We’re planning to launch these badges at the end of July. Before we do so, we want to make sure the process works smoothly for everyone, for each badge. We’re also very much interested in your feedback on the whole process.

Here’s what to do. Go to the link below and follow the instructions. You’ll need to either make something related to one of the Web Literacy Map competencies, or link to something you’ve made before.

https://teach.etherpad.mozilla.org/weblit-make-badges-QA

Questions? Comments? I’m @dajbelshaw or you can email me at doug@mozillafoundation.org

Categories: Mozilla-nl planet

Mark Surman: The Instagram Effect: can we make app making easy?

Mon, 14/07/2014 - 18:04

Do you remember how hard digital photography used to be? I do. When my first son was born, I was still shooting film, scanning things in and manually creating web pages to show off a few choice pictures. By the time my second son was walking I had my first good digital camera. Things were better, but I still had to drag pictures onto a hard drive, bring them into Photoshop, painstakingly process them and then upload to Flickr. And then, seemingly overnight, we took a leap. Phones got good cameras. Photo processing right on the camera got dead simple. And Instagram happened. We rarely think about it, but: digital photography went from hard and expensive to cheap and ubiquitous in a very short period of time.

Mozilla on-device app making concept from MWC 2013 (Frog Design)

I want to make the same thing happen with mobile apps. Today: making a mobile app — or a complex interactive web page — is slow, hard and only for the brave and talented few. I want to make making a mobile app as easy as posting to Instagram.

At Mozilla, we’ve been talking about this for a while now. At Mobile World Congress 2013 we floated the idea of making it easy to make apps. And we’ve been prototyping a tool for making mobile apps in a desktop browser since last fall. We’ve built some momentum, but we have yet to solve two key problems: crafting a vision of app making that’s valuable to everyday people, and making app making easy on a phone.

We came one step closer to solving these problems last week in London. In partnership with the GSMA, we organized a design workshop that asked: What if anyone could make a mobile app? What would this unlock for people? And, more interestingly, what kind of opportunity and imagination would it create in places where large numbers (billions) of people are coming online for the first time using affordable smartphones? These are the right questions to be asking if we want to create an Instagram Effect for apps.


The London design workshop created some interesting case studies of why and how people would create and remix their own apps on their phones. A DJ in Rio who wants to gain fans and distribute her music. A dabbawalla in Mumbai who wants to grow and manage the list of customers he delivers food to. A teacher in Durban who wants to use her Google doc full of student records to recruit parents to combat truancy. All of these case studies pointed to problems that non-technical people could more easily solve for themselves if they could easily make their own mobile apps.

Over the next few months, Mozilla will start building on-device authoring for mobile phones and interactive web pages. The case studies we developed in London — and others we’ll be pulling together over the coming weeks — will go a long way towards helping us figure out what features and app templates to build first. As we get to some first prototypes, we’re going to get the Mozilla community around the world to test out our thinking via Maker Parties and other events.

At the same time, we’re going to be working on a broader piece of research on the role of locally generated content in creating opportunity for people in places where smartphones are just starting to take off. At the London workshop, we dug into this question with people from organizations like Equity Bank, Telefonica, USAID, EcoNet Wireless, Caribou Digital, Orange, Dalberg, and Vodafone. Working with GSMA, we plan to research this local content question and field test easy app making with partners like these over the next six months. I’ll post more soon about this partnership.


Filed under: education, mozilla, webmakers
Categories: Mozilla-nl planet

David Burns: WebDriver F2F - London 2014

Mon, 14/07/2014 - 16:15

Last week saw the latest face-to-face meeting of the WebDriver Working Group, held at Facebook. This meeting was important as it is hopefully the last face-to-face before we go to Last Call, allowing us to concentrate on issues that come up during that period.

This meeting was really useful, as we had a number of discussions around the prose of the spec when it comes to conformance and usability, especially for implementors who have never worked on WebDriver.

The Agenda from the meeting can be found here

The notable items that were discussed are:

  • Merge getLocation and getSize into a single call, getElementRect. This has already been implemented in FirefoxDriver
  • Describe restrictions around localhost in security section
  • How the conformance tests will look (Microsoft have a huge raft of tests they are cleaning up and getting ready to upstream!)
  • Actions has been tweaked from the original straw man delivered by Mozilla; hopefully you’ll see the new version in the next few weeks.

To read what was discussed you can see the notes for Monday and Tuesday.

Categories: Mozilla-nl planet

Mozilla Reps Community: Rep Of The Month : June 2014 – Shreyas Narayanan Kutty

Mon, 14/07/2014 - 12:39

Shreyas Narayanan Kutty came to Reps as an already inspirational leader and role model in the Firefox Student Ambassadors program. In addition to organizing a number of successful MozCafes, Shreyas has led a charge to empower kids on the web through the Webmaker initiative ‘Kidzilla’ and a longer-term call to action in schools to start Webmaker Clubs.

Shreyas has inspired others in his community and across the world with blog posts and photos and a teaching kit which have been featured in Mozilla publications.

In addition to his FSA and Reps contributions, Shreyas has been a key participant in Hive India and, most recently, Mozcamp Beta, where his Popcorn video ‘I am Mozillian’, featuring 19 different states of India, stole the show.

See past featured Reps.

Categories: Mozilla-nl planet

David Burns: Bugsy 0.3.0 - Comments!

Mon, 14/07/2014 - 12:07

I have just released the latest version of Bugsy. This allows you to get comments from Bugzilla and add new comments too. This API is still experimental, so please send feedback, since I may change it based on real-world usage.

I have updated the documentation to get you started.

>>> comments = bug.get_comments()
>>> comments[0].text
"I <3 Cheese"
>>> bug.add_comment("And I love bacon")

You can see the Changelog for more details.

Please raise issues on GitHub

Categories: Mozilla-nl planet

Aaron Train: Proxy Server Testing in Firefox for Android

Mon, 14/07/2014 - 09:00

Recent work on standing up a proxy server for web browsing in Firefox for Android is now ready for real-world testing. Eugen, Sylvain, and James from the mobile platform team have been working towards the goal of building a proxy server to ultimately increase privacy (via a secure connection), reduce bandwidth usage, and improve latency. Reduced page load times are also a high-level goal. A detailed wiki page is available at: https://wiki.mozilla.org/Mobile/Janus

The time for testing is now.

How to Help
  • Install the proxy configuration add-on (development server; available here) in Firefox for Android
  • Browse as you normally would (try both your mobile network and WiFi connections)
  • File bugs in GitHub (make sure to compare with the proxy enabled and disabled)
  • Talk to us on IRC
Categories: Mozilla-nl planet

Leo McArdle: Letter to my MP on DRIP

Sun, 13/07/2014 - 16:25

What follows is a copy of the email I just sent to my MP about the Data Retention and Investigatory Powers Bill (DRIP). I urge you to send a similar email right now.

Dear Robin Walker,

I have no doubt that by now you will have heard of the Data Retention and Investigatory Powers Bill (DRIP) which your Government and the Opposition will try to rail-road through Parliament next week. I also have no doubt that you will have heard of the great deal of criticism surrounding this bill, both from your colleagues within Westminster hailing from all parties, such as David Davis MP and Tom Watson MP, and those outside of Westminster, such as Jim Killock of the Open Rights Group.

In April the European Court of Justice (ECJ) ruled that the Data Retention Directive (DRD) was incompatible with the Charter of Fundamental Rights of the European Union and therefore that the 2006 act enabling the DRD in the UK was a breach of Human Rights. This means what was, and still is, the status quo when it comes to forcing companies to store data on their customers is a breach of fundamental Human Rights. This is the same status quo which the Home Secretary has said that DRIP merely retains. I think it is clear to see why I, and others, have such a problem with DRIP.

The ECJ ruling outlined some very clear ways in which the DRD could be made compatible with Human Rights law, by saying that this cannot be done on a blanket basis and that someone independent must supervise police access. These fundamental points are missing from DRIP.

Furthermore, DRIP goes far further than just retaining the status quo. It makes sweeping amendments to the Regulation of Investigatory Powers Act (RIPA) including the expansion of what a communications service provider is, the extension of these powers to outside the UK and an open door to allow the Government to make new regulations about data retention at will, without the need to debate them fully in Parliament. I am sure you agree that such huge amendments to RIPA need to be subject to full Parliamentary scrutiny.

It is perfectly clear to everybody, including you, I am sure, Mr Walker, that the Government is using the ECJ ruling as a pretext to force through, at great speed, legislation which affects Human Rights, without proper scrutiny or deliberation. The ECJ ruling was in April, and many warned as far back as 2006 that the DRD was flawed. The UK Government has had years to prepare for the DRD being struck down. There is no reason for this emergency legislation, other than to try and sneak sweeping changes under the noses of MPs who have been allowed to go on holiday.

Wherever you stand on where the balance should be between State Security and Civil Liberties (and I would not be surprised if we stand on opposite ends of that balance), you must agree that five days is nowhere near enough time to properly debate and represent all the views on this issue.

It is for this reason that I urge you as my elected representative to vote against DRIP, and do everything you can to urge your colleagues to do the same. At the very least, could you please push for a highly amended bill, with all the sections amending RIPA removed, which serves purely as a stopgap, not for a period of two years, but for a maximum of six months. We need to have this debate now, and not pass the buck on to the next Government in 2016, who will surely pass the buck on again.

In 2015 I will get my first opportunity to vote in a General Election, and while I may feel that this Government has done devastating things to this country, you, Mr Walker, may be able to differentiate yourself from a sea of blue if you stand up for Civil Liberties and Human Rights.

Yours sincerely,
Leo McArdle

Categories: Mozilla-nl planet

Nick Cameron: Rust for C++ programmers - part 8: destructuring

Sun, 13/07/2014 - 06:13
First an update on progress. You probably noticed this post took quite a while to come out. Fear not, I have not given up (yet). I have been busy with other things, and there is a section on match and borrowing which I found hard to write and it turns out I didn't understand very well. It is complicated and probably deserves a post of its own, so after all the waiting, the interesting bit is going to need more waiting. Sigh.

I've also been considering the motivation of these posts. I really didn't want to write another tutorial for Rust, I don't think that is a valuable use of my time when there are existing tutorials and a new guide in the works. I do think there is something to be said for targeting tutorials at programmers with different backgrounds. My first motivation for this series of posts was that a lot of energy in the tutorial was expended on things like pointers and the intuition of ownership which I understood well from C++, and I wanted a tutorial that concentrated on the things I didn't know. That is hopefully where this has been going, but it is a lot of work, and I haven't really got on to the interesting bits. So I would like to change the format a bit to be less like a tutorial and more like articles aimed at programmers who know Rust to some extent, but know C++ a lot better and would like to bring up their Rust skills to their C++ level. I hope that complements the existing tutorials better and is more interesting for readers. I still have some partially written posts in the old style so they will get mixed in a bit. Let me know what you think of the idea in the comments.

Destructuring
Last time we looked at Rust's data types. Once you have some data structure, you will want to get that data out. For structs, Rust has field access, just like C++. For tuples, tuple structs, and enums you must use destructuring (there are various convenience functions in the library, but they use destructuring internally). Destructuring of data structures doesn't happen in C++, but it might be familiar from languages such as Python or various functional languages. The idea is that just as you can create a data structure by filling out its fields with data from a bunch of local variables, you can fill out a bunch of local variables with data from a data structure. From this simple beginning, destructuring has become one of Rust's most powerful features. To put it another way, destructuring combines pattern matching with assignment into local variables.

Destructuring is done primarily through the let and match statements. The match statement is used when the structure being destructured can have different variants (such as an enum). A let expression pulls the variables out into the current scope, whereas match introduces a new scope. To compare:
fn foo(pair: (int, int)) {
    let (x, y) = pair;
    // we can now use x and y anywhere in foo

    match pair {
        (x, y) => {
            // x and y can only be used in this scope
        }
    }
}
The syntax for patterns (used after `let` and before `=>` in the above example) in both cases is (pretty much) the same. You can also use these patterns in argument position in function declarations:
fn foo((x, y): (int, int)) {
}
(Which is more useful for structs or tuple-structs than tuples).

Most initialisation expressions can appear in a destructuring pattern and they can be arbitrarily complex. That can include references and primitive literals as well as data structures. For example,
struct St {
    f1: int,
    f2: f32
}

enum En {
    Var1,
    Var2,
    Var3(int),
    Var4(int, St, int)
}

fn foo(x: &En) {
    match x {
        &Var1 => println!("first variant"),
        &Var3(5) => println!("third variant with number 5"),
        &Var3(x) => println!("third variant with number {} (not 5)", x),
        &Var4(3, St{ f1: 3, f2: x }, 45) => {
            println!("destructuring an embedded struct, found {} in f2", x)
        }
        &Var4(_, x, _) => {
            println!("Some other Var4 with {} in f1 and {} in f2", x.f1, x.f2)
        }
        _ => println!("other (Var2)")
    }
}
Note how we destructure through a reference by using `&` in the patterns and how we use a mix of literals (`5`, `3`, `St { ... }`), wildcards (`_`), and variables (`x`).

You can use `_` wherever a variable is expected if you want to ignore a single item in a pattern, so we could have used `&Var3(_)` if we didn't care about the integer. In the first `Var4` arm we destructure the embedded struct (a nested pattern) and in the second `Var4` arm we bind the whole struct to a variable. You can also use `..` to stand in for all fields of a tuple or struct. So if you wanted to do something for each enum variant but don't care about the content of the variants, you could write:
fn foo(x: En) {
    match x {
        Var1 => println!("first variant"),
        Var2 => println!("second variant"),
        Var3(..) => println!("third variant"),
        Var4(..) => println!("fourth variant")
    }
}

When destructuring structs, the fields don't need to be in order and you can use `..` to elide the remaining fields. E.g.,
struct Big {
    field1: int,
    field2: int,
    field3: int,
    field4: int,
    field5: int,
    field6: int,
    field7: int,
    field8: int,
    field9: int,
}

fn foo(b: Big) {
    let Big { field6: x, field3: y, ..} = b;
    println!("pulled out {} and {}", x, y);
}
As a shorthand with structs, you can use just the field name, which creates a local variable with that name. The let statement in the above example creates two new local variables, `x` and `y`. Alternatively, you could write
fn foo(b: Big) {
    let Big { field6, field3, ..} = b;
    println!("pulled out {} and {}", field3, field6);
}
Now we create local variables with the same names as the fields, in this case `field3` and `field6`.

There are a few more tricks to Rust's destructuring. Let's say you want a reference to a variable in a pattern. You can't use `&` because that matches a reference rather than creating one (and thus has the effect of dereferencing the object). For example,
struct Foo {
    field: &'static int
}

fn foo(x: Foo) {
    let Foo { field: &y } = x;
}
Here, `y` has type `int` and is a copy of the field in `x`.

To create a reference to something in a pattern, you use the `ref` keyword. For example,
fn foo(b: Big) {
    let Big { field3: ref x, ref field6, ..} = b;
    println!("pulled out {} and {}", *x, *field6);
}
Here, `x` and `field6` both have type `&int` and are references to the fields in `b`.

One last trick when destructuring: if you are destructuring a complex object, you might want to name intermediate objects as well as individual fields. Going back to an earlier example, we had the pattern `&Var4(3, St{ f1: 3, f2: x }, 45)`. In that pattern we named one field of the struct, but you might also want to name the whole struct object. You could write `&Var4(3, s, 45)`, which would bind the struct object to `s`, but then you would have to use field access for the fields, or, if you wanted to match only a specific value in a field, you would have to use a nested match. That is not fun. Rust lets you name parts of a pattern using the `@` syntax. For example, `&Var4(3, s @ St{ f1: 3, f2: x }, 45)` lets us name both a field (`x`, for `f2`) and the whole struct (`s`).
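As an illustration of `@` bindings, here is a minimal sketch in present-day Rust syntax (`i32` instead of the old `int`, fully qualified variant names); the `describe` helper, the extra `Other` variant, and the output strings are my own invention:

```rust
#[derive(Debug)]
struct St {
    f1: i32,
    f2: i32,
}

enum En {
    Var4(i32, St, i32),
    Other,
}

fn describe(x: &En) -> String {
    match x {
        // `s @ pattern` binds the whole struct to `s` while the inner
        // pattern still requires `f1 == 3` and binds `f2` to `y`.
        En::Var4(3, s @ St { f1: 3, f2: y }, 45) => {
            format!("f2 is {}, whole struct is {:?}", y, s)
        }
        En::Var4(..) => "some other Var4".to_string(),
        En::Other => "other".to_string(),
    }
}
```

Note that without the `s @` part you would have to choose between binding the whole struct and matching its fields; `@` gives you both at once.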

That just about covers your options with Rust pattern matching. There are a few features I haven't covered, such as matching vectors, but hopefully you now know how to use `match` and `let` and have seen some of the powerful things you can do. Next time I'll cover some of the subtle interactions between `match` and borrowing, which tripped me up a fair bit when learning Rust.
Categorieën: Mozilla-nl planet

Anthony Ricaud: Adopting Omnifocus and GTD

za, 12/07/2014 - 17:07

I've tried to adopt the Getting Things Done method a few times already. Every time, it wasn't a success: I wasn't applying most of the principles and fell back to noting things down on scraps of paper. This time, I had a huge advantage: at work, I'm sitting next to Étienne, a big proponent of GTD. He inspired me to try again and answered a lot of questions I had during my adoption.

This time, I chose Omnifocus for my GTD experiment. It's a bit expensive to buy all three flavours, but I was committed. I'll be talking about my experience via Omnifocus, but you should not focus too much on the software: you can adopt GTD with paper, with other software, whatever works for you.

Capturing

In January, I started the capture part. That's when you note down in your GTD system everything you'd like to do. You need to build that habit and do it every time something pops into your head. I use three main methods to collect:

  1. When I'm in front of my computer, I use the ^⌥Space shortcut to open the Quick Entry panel
  2. When I'm not in front of my computer, I use the iPod Touch app
  3. When an email requires some action, I send a message to the mail drop address

I ended up with a huge inbox, but I was OK with that because I knew collecting was the first part to get right. There is a big relief in knowing that everything you need or want to do is explicitly written down somewhere. You're no longer afraid of forgetting something.

Capturing your thoughts like this also allows you to stay focused on the current task. You don't have to do that new task right now, and you don't have to explore that idea yet. Just trust the system to remind you of it later.

To start, you may also want to do a mind sweep: sit down in front of a piece of paper, no distractions, for half an hour, and write down everything that comes to mind.

Process

Once you have this exhaustive list of things you want to do, you process it into contexts and projects. You also flag the items you deem important and add important dates to those tasks. I only started doing this in mid-January. The tricky part for me was creating the projects and contexts.

Contexts

In GTD, contexts are the things you need to achieve a task: a location, a person, or an object. I'm not really using contexts because most of the time I just need to be in front of my computer to accomplish work-related tasks. I may need to tweak this again, but for now I don't feel the need to dive deeper into that area.

My contexts:

  • Errands: When I'm neither at home nor at work
  • Home: I don't have an office context because I can work from anywhere. I have a few tasks that require me to be in an office (like printing) but not enough to warrant a full context.
  • People: A nested list of some people and also a phone context
  • Technology: This is where you'll find most of my tasks. I have a nested email folder.
  • Waiting: When I'm waiting on something else to happen.
Projects

Let me give you three examples of real projects:

Fixing a bug

I try to do this a lot :) So I have a template project that I copy when I intend to work on a bug. This is a sequential project, meaning I need to achieve a task before the next one is available.

  1. Find a fix: Well that sounds dumb but this is my first step
  2. Write tests: Even though I may write the tests as I fix the problem, I still keep this reminder to make sure I wrote enough tests
  3. Test on a phone: I will certainly have done this while developing but for small fixes that look obvious, I have been bitten by not testing on a real phone. Hence this reminder.
  4. Put in review: Uploading my patch and explaining my fix.
  5. Wait for review: This is in a waiting context so I can forget about this project until I receive an email about that review. If it's not good, I'll add a task for each comment to address.
  6. Wait for green tests: In a waiting context too because you shouldn't land something if the tests are not green.
  7. Land patch and clean branches: When all is good, I can land my work. This is usually where I'll clean the branches I had to create.
  8. Close bug with link to commit: This is the last step so that people can track the work later.
Feedback on Openweb articles

Nicolas Hoffmann, a tireless worker, wrote a few articles on modern CSS practices for the OpenWeb group. I told him I wanted to read them and provide some feedback, but I have no idea when I'll get around to it. So I created one task per article. It's off my mind, but I know I'll do it one day because I have the reminder.

Birthday ideas

This is not a project per se. But when someone talks about a topic they like, I try to take a note of it. Then during the review process, I mark it as due a few days before the actual birthday.

In addition to these kinds of projects, I have a few projects called "Work :: Miscellaneous" or "Personal :: Miscellaneous". Those are just things I need to do that don't really fit in a project.

Flags, deferred dates and due dates

This is how I have things pop up for attention. I try to use due dates as little as possible, because otherwise one day you end up with 10 urgent things to do and you get stuck. So only tasks that have a hard deadline (like filing taxes) get a due date.

I use flags for the tasks that are important but without a real deadline. During my weekly review (see below), I'll flag things that I want to do next week.

The capture phase was really refreshing because I knew everything was stored somewhere. With the processing phase, it's even more relaxing because I know the system will remind me when I need to do something. That completely changed how I deal with feeling overwhelmed. Before, I had this blurry collection of things to do in my head. They were all colliding, and I had no sense of what was important or whether I was forgetting something that mattered to me. Now, when I feel overwhelmed, I know it just means I need to spend a bit of time in front of Omnifocus processing my inbox.

Review

In February, I started doing reviews more often: first every two weeks, and now every week. This is another step that gives me a great deal of comfort. This is when I decide what I want to do next week and put flags or due dates on things I consider important for the coming week. I will also delete some items that I don't feel like doing anymore.

Do!

And this is the biggest part of GTD: actually doing stuff. If you spend all that time in a tool organising your tasks, it shouldn't be for its own sake. That's why I adopted it gradually, so as not to spend too much time chasing the perfect workflow.

I'm really happy with my adoption of the GTD method. It's not perfect, I'm still tweaking here and there.

I encourage you to try it. Reach out to me if you'd like to discuss it, I'd be happy to!


Nigel Babu: Jinxed!

za, 12/07/2014 - 14:10

A couple of weeks ago, I requested L3 access as part of my Sheriffing work and my request was granted. I think I’ve totally jinxed things since then ;)

The tree. IT'S BURNING!

The first Sunday afterward, we had a patch land on Aurora that inadvertently caused a massive spike in crashes. I saw it myself and suspected that my copy was corrupt, so I downloaded the latest build. Of course, to no avail. I finally noticed the right bug; Kairo was looking for someone to back it out. I backed it out and triggered a rebuild, which fixed the issue.

The next Saturday, we had mobile imaging failures. This one was fun to fix; I talked to Nick Thomas and Chris Cooper on the phone. All it needed was one command, but it took us some time to get there :-) But hey, it got me mentioned under Friends of Mozilla.

Having more access to fix things somehow makes me feel responsible!


Nigel Babu: Training in Tanzania

za, 12/07/2014 - 13:50

On the last Monday of April, I found myself nervously standing in a room of about 15 people from the e-Government Agency and National Bureau of Statistics in Dar es Salaam. They were waiting for me to start training them in Python and CKAN. I've been programming in Python since 2011, but I've never actually trained people in Python. On the first day, I didn't have any slides. All I had was one PDF from Wikibooks, which I was using as material. I didn't even cover the whole thing. By the end of the day, though, I could sense that it was starting to sink in for the attendees.

It all started with an email from my manager asking if I was available to do a training in Tanzania in April. After lots of back and forth, we finalized a date and a trainer to assist with the trainings, and I flew in. Dar es Salaam, strangely, reminded me of growing up in Salalah. I got in a day early to prep for the week and settle in. A trainer looking groggy on a Monday does not bode well!

People who train often don't tell you this: training is exhausting. You're likely to be on your feet all day, walking around the room helping people who're lagging behind. Looking back, the training was both fun and tiring. I enjoyed talking about Python, though I feel like I need more practice to do it well. As for the CKAN training, I was pretty satisfied with the outcome: by the end of the week, the folks from the e-Gov Agency went and set up a server with CKAN!

Note to self: Write these posts immediately after the trip before I forget :-)


Armen Zambrano: Introducing HTTP authentication for Mozharness

vr, 11/07/2014 - 21:42
A while ago, I asked a colleague (you know who you are! :P) of mine how to run a specific type of test job on tbpl on my local machine and he told me with a smirk, "With mozharness!"

I wanted to punch him (HR: nothing to see here! Not a literal punch, a figurative one); however, he was right. He had good reason to say it, and I knew why he was smiling. I had to close my mouth and take it.

Here's why he said that: most jobs running on tbpl are driven by Mozharness, but they're optimized to run within the protected network of Release Engineering. This is good. This is safe. This is sound. However, when we try to reproduce a job outside of the Releng network, it becomes problematic for various reasons.

Many times we have had to guide people who are unfamiliar with mozharness as they try to run it locally (docs: How to run Mozharness as a developer). On other occasions, however, when it comes to binaries stored on private web hosts, it becomes necessary to loan a machine. A loaned machine can reach those files through internal domains since it is hosted within the Releng network.

Today, I have landed a piece of code that does two things:
  • Allows HTTP authentication to download files behind LDAP credentials
  • Changes URLs to point to publicly reachable domains
This change, plus the recently-introduced developer configs for Mozharness, makes it much easier to run mozharness outside of continuous integration infrastructure.
I hope this will help developers have a better experience reproducing the environments used in the tbpl infrastructure. One less reason to loan a machine!

This makes me *very* happy (see below) since I don't have VPN access anymore.



Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Armen Zambrano: Using developer configs for Mozharness

vr, 11/07/2014 - 21:15
To help developers run mozharness, I have landed some configs that can be appended to the command that appears on tbpl.
All you have to do is:
  • Find the mozharness script line in a log from tbpl (search for "script/scripts")
  • Look for the --cfg parameter and add it a second time, with the value ending in "_dev.py"
    • e.g. --cfg android/androidarm.py --cfg android/androidarm_dev.py
  • Also add the --installer-url and --test-url parameters as explained in the docs
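Put together, an invocation might look like the template below. The script path and both URLs are placeholders to be copied from your own tbpl log; only the `--cfg`, `--installer-url`, and `--test-url` flags come from the steps above:

```shell
# Hypothetical template -- fill in the script and URLs from the tbpl log
# (the "scripts/scripts" line gives the real script path).
python scripts/<harness_script>.py \
    --cfg android/androidarm.py \
    --cfg android/androidarm_dev.py \
    --installer-url <installer-url> \
    --test-url <test-url>
```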
Developer configs have these things in common:
  • They have the same name as the production one but instead end in "_dev.py"
  • They overwrite the "exes" dict with an empty dict
    • This allows you to use the binaries in your personal $PATH
  • They overwrite the "default_actions" list
    • The main reason is to remove the action called read-buildbot-configs
  • They fix URLs to point to publicly reachable domains
Here are the currently available developer configs. You can help by adding more of them!
Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Kent James: The Thunderbird Tree is Green!

vr, 11/07/2014 - 21:05

For the first time in a while, the Thunderbird build tree is all green. That means that all platforms are building and passing all tests:

The Thunderbird build tree is green!

Many thanks to Joshua Cranmer for all of his hard work to make it so!

