Planet Mozilla
Updated: 6 hours 41 min ago

Jen Kagan: day 16: helpful git things

Mon, 13/06/2016 - 20:54

it’s been important for me to get comfortable-ish with git. i’m slowly learning about best practices on a big open source project that’s managed through github.

one example: creating a separate branch for each feature i work on. in the case of min-vid, this means i created one branch to add support, a different branch to add to the project’s README, a different branch to work on vine support, etc. that way, if my changes aren’t merged into the main project’s master, i don’t have to re-clone the project. i just keep working on the branch or delete it or whatever. this also lets me bounce between different features if i get stuck on one and need to take a break by working on another one. i keep the workflow on a post-it on my desktop so i don’t have to think about it (a la atul gawande’s so good checklist manifesto):

git checkout master

git pull upstream master
(to get new changes from the main project’s master branch)

git push origin master
(to push new changes up to my own master branch)

git checkout -b [new branch]
(to work on a feature)

npm run package
(to package the add-on before submitting the PR)

git add .

git commit -m '[commit message]'

git push origin [new branch]
(to push my changes to my feature branch; from here, i can submit a PR)

git checkout master

another important git practice: squashing commits so my pull request doesn’t include 1000 commits that muddy the project history with my teensy changes. this is the most annoying thing ever and i always mess it up and i can’t even bear to explain it because this person has done a pretty good job already. just don’t ever ever forget to REBASE ON TOP OF MASTER, people!
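
for reference, here's the squash dance sketched out in a throwaway repo (everything here, the repo name, the file, the commit messages, is made up; this uses git reset --soft instead of an interactive rebase, which gets the same result for the simple case):

```shell
# throwaway demo: make four tiny commits, then squash the last three into one.
# (on a real feature branch: git fetch upstream && git rebase upstream/master FIRST!)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo-repo
cd demo-repo
git config user.email "you@example.com"
git config user.name "you"

for i in 1 2 3 4; do
  echo "change $i" >> notes.txt
  git add notes.txt
  git commit -q -m "wip commit $i"
done

git rev-list --count HEAD   # 4 commits so far

# squash the last three commits into a single commit:
git reset --soft HEAD~3     # drops the commits but keeps their changes staged
git commit -q -m "add feature (squashed)"

git rev-list --count HEAD   # now 2
```

after that, a `git push --force-with-lease origin [new branch]` updates the PR (force is needed because the history changed).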

last thing, which has been more important on my side project that i’m hosting on gh-pages: updating my gh-pages branch with changes from my master branch. this is crucial because the gh-pages branch, which displays my website, doesn’t automatically incorporate changes i make to my index.html file on my master branch. so here’s the workflow:

git checkout master
(work on stuff on the master branch)

git add .

git commit -m '[commit message]'

git push origin master
(the previous commands push your changes to your master branch. now, to update your gh-pages branch:)

git checkout gh-pages

git merge master

git push origin gh-pages

yes, that’s it, the end, congrats!

p.s. that all assumes that you already created a gh-pages branch to host your website. if you haven’t and want to, here’s how you do it:

git checkout master
(work on stuff on the master branch)

git add .

git commit -m '[message]'

git push origin master
(same as before. this is just normal, updating-your-master-branch stuff. so, next:)

git checkout -b gh-pages
(-b creates a new branch, gh-pages names the new branch “gh-pages”)

git push origin gh-pages
(this pushes your new local gh-pages branch up to origin, creating origin/gh-pages)

yes, that’s it, the end, congrats!

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 134

Mon, 13/06/2016 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us an email! Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

This week's edition was edited by: nasa42 and llogiq.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Crate of the Week

This week's Crate of the Week is petgraph, which provides graph structures and algorithms. Thanks to /u/diwic for the suggestion!

Submit your suggestions for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

110 pull requests were merged in the last two weeks.

New Contributors
  • Andrew Brinker
  • Chris Tomlinson
  • Hendrik Sollich
  • Horace Abenga
  • Jacob Clark
  • Jakob Demler
  • James Alan Preiss
  • James Lucas
  • Joachim Viide
  • Mark Côté
  • Mathieu De Coster
  • Michael Necio
  • Morten H. Solvang
  • Wojciech Nawrocki
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email Erick Tryzelaar or Brian Anderson for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Isn’t rust too difficult to be widely adopted?

I believe in people.

Steve Klabnik on TRPLF

Thanks to Steven Allen for the suggestion.

Submit your quotes for next week!


The Servo Blog: This Week In Servo 67

Mon, 13/06/2016 - 02:30

In the last week, we landed 85 PRs in the Servo organization’s repositories.

That number is a bit low this week, due to some issues with our CI machines (especially the OSX boxes) that have hurt our landing speed. Most of the staff are in London this week for the Mozilla All Hands meeting, but we’ll try to look at it.

Planning and Status

Our overall roadmap and quarterly goals are available online.

This week’s status updates are here.

Notable Additions
  • glennw upgraded our GL API usage to rely on more GLES3 features
  • ms2ger removed some usage of transmute
  • nox removed some of the dependencies on crates that are very fragile to rust nightly changes
  • nox reduced the number of fonts that we load unconditionally
  • larsberg added the ability to open web pages in Servo on Android
  • anderco fixed some box shadow issues
  • ajeffrey implemented the beginnings of the top level browsing context
  • izgzhen improved the implementation and tests for the file manager thread
  • edunham expanded the ./mach package command to handle desktop platforms
  • daoshengmu implemented TexSubImage2d for WebGL
  • pcwalton fixed an issue with receiving mouse events while scrolling in certain situations
  • danlrobertson continued the quest to build Servo on FreeBSD
  • manishearth reimplemented XMLHttpRequest in terms of the Fetch specification
  • kevgs corrected a spec-incompatibility in Document.defaultView
  • fduraffourg added a mechanism to update the list of public suffixes
  • farodin91 enabled using WindowProxy types in WebIDL
  • bobthekingofegypt prevented some unnecessary echoes of websocket quit messages
New Contributors

There were no new contributors this week.

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


No screenshots this week.


Air Mozilla: Hackathon Open Democracy Now Day 2

Sat, 11/06/2016 - 09:30

Opening hackathon of the Futur en Seine 2016 festival, on the theme of Civic Tech.


Robert O'Callahan: Some Dynamic Measurements Of Firefox On x86-64

Sat, 11/06/2016 - 05:25

This follows up on my previous measurements of static properties of Firefox code on x86-64 with some measurements of dynamic properties obtained by instrumenting code. These are mostly for my own amusement but intuitions about how programs behave at the machine level, grounded in data, have sometimes been unexpectedly useful.

Dynamic properties are highly workload-dependent. Media codecs are more SSE/AVX intensive than regular code, so if you do nothing but watch videos you'd expect qualitatively different results than if you just load Web pages. I used a mixed workload that starts Firefox (multi-process enabled, optimized build), loads the NZ Herald, scrolls to the bottom, loads an article with a video, plays the video for several seconds, then quits. It ran for about 30 seconds under rr and executed about 60 billion instructions.

I repeated my register usage analysis, this time weighted by dynamic execution count and taking into account implicit register usage such as push using rsp. The results differ significantly depending on whether you count the consecutive iterations of a repeated string instruction (e.g. rep movsb) as a single instruction execution or as one instruction execution per iteration, so I show both. Unlike the static graphs, these results are for all instructions executed anywhere in the process(es), including JITted code, not just libxul.

  • As expected, registers involved in string instructions get a big boost when you count string instruction repetitions individually. About 7 billion of the 64 billion instruction executions "with string repetitions" are iterations of string instructions. (In practice Intel CPUs can optimize these to execute 64 iterations at a time, under favourable conditions.)
  • As expected, sp is very frequently used once you consider its implicit uses.
  • String instructions aside, the dynamic results don't look very different from the static results. Registers R8 to R11 look a bit more used in this graph, which may be because they tend to be allocated in highly optimized leaf functions, which are more likely to be hot code.

  • The surprising thing about the results for SSE/AVX registers is that they still don't look very different to the static results. Even the bottom 8 registers still aren't frequently used compared to most general-purpose registers, even though I deliberately tried to exercise codec code.
  • I wonder why R5 is the least used bottom-8 register by a significant margin. Maybe these results are dominated by a few hot loops that by chance don't use that register much.

I was also interested in exploring the distribution of instruction execution frequencies:

A dot at position x, y on this graph means that fraction y of all instructions executed at least once is executed at most x times. So, we can see that about 19% of all instructions executed are executed only once. About 42% of instructions are executed at most 10 times. About 85% of instructions are executed at most 1000 times. These results treat consecutive iterations of a string instruction as a single execution. (It's hard to precisely define what it means for an instruction to "be the same" in the presence of dynamic loading and JITted code. I'm assuming that every execution of an instruction at a particular address in a particular address space is an execution of "the same instruction".)

Interestingly, the five most frequently executed instructions are executed about 160M times. Those instructions are for this line, which is simply filling a large buffer with 0xff000000. gcc is generating quite slow code:

132e7b2: cmp %rax,%rdx
132e7b5: je 132e7d1
132e7b7: movl $0xff000000,(%r9,%rax,4)
132e7bf: inc %rax
132e7c2: jmp 132e7b2

That's five instructions executed for every four bytes written. This could be done a lot faster in a variety of different ways --- rep stosd or rep stosq would probably get the fast-string optimization, but SSE/AVX might be faster.

Anthony Hughes: London Calling

Sat, 11/06/2016 - 02:08

I’m happy to share that I will be hosting my first ever All-hands session, Graphics Stability – Tools and Measurements (12:30pm on Tuesday in the Hilton Metropole), at the upcoming Mozilla All-hands in London. The session is intended to be a conversation about taking a more collaborative approach to data-driven decision-making as it pertains to improving Graphics stability.

I will begin by presenting how the Graphics team is using data to tackle the graphics stability problem, reflecting on the problem at hand and the various approaches we’ve taken to date. My hope is this serves as a catalyst to lively discussion for the remainder of the session, resulting in a plan for more effective data-driven decision-making in the future through collaboration.

I am extending an invitation to those outside the Graphics team, to draw on a diverse range of backgrounds and expertise. As someone with a background in QA, data analysis is an interesting diversion (some may call it an evolution) in my career — it’s something I just fell into after a fairly lengthy and difficult transitional period. While I’ve learned a lot recently, I am an amateur data scientist at best and could certainly benefit from more developed expertise.

I hope you’ll consider being a part of this conversation with the Graphics team. It should prove to be both educational and insightful. If you cannot make it, not to worry, I will be blogging more on this subject after I return from London.

Feel free to reach out to me if you have questions.


Karl Dubost: [worklog] Edition 025. Forest and bugs

Fri, 10/06/2016 - 12:00

In France, in the forest, listening to the sound of leaves on oak trees, in between bugs and preparing for London work week. Tune of the week: Working Class Hero.

Webcompat Life

Progress this week:

Today: 2016-06-14T06:14:05.461238
338 open issues
----------------------
needsinfo        4
needsdiagnosis   92
needscontact     23
contactready     46
sitewait         161
----------------------

You are welcome to participate

London agenda.

Webcompat issues

(a selection of some of the bugs worked on this week)

dev
  • I'm wondering if we are not missing a step once we have contacted a Web site.

Reading List

Follow Your Nose

TODO
  • Document how to write tests using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.



Air Mozilla: Hackathon Open Democracy Now

Fri, 10/06/2016 - 09:30

Opening hackathon of the Futur en Seine 2016 festival, on the theme of Civic Tech.


Robert O'Callahan: Are Dynamic Control-Flow Integrity Schemes Worth Deploying?

Fri, 10/06/2016 - 05:45

Most exploits against C/C++ code today rely on hijacking CPU-level control flow to execute the attacker's code. Researchers have developed schemes to defeat such attacks based on the idea of control flow integrity: characterize a program's "valid control flow", and prevent deviations from valid control flow at run time. There are lots of CFI schemes, employing combinations of static and dynamic techniques. Some of them don't even call themselves CFI, but I don't have a better term for the general definition I'm using here. Phrased in this general way, it includes control-transfer instrumentation (CCFIR etc), pointer obfuscation, shadow stacks, and even DEP and ASLR.

Vendors of C/C++ software need to consider whether to deploy CFI (and if so, which scheme). It's a cost/benefit analysis. The possible benefit is that many bugs may become significantly more difficult --- or even impossible --- to exploit. The costs are complexity and run-time overhead.

A key question when evaluating the benefit is, how difficult will it be for CFI-aware attackers to craft exploits that bypass CFI? That has two sub-questions: how often is it possible to weaponize a memory-safety bug that's exploited via control-flow hijacking today, with an exploit that is permitted by the CFI scheme? And, crucially, will it be possible to package such exploitation techniques so that weaponizing common C/C++ bugs into CFI-proof exploits becomes cheap? A very interesting paper at Oakland this year, and related work by other authors, suggests that the answer to the first sub-question is "very often" and the answer to the second sub-question is "don't bet against it".

Coincidentally, Intel has just unveiled a proposal to add some CFI features to their CPUs. It's a combination of shadow stacks with dynamic checking that the targets of indirect jumps/calls are explicitly marked as valid indirect destinations. Unlike some more precise CFI schemes, you only get one-bit target identification; a given program point is a valid destination for all indirect transfers or none.

So will CFI be worth deploying? It's hard to say. If you're offered a turnkey solution that "just works" with negligible cost, there may be no reason not to use it. However, complexity has a cost, and we've seen that sometimes complex security measures can even backfire. The tail end of Intel's document is rather terrifying; it tries to enumerate the interactions of their CFI feature with all the various execution modes that Intel currently supports, and leaves me with the impression that they're generally heading over the complexity event horizon.

Personally I'm skeptical that CFI will retain value over the long term. The Oakland DOP paper is compelling, and I think we generally have lots of evidence that once an attacker has a memory safety bug to work on, betting against the attacker's ingenuity is a loser's game. In an arms race between dynamic CFI (and its logical extension to dynamic data-flow integrity) and attackers, attackers will probably win, not least because every time you raise the CFI bar you'll pay with increased complexity and overhead. I suggest that if you do deploy CFI, you should do so in a way that lets you pull it out if the cost-benefit equation changes. Baking it into the CPU does not have that property...

One solution, of course, is to reduce the usage of C/C++ by writing code in a language whose static invariants are strong enough to give you CFI, and much stronger forms of integrity, "for free". Thanks to Rust, the old objections that memory-safe languages were slow, tied to run-time support and cost you control over resources don't apply anymore. Let's do it.


Mozilla Addons Blog: WebExtensions for Firefox 49

Thu, 09/06/2016 - 22:46

Firefox 49 landed in Developer Edition this week, so we have another update on WebExtensions for you!

Since the release of Firefox 48, we feel WebExtensions are in a stable state. We recommend developers start to use the WebExtensions API for their add-on development. Since the last release, more than 35 bugs were closed on WebExtensions alone.

If you have authored an add-on in the past and are curious about how it’s affected by the upcoming changes, please use this lookup tool. There is also a wiki page and MDN articles filled with resources to support you through the changes.

APIs Implemented

The history API allows you to interact with the browser history. In Firefox 49 the APIs to add, query, and delete browser history have been merged. This is ideal for developers who want to build add-ons to manage user privacy.

In Firefox 48, Android support was merged, and in Firefox 49 support for some of the native elements has started to land. Firefox 49 on Android supports some of the pageAction APIs. This work lays the foundation for new APIs such as tabs, windows, and browserAction on Android.

The WebNavigation API now supports the manual_subframe transitionType and keeps track of user interaction with the url bar appropriately. The downloads API now lets you download a blob created in a background script.

For a full list of bugs, please check out Bugzilla.

In progress

Things are a little bit quieter recently because there are things in progress that have absorbed a lot of developer time. They won’t land in the tree for Firefox 49, but we’ll keep you updated on their progress in later releases.


storage.sync

This API allows you to store some data for an add-on in Firefox and have it synced to another Firefox browser. It is most commonly used for storing add-on preferences, as it is not designed to be a robust data storage and syncing tool. For sync, we will use Firefox Accounts to authenticate users and enforce quota limits.

Whilst the API contains the word “sync” and it uses Firefox Accounts, it should be noted that it is different from Firefox Sync. In Firefox Sync there is an attempt to merge data and resolve conflicts. There is no similar logic in this API.

You can track the progress of storage.sync in Bugzilla.


runtime.connectNative

This API allows you to communicate with other processes on the host’s operating system. It’s a commonly used API for password managers and security software, which need to communicate with external processes.

Communicating with a native process takes two steps. First, your installer needs to install a JSON manifest file at an appropriate location on the target computer; that manifest provides the link between Firefox and the process. Second, the user installs the add-on. The add-on can then call connectNative, sendNativeMessage, and the other APIs:

chrome.runtime.sendNativeMessage('your-application',
  { text: "Hello" },
  function(response) {
    console.log("Received " + response);
  });
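
As an illustration, the JSON manifest from the first step might look something like this. Note this is a sketch only: the field names and values below are placeholders, and the exact schema and file locations were still being finalized at the time of writing.

```json
{
  "name": "your-application",
  "description": "Example native messaging host",
  "path": "/usr/local/bin/your-application",
  "type": "stdio",
  "allowed_extensions": ["your-addon@example.org"]
}
```

The name field corresponds to the string the add-on passes to sendNativeMessage.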

Firefox will start the process if it hasn’t started already, and pipe commands through to the process. Follow along with the progress of runtime.connectNative on Bugzilla.

WebExtensions transition

With these ongoing improvements, we realise there are lots of add-ons that might want to start moving towards WebExtensions and utilise the new APIs.

To allow this, you will soon be able to embed a WebExtension inside another add-on. The outer add-on and the embedded WebExtension can then exchange messages.

The following example works with SDK add-ons, but this should work with any bootstrapped add-on. Inside your SDK add-on you’d have a directory called webextensions containing a full-fledged WebExtension. In the background page of that WebExtension will be the following:

chrome.runtime.sendMessage("test message", (reply) => {
  console.log("embedded webext got a reply", reply);
});

Then you’d be able to reply in the SDK add-on:

var { api } = require('sdk/webextension');
api.onMessage.addListener((msg, sender, sendReply) => {
  console.log("SDK add-on got a message", { msg, sender });
  sendReply("reply");
});

This demonstrates sending a message from the WebExtension to the SDK add-on. Persistent bi-directional ports will also be available.

Using this technique, we hope add-on developers can leverage WebExtensions APIs as they start migrating their add-on over to WebExtensions. Follow along with the progress of communication on Bugzilla.

There are also lots of other ways to get involved with WebExtensions, so please check them out!


Mitchell Baker: Joi Ito changes role and starts new “Practicing Open” project with Mozilla Foundation

Thu, 09/06/2016 - 22:37

Since the Mozilla Foundation was founded in 2003, we’ve grown remarkably – from impact to the size of our staff and global community. We’re indebted to the people whose passion and creativity made this possible, people like Joi Ito.

Joi is a long-time friend of Mozilla. He’s a technologist, a thinker, an activist and an entrepreneur. He’s been a Mozilla Foundation board member for many years. He’s also Director of the MIT Media Lab and was very recently appointed Professor of the Practice by MIT.

As Joi has become more deeply involved with the Media Lab over the past few years, we’ve come to understand that his most important future contributions are, rather than as a Board member, to spur innovative activities that advance the goals of both the Mozilla Foundation and the Media Lab.

The first such project and collaboration between Mozilla and the Media Lab, is an “Open Leadership Camp” for senior executives in the nonprofit and public sectors.

The seeds of this idea have been germinating for a while. Joi and I have had an ongoing discussion about how people build open, participatory, web-like organizations for a year or so now. The NetGain consortium led by Ford, Mozilla and a number of foundations, has shown the pressing need for deeper Internet knowledge in the nonprofit and public sectors. Also, Mozilla’s nascent Leadership Network has been working on how to provide innovative ways for leaders in the more publicly-minded tech space to learn new skills. All these things felt like the perfect storm for a collaborative project on open leadership and to work with other groups already active in this area.

The project we have in mind is simple:

  1. Bring together a set of experienced leaders from ‘open organizations’ and major non-profit and public sector organizations.
  2. Get them working on practical projects that involve weaving open techniques into their organizations.
  3. Document and share the learning as we go.

Topics we’ll cover include everything from design thinking (think: sticky notes) to working in the open (think: github) to the future of open technologies (think: blockchain). The initial camp will run at MIT in early 2017, with Joi and myself as the hosts. Our hope is that a curriculum and method can grow from there to seed similar camps within public-interest leadership programs in many other places.

I’m intensely grateful for Joi’s impact. We’ve been lucky to have him involved with Mozilla and the open Internet. We’re lucky to have him at the Media Lab and I’m looking forward to our upcoming work together.


Chris AtLee: PyCon 2016 report

Thu, 09/06/2016 - 21:39

I had the opportunity to spend last week in Portland for PyCon 2016. I'd like to share some of my thoughts and some pointers to good talks I was able to attend. The full schedule can be found here and all the videos are here.


Brandon Rhodes' Welcome to PyCon was one of the best introductions to a conference I've ever seen. Unfortunately I can't find a link to a recording... What I liked about it was that he made everyone feel very welcome to PyCon and to Portland. He explained some of the simple (but important!) practical details like where to find the conference rooms, how to take transit, etc. He noted that for the first time, they have live transcriptions of the talks being done and put up on screens beside the speaker slides for the hearing impaired.

He also emphasized the importance of keeping questions short during Q&A after the regular sessions. "Please form your question in the form of a question." I've been to way too many Q&A sessions where the person asking the question took the opportunity to go off on a long, unrelated tangent. For the most part, this advice was followed at PyCon: I didn't see very many long winded questions or statements during Q&A sessions.

Machete-mode Debugging

(abstract; video)

Ned Batchelder gave this great talk about using python's language features to debug problematic code. He ran through several examples of tricky problems that could come up, and how to use things like monkey patching and the debug trace hook to find out where the problem is. One piece of advice I liked was when he said that it doesn't matter how ugly the code is, since it's only going to last 10 minutes. The point is the get the information you need out of the system the easiest way possible, and then you can undo your changes.

Refactoring Python

(abstract; video)

I found this session pretty interesting. We certainly have lots of code that needs refactoring!

Security with object-capabilities

(abstract; video; slides)

I found this interesting, but a little too theoretical. Object capabilities are completely orthogonal to access control lists as a way to model security and permissions. It was hard for me to see how we could apply this to the systems we're building.

Awaken your home

(abstract; video)

A really cool intro to the Home Assistant project, which integrates all kinds of IoT type things in your home. E.g. Nest, Sonos, IFTTT, OpenWrt, light bulbs, switches, automatic sprinkler systems. I'm definitely going to give this a try once I free up my raspberry pi.

Finding closure with closures

(abstract; video)

A very entertaining session about closures in Python. Does Python even have closures? (yes!)
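
For anyone wondering, here's a minimal example of my own (not from the talk) of the kind of closure Python supports:

```python
def make_counter():
    count = 0
    def inner():
        nonlocal count       # inner() closes over count from make_counter's scope
        count += 1
        return count
    return inner

counter = make_counter()
print(counter(), counter(), counter())  # 1 2 3
```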

Life cycle of a Python class

(abstract; video)

Lots of good information about how classes work in Python, including some details about meta-classes. I think I understand meta-classes better after having attended this session. I still don't get descriptors though!

(I hope Mike learns soon that __new__ is pronounced "dunder new" and not "under under new"!)
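
As a tiny illustration of that life cycle (my own sketch, not from the session): a metaclass gets to see and modify the class namespace before the class object even exists.

```python
class Meta(type):
    def __new__(mcls, name, bases, namespace):
        # Runs while the class is being created ("dunder new"!).
        namespace["created_by"] = mcls.__name__
        return super().__new__(mcls, name, bases, namespace)

class Widget(metaclass=Meta):
    pass

print(Widget.created_by)  # Meta
```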

Deep learning

(abstract; video)

Very good presentation about getting started with deep learning. There are lots of great libraries and pre-trained neural networks out there to get started with!

Building protocol libraries the right way

(abstract; video)

I really enjoyed this talk. Cory Benfield describes the importance of keeping a clean separation between your protocol parsing code, and your IO. It not only makes things more testable, but makes code more reusable. Nearly every HTTP library in the Python ecosystem needs to re-implement its own HTTP parsing code, since all the existing code is tightly coupled to the network IO calls.

Tuesday

Guido's Keynote


Some interesting notes in here about the history of Python, and a look at what's coming in 3.6.


(abstract; video)

An intro to the click module for creating beautiful command line interfaces.

I like that click helps you to build testable CLIs.

HTTP/2 and asynchronous APIs

(abstract; video)

A good introduction to what HTTP/2 can do, and why it's such an improvement over HTTP/1.x.

Remote calls != local calls

(abstract; video)

Really good talk about failing gracefully. He covered some familiar topics like adding timeouts and retries to things that can fail, but also introduced to me the concept of circuit breakers. The idea with a circuit breaker is to prevent talking to services you know are down. For example, if you have failed to get a response from service X the past 5 times due to timeouts or errors, then open the circuit breaker for a set amount of time. Future calls to service X from your application will be intercepted, and will fail early. This can avoid hammering a service while it's in an error state, and works well in combination with timeouts and retries of course.
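
A minimal sketch of the circuit breaker idea (my own toy version; the class name and thresholds are made up, and real implementations track per-service state):

```python
import time

class CircuitBreaker:
    """Toy circuit breaker: after max_failures consecutive failures,
    calls fail fast for reset_timeout seconds, then one trial call
    is allowed through ("half-open")."""

    def __init__(self, max_failures=5, reset_timeout=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.clock = clock
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Usage would look like wrapping each remote call, e.g. breaker.call(fetch_from_service_x, request), alongside the usual timeouts and retries.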

I was thinking quite a bit about Ben's redo module during this talk. It's a great module for handling retries!

Diving into the wreck

(abstract; video)

A look into diagnosing performance problems in applications. Some neat tools and techniques introduced here, but I felt he blamed the DB a little too much :)

Wednesday

Magic Wormhole

(abstract; video; slides)

I didn't end up going to this talk, but I did have a chance to chat with Brian before. magic-wormhole is a tool to safely transfer files from one computer to another. Think scp, but without needing ssh keys set up already, or direct network flows. Very neat tool!

Computational Physics

(abstract; video)

How to do planetary orbit simulations in Python. Pretty interesting talk, he introduced me to Feynman, and some of the important characteristics of the simulation methods introduced.

Small batch artisinal bots

(abstract; video)

Hilarious talk about building bots with Python. Definitely worth watching, although unfortunately it's only a partial recording.


(abstract; video)

The infamous GIL is gone! And your Python programs only run 25x slower!

Larry describes why the GIL was introduced, what it does, and what's involved with removing it. He's actually got a fork of Python with the GIL removed, but performance suffers quite a bit when run without the GIL.

Lars' Keynote


If you watch only one video from PyCon, watch this. It's just incredible.


Support.Mozilla.Org: What’s Up with SUMO – 9th June

do, 09/06/2016 - 20:02

Hello, SUMO Nation!

I wonder how many football fans we have among you… The Euros are coming! Some of us will definitely be watching (and getting emotional about) the games played in the next few weeks. If you’re a football fan, let’s talk about it in our forums!

Welcome, new contributors! If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way, you can nominate them for the Buddy of the Month!

Most recent SUMO Community meeting

The next SUMO Community meeting
  • …is most likely happening after the London Work Week (which is happening next week)
  • Reminder: if you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.
Community

Social

Support Forum

Knowledge Base & L10n
  • for Android
    • Version 47 launched – woohoo!
    • You can now show or hide “web fonts” in advanced settings, to save your data and increase page loading speeds.
    • Final reminder: Android 2.3 is no longer a supported platform after the recent release.
    • Version 48 articles will be coming after June 18, courtesy of Joni!

That’s it for this week – next week the blog post may not be here… but if you keep an eye open for our Twitter updates, you may see a lot of smiling faces.


Mark Surman: Making the open internet a mainstream issue

do, 09/06/2016 - 19:35

The Internet as a global public resource is at risk. How do we grow the movement to protect it? Thoughts from PDF

Today I’m in New York City at the 13th-annual Personal Democracy Forum, where the theme is “The Tech We Need.” A lot of bright minds are here tackling big issues, like civic tech, data privacy, Internet policy and the sharing economy. PDF is one of the world’s best spaces for exploring the intersection of the Internet and society — and we need events like this now more than ever.

This afternoon I’ll be speaking about the open Internet movement: its genesis, its ebb and why it needs a renaissance. I’ll discuss how the open Internet is much like the environment: a resource that’s delicate and finite. And a resource that, without a strong movement, is spoiled by bad laws and consolidation of power by a few companies.

At its core, the open Internet movement is about more than just technology. It’s about free expression and democracy. That’s why members of the movement are so diverse: Activists and academics. Journalists and hackers.

photo via Flickr / Stacie Isabella Turk / Ribbonhead

Today, this movement is at an inflection point. The open Internet is increasingly at risk. Openness and freedom online are being eroded by governments creating bad or uninformed policy, and by tech companies that are creating monopolies and walled gardens. This is all compounded by a second problem: Many people still don’t perceive the health of the Internet as a mainstream issue.

In order to really demonstrate the importance of the open Internet movement, I like to use an analogue: The environmental movement. The two have a lot in common. Environmentalists are all about preserving the health of the planet. Forests, not clearcutting. Habitats, not smokestacks. Open Internet activists are all about preserving the health of the Internet. Open source code, not proprietary software. Hyperlinks, not walled gardens.

The open Internet is also like the environmental movement in that it has rhythm. Public support ebbs and flows — there are crescendos and diminuendos. Look at the cadence of the environmental movement: it became mainstream a number of times in a number of places. For example, an early crescendo in the US came in the late 19th century. On the heels of the Industrial Revolution, there’s resistance. Think of Thoreau, of “Walden.” Soon after, Theodore Roosevelt and John Muir emerge as champions of the environment, creating the Sierra Club and the first national parks. Both the national parks and a conservation movement filled with hikers who use them become mainstream — it’s a major victory.

But movements ebb. In the mid-20th century, environmental destruction continues. We build nuclear and chemical plants. We pollute rivers and airspace. We coat our food and children with DDT. It’s ugly — and we do irreparable damage while most people just go about their lives. In many ways, this is where we’re at with the Internet today. There is reason to worry that we’re doing damage, and that we might even lose what we built without knowing it.

In reaction, the US environmental movement experiences a second mainstream moment. It starts in the 60s: Rachel Carson releases “Silent Spring,” exposing the dangers of DDT and other pesticides. This is a big deal: Citizens start becoming suspicious of big companies and their impact on the environment. Governments begin appointing environmental ministers. Organizations like Greenpeace emerge and flourish.

For a second time, the environment becomes an issue worthy of policy and public debate. Resting on the foundations built by 1960s environmentalism, things like recycling are a civic duty today. And green business practices are the expectation, not the exception.

The open Internet movement has had a similar tempo. Its first crescendo — its “Walden” moment — was in the 90s. Users carved out and shaped their own spaces online — digital homesteading. No two web pages were the same, and open was the standard. A rough analogue to Thoreau’s “Walden” is John Perry Barlow’s manifesto “A Declaration of the Independence of Cyberspace.” Barlow boldly wrote that governments and centralized power have no place in the digital world.

It’s during this time that the open Internet faces its first major threat: centralization at the hands of Internet Explorer. Suddenly, it seems the whole Web may fall into the hands of Microsoft technology. But there is a push back and a crescendo — hackers and users rally to create open alternatives like Firefox. Quickly, non-proprietary web standards re-emerge. Interoperability and accessibility become driving principles behind building the Web. The Browser Wars are won: Microsoft’s monopoly over web technology is thwarted.

But then comes inertia. We could be in the open Internet movement’s DDT moment. Increasingly, the Internet is becoming a place of centralization. The Internet is increasingly shaped by a tiny handful of companies, not individuals. Users are transforming from creators into consumers. In the global south, millions of users equate the Internet with Facebook. These developments crystallize as a handful of threats: Centralization. Loss of privacy. Digital exclusion.


It’s a bit scary: Like the environment, the open Internet is fragile. There may be a point of no return. What we want to do — what we need to do — is make the health of the open Internet a mainstream issue. We need to make the health of the Internet an indelible issue, something that spurs on better policy and better products. And we need a movement to make this happen.

This is on us: everyone who uses the internet needs to take notice. Not just the technologists — also the activists, academics, journalists and everyday Internet users who treasure freedom of expression and inclusivity online.

There’s good news: this is already happening. Starting with SOPA and ACTA, a citizen movement for an open Internet started accelerating. We got organized, we rallied citizens, and we took stands on issues that mattered. Think of the recent headlines. When Edward Snowden revealed the extent of mass surveillance, people listened. Privacy and freedom from surveillance online were quickly enshrined as rights worth fighting for. The issue gained momentum among policymakers — and in 2015, the USA Freedom Act was passed.

Then there is 2015’s net neutrality victory: Over 3 million comments flooded the FCC protesting fast lanes and slow lanes. Most recently, Apple and the FBI clashed fiercely over encryption. Apple refused to concede, standing up for users’ privacy and security. Tim Cook was applauded, and encryption became a word spoken at kitchen tables and coffee shops.

Of course, this is just the beginning. These victories are heartening, for sure. But even as this new wave of internet activism builds, the threats are becoming worse, more widespread. We need to fuel the movement with concrete action — if we don’t, we may lose the open Web for good. Today, upholding the health of the planet is an urgent and enduring enterprise. So too should upholding the health of the Internet.

A small PS, I also gave a talk on this topic at re:publica in Berlin last month. If you want to watch that talk, the video is on the re:publica site.

The post Making the open internet a mainstream issue appeared first on Mark Surman.


Air Mozilla: Mapathon Missing Maps #4

do, 09/06/2016 - 19:00

Mapathon Missing Maps #4 Collaborative mapping workshops on OpenStreetMap, covering regions of the world that are barely or not yet mapped. Organized by Missing Maps and MSF.


Air Mozilla: Web QA team meeting

do, 09/06/2016 - 18:00

Web QA team meeting They say a Mozilla Web QA team member is the most fearless creature in the world. They say their jaws are powerful enough to crush...


Air Mozilla: Reps weekly, 09 Jun 2016

do, 09/06/2016 - 18:00

Reps weekly This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


The Mozilla Blog: Help Make Open Source Secure

do, 09/06/2016 - 17:47

Major security bugs in core pieces of open source software – such as Heartbleed and Shellshock – have elevated highly technical security vulnerabilities into national news headlines. Despite these sobering incidents, adequate support for securing open source software remains an unsolved problem, as a panel of 32 security professionals confirmed in 2015. We want to change that, starting today with the creation of the Secure Open Source (“SOS”) Fund aimed at precisely this need.

Open source software is used by millions of businesses and thousands of educational and government institutions for critical applications and services. From Google and Microsoft to the United Nations, open source code is now tightly woven into the fabric of the software that powers the world. Indeed, much of the Internet – including the network infrastructure that supports it – runs using open source technologies. As the Internet moves from connecting browsers to connecting devices (cars and medical equipment), software security becomes a life and death consideration.

The SOS Fund will provide security auditing, remediation, and verification for key open source software projects. The Fund is part of the Mozilla Open Source Support program (MOSS) and has been allocated $500,000 in initial funding, which will cover audits of some widely-used open source libraries and programs. But we hope this is only the beginning. We want to see the numerous companies and governments that use open source join us and provide additional financial support. We challenge these beneficiaries of open source to pay it forward and help secure the Internet.

Security is a process. To have substantial and lasting benefit, we need to invest in education, best practices, and a host of other areas. Yet we hope that this fund will provide needed short-term benefits and industry momentum to help strengthen open source projects.

Mozilla is committed to tackling the need for more security in the open source ecosystem through three steps:

  • Mozilla will contract with and pay professional security firms to audit other projects’ code;
  • Mozilla will work with the project maintainer(s) to support and implement fixes, and to manage disclosure; and
  • Mozilla will pay for the remediation work to be verified, to ensure any identified bugs have been fixed.

We have already tested this process with audits of three pieces of open source software. In those audits we uncovered and addressed a total of 43 bugs, including one critical vulnerability and two issues with a widely-used image file format. These initial results confirm our investment hypothesis, and we’re excited to learn more as we open for applications.

We all rely on open source software. We invite other companies and funders to join us in securing the open source ecosystem. If you’re a developer, apply for support! And if you’re a funder, join us. Together, we can have a greater impact for the security of open source systems and the Internet as a whole.

More information:





Daniel Stenberg: curl on windows versions

do, 09/06/2016 - 14:49

I had to ask. Just to get a notion of which Windows versions our users are actually using, so that we could get an indication which versions we still should make an effort to keep working on. As people download and run libcurl on their own, we just have no other ways to figure this out.

As always when asking our audience a question, we can’t really know which part of our user base responded, and it is probably safer to assume that it is not a representative distribution of our actual user base. But it is as good as it gets. A hint.

I posted about this poll on the libcurl mailing list and over twitter. I had it open for about 48 hours. We received 86 responses. Click the image below for the full res version:

So, Windows 10, 8 and 7 are very well used, and even Vista and XP clocked in fairly high at 14% and 23%. Clearly those are Windows versions we should strive to keep supported.

For Windows versions older than XP I was sort of hoping we’d get a zero, but as you can see in the graph we have users claiming to use curl on as old versions as Windows NT 4. I even checked, and it wasn’t the same two users that checked all those three oldest versions.

The “Other” marks were for Windows 2008 and 2012, plus bonus points for the user who added “Other: debian 7”. It is interesting that I specifically asked for users running curl on Windows to answer this survey, and yet 26% responded that they don’t use Windows at all…


Matěj Cepl: vim-sticky-notes — vim support for

do, 09/06/2016 - 11:14

I have started to work on updating pastebin.vim so that it works with fpaste. The result is available in my GitLab project vim-sticky-notes. Any feedback, issue reports, and (of course) merge requests are very welcome.
