
Wil Clouser: Test Pilot just got a lot easier to work on

Mozilla planet - ti, 11/10/2016 - 09:00

We originally built Test Pilot on top of Django and some JS libraries to fulfill our product requirements as well as keep us flexible enough to evolve quickly since we were a brand new site.

As the site has grown, we've dropped a few requirements, and realized that we were using APIs from our engagement team to collect newsletter sign ups, APIs from our measurement team for our metrics, and everything else on the site was essentially HTML and JS. We used the Django scaffolding for updating the experiments, but there was no reason we needed to.

I'm happy to highlight that as of today Test Pilot is served 100% statically. Moving to flat files means:

  • Easier to deploy. All we do is copy files to an S3 bucket. No more SQL migrations or strange half-pushed states.

  • More secure. With just flat files we have way less surface area to attack.

  • Easier to participate in. You'll no longer need to set up Docker or a database. Just check out the files, run npm install and you're done. (disclaimer: we just pushed this today, so we actually still need to update the documentation)

  • Excellent change control. Instead of using an admin panel on the site, we now use GitHub to manage our static content. This means all changes are tracked for free, we already have a process in place for reviewing pull requests, and it's easy to roll back or manipulate the data because it's all in the repository already.

If you want to get involved with Test Pilot, come join us in #testpilot (or webchat)!


This Week In Rust: This Week in Rust 151

Mozilla planet - ti, 11/10/2016 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Blog Posts

News & Project Updates

Other Weeklies from Rust Community

New Crates
  • SvgBobRus. Convert your ASCII diagram scribbles into happy little SVG.
  • Curryrs. A library providing easy-to-use bindings between Rust and Haskell code.
  • NetBricks. A new network function framework based on Rust.
  • F3. A crate to play with the STM32F3DISCOVERY development board, which has a Cortex-M4 microcontroller.
  • derivative: A set of attribute-enhanced #[derive] for built-in traits using Macro 1.1.
Crate of the Week

No crate was selected for CotW.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick up and get started on!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

135 pull requests were merged in the last week.

New Contributors
  • Anthony Ramine
  • Christopher
  • Eric Roshan-Eisner
  • Florian Diebold
  • KillTheMule
  • Mathieu Borderé
  • Nick Stevens
  • p512
  • Razican
  • Stephen M. Coakley
  • 石博文
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

FCP issues:

Other issues getting a lot of discussion:

No PRs this week.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week's friends of the forest are:

dtolnay for outstanding work on Serde, taking it from a great library into an outstanding library, improving documentation significantly, being on top of the macros 1.1 transition, and even developing a new high level library for making custom derive under macros 1.1 easier to work with.

sgrif for outstanding work on Diesel, an ORM that will change the game for ORMs, and for being incredibly helpful and friendly with early adopters.

njn (on IRC) or nnethercote1 on GitHub for outstanding work on compiler perf. They've removed an allocation during HashSet creation, made TypedArena lazily allocate the first chunk, and more. They also helped add a benchmarking script to compare two different compiler versions against the benchmarks, which helps future work in this area.

I nominate TimNN for Friend of the Forest for his repeated and invaluable work minimizing and bisecting (example). Keep up the good work!

I'd like to nominate BurntSushi for Friend of the Forest. I think the multiple crates he contributed are both important and high quality. In addition to his code contributions to the ecosystem, he also wrote good and informative posts about some of them.

I'd like to nominate GuillaumeGomez for the "Rust documentation superhero" title as well.

Submit your Friends-of-the-Forest nominations for next week!

Quote of the Week

< Celti> I just had a recruiter contact me for a Rust job requiring 3+ years of professional experience with it.

— From #rust.

Thanks to bluss for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Shing Lyu: Mutation Testing in JavaScript Using Stryker

Mozilla planet - ti, 11/10/2016 - 03:00

Earlier this year, I wrote a blog post introducing Mutation Testing in JavaScript using the Grunt Mutation Testing framework. But as their NPM README said,

We will be working on (gradually) migrating the majority of the code base to the Stryker.

So I’ll update my post to use the latest Stryker framework. What follows is the updated post, with all the code examples migrated to Stryker:

Last November (2015) I attended the EuroStar Software Testing Conference and was introduced to an interesting idea called mutation testing. Ask yourself: “How do I ensure my (automated) unit test suite is good enough?” Did you miss any important tests? Are your tests always passing, so they never actually catch anything? Is there anything untestable in your code such that your test suite can never catch it?

Introduction to Mutation Testing

Mutation testing tries to “test your tests” by deliberately injecting faults (called “mutants”) into the program under test. If we re-run the tests on the crippled program, our test suite should catch it (i.e., some test should fail). If you missed some test, the error might slip through and the tests will pass. Borrowing terms from genetics: if the tests fail, we say the mutant is “killed” by the test suite; conversely, if the tests pass, we say the mutant survived.
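
To make the kill/survive terminology concrete, here is a tiny self-contained sketch (my own illustration; the function names are made up and not part of the original post):

// Original function under test.
function isAdult(age) {
  return age >= 18;
}

// A typical mutant: the mutation tool flips >= into >.
function isAdultMutant(age) {
  return age > 18;
}

// This test passes against the original but would fail against the
// mutant, so the mutant is "killed".
console.assert(isAdult(18) === true, "boundary case kills the mutant");

// A suite that only checks values far from the boundary passes
// against both versions, letting the mutant "survive".
console.assert(isAdultMutant(30) === isAdult(30));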

The goal of mutation testing is to kill all mutants by enhancing the test suite. Although a 100% kill rate is usually impossible for even small programs, any progress on increasing that number can still benefit your tests a lot.

The concept of mutation testing has been around for quite a while, but it never got very popular, for the following reasons: first, it is slow. The number of possible mutations is just too large; re-compiling (e.g., C++) and re-running the tests takes too long. Various methods have been proposed to lower the number of tests we need to run without sacrificing the chance of finding problems. The second reason is the “equivalent mutation” problem, which we’ll discuss in more detail in the examples.

Mutation Testing from JavaScript

There are many existing mutation testing frameworks, but most of them are for languages like Java, C++ or C#. Since my job is mainly about JavaScript (both in the browser and Node.js), I wanted to run mutation testing in JavaScript.

I have found a few mutation testing frameworks for JavaScript, but they are either non-open-source or very academic. The most mature one I can find so far is the Stryker framework, which is released under Apache 2.0 license.

This framework supports mutants like changing the math operators, changing logic operators or even removing conditionals and block statements. You can find a full list of mutations with examples here.

Setting up the environment

You can follow the quickstart guide to install everything you need. The guide has a nice interactive menu so you can choose your favorite build system, test runner, test framework and reporting format. In this blog post I’ll demonstrate with my favorite combination: Vanilla NPM + Mocha + Mocha + clear-text.

You’ll need node and npm installed. (I recommend using nvm.)

There are a few Node packages you need to install:

sudo npm install -g mocha
npm install --save-dev stryker stryker-api stryker-mocha-runner

Here is the list of packages that I’ve installed:

"devDependencies": { "mocha": "^2.5.3", "stryker": "^0.4.3", "stryker-api": "^0.2.0", "stryker-mocha-runner": "^0.1.0" } A simple test suite

I created a simple program in src/calculator.js, which has two functions:

// ===== calculator.js =====
function substractPositive(num1, num2){
  if (num1 > 0){
    return num1 - num2;
  }
  else {
    return 0;
  }
}

function add(num1, num2){
  if (num1 == num2){
    return num1 + num2;
  }
  else if (num1 > num2){
    return num1 + num2;
  }
  else {
    return num1 + num2;
  }
}

module.exports.substractPositive = substractPositive;
module.exports.add = add;

The first function is called substractPositive; it subtracts num2 from num1 if num1 is a positive number. If num1 is not positive, it returns 0 instead. It doesn’t make much sense, but it’s just for demonstration purposes.

The second is a simple add function that adds two numbers. It has an unnecessary if...else... statement, which is also there to demonstrate the power of mutation testing.

The two functions are tested using test/test_calculator.js:

var assert = require("assert");
var cal = require("../src/calculator.js");

describe("Calculator", function(){
  it("substractPositive", function(){
    assert.equal("2", cal.substractPositive(1, -1));
  });
  it("add", function(){
    assert.equal("2", cal.add(1, 1));
  });
});

This test file runs under mocha. The first test verifies that substractPositive(1, -1) returns 2; the second verifies that add(1, 1) returns 2. If you run mocha on the command line you’ll see the output:

% mocha

  Calculator
    ✓ substractPositive
    ✓ add

  2 passing (6ms)

So this test suite looks OK: it exercises both functions and verifies their outputs. But is it good enough? Let’s verify this with some mutation testing.

Running the mutation testing

To run the mutation testing, we first need to create a config file for the Stryker mutator. Create a file called stryker.conf.js in the project’s root directory and paste the following into it:

// stryker.conf.js
module.exports = function(config){
  config.set({
    files: [
      // Add your files here, this is just an example:
      { pattern: 'src/**/*.js', mutated: true, included: false},
      'test/**/*.js'
    ],
    testRunner: 'mocha',
    testFramework: 'mocha',
    testSelector: null,
    reporter: ['clear-text', 'progress']
  });
}

You can see we chose mocha as the testRunner and testFramework; we tell the Stryker framework that the test files are in test/**/*.js and the source files are in src/**/*.js.

Then we need to tell npm how to run Stryker. Simply add the following to your package.json:

"scripts": { "stryker": "stryker -c stryker.conf.js" },

This adds an npm run stryker command, which executes stryker -c stryker.conf.js.

Finding Missing Tests

Now if you run npm run stryker, you’ll see the following test result:

% npm run stryker

> @ stryker /home/shinglyu/workspace/mutation-testing-demo/stryker
> stryker -c stryker.conf.js

[2016-10-07 15:14:51.997] [INFO] InputFileResolver - Found 1 file(s) to be mutated.
[2016-10-07 15:14:52.129] [INFO] Stryker - Initial test run succeeded. Ran 2 tests.
[2016-10-07 15:14:52.151] [INFO] Stryker - 22 Mutant(s) generated
[2016-10-07 15:14:52.153] [INFO] TestRunnerOrchestrator - Creating 8 test runners (based on cpu count)
SSSSSSSSSSSSS..S......

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 17:7
Mutator: BlockStatement
-   else {
-     return num1 + num2;
-   }
+   else {
+   }

Tests ran:
    Calculator substractPositive
    Calculator add

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 18:11
Mutator: Math
-     return num1 + num2;
+     return num1 - num2;

Tests ran:
    Calculator substractPositive
    Calculator add

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 14:11
Mutator: RemoveConditionals
-   else if (num1 > num2){
+   else if (false){

Tests ran:
    Calculator substractPositive
    Calculator add

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 14:23
Mutator: BlockStatement
-   else if (num1 > num2){
-     return num1 + num2;
-   }
+   else if (num1 > num2){
+   }

Tests ran:
    Calculator substractPositive
    Calculator add

//(omitted for brevity)

22 mutants tested.
0 mutants untested.
0 mutants timed out.
8 mutants killed.
Mutation score based on covered code: 36.36%
Mutation score based on all code: 36.36%
[2016-10-07 15:14:52.468] [INFO] fileUtils - Cleaning stryker temp folder /home/shinglyu/workspace/mutation-testing-demo/stryker/.stryker-tmp

As you can see, it tested 22 mutants, but 14 of them (22 - 8 = 14) survived!

Let’s look at one mutant that survived:

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 2:6
Mutator: ConditionalBoundary
-   if (num1 > 0){
+   if (num1 >= 0){

It tells us that if we replace the > with >= on line 2 of our calculator.js file, the test suite still passes.

If you look at the line, the problem is pretty clear:

// ===== calculator.js =====
function substractPositive(num1, num2){
  if (num1 > 0){ // <== This line
    return num1 - num2;
  }
  else {
    return 0;
  }
}

We didn’t test the boundary values: if num1 == 0, the program should go to the else branch and return 0. After changing the > to >=, the program goes into the return num1 - num2 branch instead and returns 0 - num2!

This is one of the problems mutation testing can solve: it tells you which test cases you missed. The solution is very simple; we can add a test like this:

it('substractPositive', function(){
  assert.equal('0', cal.substractPositive(0, -1));
});

If you run mutation testing again, the problem with the substractPositive function should go away.

Equivalent Mutation and Dead Code

Sometimes a mutation will not change the behavior of the program, so no matter what test you write, you can never make it fail. For example, a mutation may disable caching in your program: the program will run slower, but its behavior will be exactly the same, so you’ll have a mutant you can never kill. This kind of mutation is called an “equivalent mutation”.
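
As a concrete illustration (again my own sketch, not from the original post), here is that caching scenario in code; no assertion on the return value can distinguish the mutant from the original:

// A lookup with memoization. A mutant that turns the cache check
// into `if (false)` only affects speed, never the return value.
var cache = {};

function slowSquare(n) {
  if (cache[n] !== undefined) { // mutant: if (false)
    return cache[n];
  }
  cache[n] = n * n; // pretend this computation is expensive
  return cache[n];
}

// Both the original and the mutant satisfy every assertion we could
// possibly write about the output -- the mutant is equivalent.
console.assert(slowSquare(4) === 16);
console.assert(slowSquare(4) === 16);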

Equivalent mutations make you overestimate your mutation survival rate. They take time to debug, but may not reveal useful information about your test suite. However, some equivalent mutations do reveal issues in the program under test.

Let’s look at the mutation results again:

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 11:6
Mutator: ReverseConditional
-   if (num1 == num2){
+   if (num1 != num2){

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 14:11
Mutator: RemoveConditionals
-   else if (num1 > num2){
+   else if (false){

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 14:23
Mutator: BlockStatement
-   else if (num1 > num2){
-     return num1 + num2;
-   }
+   else if (num1 > num2){
+   }

Mutant survived!
/home/shinglyu/workspace/mutation-testing-demo/stryker/src/calculator.js: line 15:11
Mutator: Math
-     return num1 + num2;
+     return num1 - num2;

If you look at the code, you’ll find that all the branches of the if...else... statement return the same thing. So no matter how you mutate the if...else... conditions, or even remove a branch that was not reached, the function will always return the correct result.

10 function add(num1, num2){
11   if (num1 == num2){
12     return num1 + num2;
13   }
14   else if (num1 > num2){
15     return num1 + num2;
16   }
17   else {
18     return num1 + num2;
19   }
20 }

Because we found that the if...else... is useless, we can simplify the function to only three lines:

function add(num1, num2){
  return num1 + num2;
}

If you run the mutation test again, you can see all the mutations being killed.

This is one of the side benefits of equivalent mutations: although your test suite is fine, they tell you that your program contains dead or untestable code.

Next Steps

By now you should have a rough idea of how mutation testing works and how to apply it in your JavaScript project. If you are interested in mutation testing, there are more interesting questions to dive into. For example, how can you use code coverage data to reduce the number of tests you need to run? How can you avoid equivalent mutations? I hope you’ll find many interesting methods you can apply to your testing work. You can submit issues and suggestions to the Stryker GitHub repository, or even contribute code. The team is very responsive and friendly. Happy Mutation Testing!


Nick Cameron: Macros (and syntax extensions and compiler plugins) - where are we at?

Mozilla planet - mo, 10/10/2016 - 21:31

Procedural macros are one of the main reasons Rust programmers use nightly rather than stable Rust, and one of the few areas still causing breaking changes. Recently, part of the story around procedural macros has been coming together and here I'll explain what you can do today, and where we're going in the future.

TL;DR: as a procedural macro author, you're now able to write custom derive implementations which are on a fast track to stabilisation, and to experiment with the beginnings of our long-term plan for general purpose procedural macros. As a user of procedural macros, you'll soon be saying goodbye to bustage in procedural macro libraries caused by changes to compiler internals.

Macros today

Macros are an important part of Rust. They facilitate convenient and safe functionality used by all Rust programmers, such as println! and assert!; they reduce boilerplate, and make implementing traits trivial via derive. They also allow libraries to provide interesting and unusual abstractions.

However, macros are a rough corner - declarative macros (macro_rules macros) have their own system for modularisation, a fiddly syntax for declarations, and some odd rules around hygiene. Procedural macros (aka syntax extensions, compiler plugins) are unstable and painful to use. Despite that, they are used to implement some core parts of the ecosystem, including serialisation, and this causes a great deal of friction for Rust users who have to use nightly Rust or clunky build systems, and either way get hit with regular upstream breakage.

Future of procedural macros

We strongly want to improve this situation. Our current priority is procedural macros, and in particular the parts of the procedural macro system which force Rust users onto nightly or cause recurring upstream errors.

Our goal is an expressive, powerful, and well-designed system that is as stable as the rest of the language. Design work is ongoing in the RFC process. We have accepted RFCs on naming and custom derive/macros 1.1, there are open RFCs on the overall design of procedural macros, attributes, etc., and probably several more to come, in particular about the libraries available to macro authors.

The future, today!

One of the core innovations to the procedural macro system is to base our macros on tokens rather than AST nodes. The AST is a compiler-internal data structure; it must change whenever we add new syntax to the compiler, and often changes even when we don't due to refactoring, etc. That means that macros based on the AST break whenever the compiler changes, i.e., with every new version. In contrast, tokens are mostly stable, and even when they must change, that change can easily be abstracted over.

We have begun the implementation of token-based macros and today you can experiment with them in two ways: by writing custom derive implementations using macros 1.1, and within the existing syntax extension framework. At the moment these two features are quite different, but as part of the stabilisation process they should become more similar to use, and share more of their implementations.

Even better for many users, popular macro-based libraries such as Serde are moving to the new macro system, and crates using these libraries should see fewer errors due to changes to the compiler. Soon, users should be able to use these libraries from stable Rust.

Custom derive (macros 1.1)

The derive attribute lets programmers implement common traits with minimal boilerplate, typically generating an impl based on the annotated data type. This can be used with Eq, Copy, Debug, and many other traits. These implementations of derive are built in to the compiler.

It would be useful for library authors to provide their own custom derive implementations. This was previously facilitated by the custom_derive feature; however, that feature is unstable and its implementation is hacky. We now offer a new solution based on procedural macros (often called 'macros 1.1', RFC, tracking issue) which we hope will be on a fast path to stabilisation.

The macros 1.1 solution offers the core token-based framework for declaring and using procedural macros (including a new crate type), but only a bare-bones set of features. In particular, even the access to tokens is limited: the only stable API is one providing conversion to and from strings. Keeping the API surface small allows us to make a minimal commitment as we continue iterating on the design. Modularisation and hygiene are not covered; nevertheless, we believe that this API surface is sufficient for custom derive (as evidenced by the fact that Serde was easily ported over).

To write a macros 1.1 custom derive, you need only a function which takes and returns a proc_macro::TokenStream; you then annotate this function with an attribute containing the name of the derive. E.g., #[proc_macro_derive(Foo)] will enable #[derive(Foo)]. To convert between TokenStreams and strings, you use the to_string and parse functions.

There is a new kind of crate (alongside dylib, rlib, etc.) - a proc-macro crate. All macros 1.1 implementations must be in such a crate.

To use the macro, you import the macro crate in the usual way using extern crate, and annotate that statement with #[macro_use]. You can then use the derive name in derive attributes.


(These examples will need a pretty recent nightly compiler).

Macro crate (

#![feature(proc_macro, proc_macro_lib)]
#![crate_type = "proc-macro"]

extern crate proc_macro;

use proc_macro::TokenStream;

#[proc_macro_derive(B)]
pub fn derive(input: TokenStream) -> TokenStream {
    let input = input.to_string();
    format!("{}\n impl B for A {{ fn b(&self) {{}} }}", input).parse().unwrap()
}

Client crate (

#![feature(proc_macro)]

#[macro_use]
extern crate b;

trait B {
    fn b(&self);
}

#[derive(B)]
struct A;

fn main() {
    let a = A;
    a.b();
}

To build:

rustc && rustc -L .

When building with Cargo, the macro crate must include proc-macro = true in its Cargo.toml.

Note that token-based procedural macros are a lower-level feature than the old syntax extensions. The expectation is that authors will not manipulate the tokens directly (as we do in the examples, to keep things short), but use third-party libraries such as Syn or Aster. It is early days for library support as well as language support, so there might be some wrinkles to iron out.

To see more complete examples, check out derive(new) or serde-derive.


As mentioned above, we intend for macros 1.1 custom derive to become stable as quickly as possible. We have just entered FCP on the tracking issue, so this feature could be in the stable compiler in as little as 12 weeks. Of course we want to make sure we get enough experience of the feature in libraries, and to fix some bugs and rough edges, before stabilisation. You can track progress in the tracking issue. The old custom derive feature is in FCP for deprecation and will be removed in the near-ish future.

Token-based syntax extensions

If you are already a procedural macro author using the syntax extension mechanism, you might be interested to try out token-based syntax extensions. These are new-style procedural macros with a tokens -> tokens signature, but which use the existing syntax extension infrastructure for declaring and using the macro. This will allow you to experiment with implementing procedural macros without changing the way your macros are used. It is very early days for this kind of macro (the RFC hasn't even been accepted yet) and there will be a lot of evolution from the current feature to the final one. Experimenting now will give you a chance to get a taste for the changes and to influence the long-term design.

To write such a macro, you must use crates which are part of the compiler and thus will always be unstable. Eventually you won't have to do this and we'll be on the path to stabilisation.

Procedural macros are functions and return a TokenStream just like macros 1.1 custom derive (note that it's actually a different TokenStream implementation, but that will change). Function-like macros have a single TokenStream as input and attribute-like macros take two (one for the annotated item and one for the arguments to the macro). Macro functions must be registered with a plugin_registrar.

To use a macro, you use #![plugin(foo)] to import a macro crate called foo. You can then use the macros using #[bar] or bar!(...) syntax.


Macro crate (

#![feature(plugin, plugin_registrar, rustc_private)]
#![crate_type = "dylib"]

extern crate proc_macro_plugin;
extern crate rustc_plugin;
extern crate syntax;

use proc_macro_plugin::prelude::*;
use syntax::ext::proc_macro_shim::prelude::*;

use rustc_plugin::Registry;
use syntax::ext::base::SyntaxExtension;

#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
    reg.register_syntax_extension(token::intern("foo"),
                                  SyntaxExtension::AttrProcMacro(Box::new(foo_impl)));
    reg.register_syntax_extension(token::intern("bar"),
                                  SyntaxExtension::ProcMacro(Box::new(bar)));
}

fn foo_impl(_attr: TokenStream, item: TokenStream) -> TokenStream {
    let _source = item.to_string();
    lex("fn f() { println!(\"Good bye!\"); }")
}

fn bar(_args: TokenStream) -> TokenStream {
    lex("println!(\"Hello!\");")
}

Client crate (

#![feature(plugin, custom_attribute)]
#![plugin(foo)]

#[foo]
fn f() {
    println!("Hello world!");
}

fn main() {
    f();
    bar!();
}

To build:

rustc && rustc -L .

Stability

There is a lot of work still to do, stabilisation is going to be a long haul. Declaring and importing macros should end up very similar to custom derive with macros 1.1 - no plugin registrar. We expect to support full modularisation too. We need to provide, and then iterate on, the library functionality that is available to macro authors from the compiler. We need to implement a comprehensive hygiene scheme. We then need to gain experience and confidence with the system, and probably write some more RFCs.

However! The basic concept of tokens -> tokens macros will remain. So even though the infrastructure for building and declaring macros will change, the macro definitions themselves should be relatively future proof. Mostly, macros will just get easier to write (so less reliance on external libraries, or those libraries can get more efficient) and potentially more powerful.

We intend to deprecate and remove the MultiModifier and MultiDecorator forms of syntax extension. It is likely there will be a long-ish deprecation period to give macro authors opportunity to move to the new system.

Declarative macros

This post has been focused on procedural macros, but we also have plans for declarative macros. However, since these are stable and mostly work, those plans are lower priority and longer-term. The current idea is that there will be a new kind of declarative macro (possibly declared using macro! rather than macro_rules!); macro_rules macros will continue working with no breaking changes. The new declarative macros will be different, but we hope to keep them mostly backwards compatible with existing macros. Expect improvements to naming and modularisation, hygiene, and declaration syntax.


Thanks to Alex Crichton for driving, designing, and implementing (which, in his usual fashion, was done with eye-watering speed) the macros 1.1 system; Jeffrey Seyfried for making some huge improvements to the compiler and macro system to facilitate the new macro designs; Cameron Swords for implementing a bunch of the TokenStream and procedural macros work; Erick Tryzelaar, David Tolnay, and Sean Griffin for updating Serde and Diesel to use custom derive, and providing valuable feedback on the designs; and to everyone who has contributed feedback and experience as the designs have progressed.


This post was also posted on users.r-l.o, if you want to comment or discuss, please do so there.


Daniel Pocock: DVD-based Clean Room for PGP and PKI

Mozilla planet - mo, 10/10/2016 - 21:25

There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails.

There are also more home networks and small offices who require their own in-house Certificate Authority (CA) to issue TLS certificates for VPN users (e.g. StrongSWAN) or IP telephony.

Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian.

Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with.

Trying it out in VirtualBox

It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside.

At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL.

I've been trying it out with an SPR-532, one of the GnuPG-supported smartcard readers with a pin-pad and the OpenPGP card.

Ready to use today

More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog post from Simon Josefsson.

The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline.

Getting involved

To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools.

If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it.

One way to proceed may be to recruit an Outreachy or GSoC intern to develop the UI. Before they can get started, it would be necessary to more thoroughly document workflow requirements.


Joel Maher: Working towards a productive definition of “intermittent orange”

Mozilla planet - mo, 10/10/2016 - 20:00

Intermittent oranges (tests which fail sometimes and pass other times) are an ever-increasing problem with test automation at Mozilla.

While there are many common causes for failures (bad tests, the environment/infrastructure we run on, and bugs in the product), we still do not have a clear definition of what we view as intermittent. Some common statements I have heard:

  • It’s obvious, if it failed last year, the test is intermittent
  • If it failed 3 years ago, I don’t care, but if it failed 2 months ago, the test is intermittent
  • I fixed the test to not be intermittent, I verified by retriggering the job 20 times on try server

These imply very different definitions of what is intermittent. A definition will need to:

  • determine if we should take action on a test (programmatically or manually)
  • define policy that sheriffs and developers can use to guide work
  • guide developers to know when a new/fixed test is ready for production
  • provide useful data to release and Firefox product management about the quality of a release

Given that I wanted a clear definition of what we are working with, I looked over 6 months (2016-04-01 to 2016-10-01) of OrangeFactor data (7330 bugs, 250,000 failures) to find patterns and trends. I was surprised at how many bugs had <10 instances reported (3310 bugs, 45.1%). Likewise, I was surprised that such a small number of bugs (1236) accounts for >80% of the failures. It made sense to look at things daily, weekly, monthly, and every 6 weeks (our typical release cycle). After much slicing and dicing, I have come up with 4 buckets (sketched in code after the list):

  1. Random Orange: this test has failed, even multiple times in history, but in a given 6 week window we see <10 failures (45.2% of bugs)
  2. Low Frequency Orange: this test might fail up to 4 times in a given day, typically <=1 failure per day; in a 6 week window we see <60 failures (26.4% of bugs)
  3. Intermittent Orange: fails up to 10 times/day or <120 times in 6 weeks.  (11.5% of bugs)
  4. High Frequency Orange: fails >10 times/day many times and are often seen in try pushes.  (16.9% of bugs or 1236 bugs)
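
As a rough sketch of these thresholds in code (my own literal reading of the buckets above; the post doesn't prescribe an implementation, and the and/or between the per-day and per-window limits is a judgment call):

// Classify a bug by failure counts: maxPerDay is the highest
// single-day failure count in the window, total is the failure
// count over the whole 6-week window.
function classifyOrange(maxPerDay, total) {
  if (total < 10) return "Random Orange";
  if (maxPerDay <= 4 && total < 60) return "Low Frequency Orange";
  if (maxPerDay <= 10 && total < 120) return "Intermittent Orange";
  return "High Frequency Orange";
}

console.log(classifyOrange(1, 8));    // Random Orange
console.log(classifyOrange(3, 45));   // Low Frequency Orange
console.log(classifyOrange(25, 900)); // High Frequency Orange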

Alternatively, we could simplify our definitions and use:

  • low priority or not actionable (buckets 1 + 2)
  • high priority or actionable (buckets 3 + 4)

Does defining these buckets by the number of failures in a given time window help us with what we are trying to solve with the definition?

  • Determine if we should take action on a test (programmatically or manually):
    • ideally buckets 1/2 can be detected programmatically with autostar and removed from our view, possibly rerunning to validate it isn't a new failure
    • buckets 3/4 have the best chance of reproducing; we can run them in debuggers (like 'rr') or triage to the appropriate developer when we have enough information
  • Define policy that sheriffs and developers can use to guide work:
    • sheriffs can know when to file bugs (either bucket 2 or 3 as a starting point)
    • developers understand the severity based on the bucket. We will still need a lot of context, but understanding severity is important.
  • Guide developers to know when a new/fixed test is ready for production
    • If we fix a test, we want to ensure it is stable before we make it tier-1. A developer can use the math of ~300 commits/day and ensure we pass (see the back-of-the-envelope sketch after this list).
    • NOTE: SETA and coalescing ensure we don't run every test for every push, so we more likely see ~100 test runs/day
  • Provide useful data to release and Firefox product management about the quality of a release
    • Release Management can take the OrangeFactor into account
    • new features might be required to have a certain volume of tests <= Random Orange
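
To make that stability math concrete, here is a back-of-the-envelope sketch (my own arithmetic layered on the ~100 runs/day estimate above; the threshold comes from the buckets defined earlier):

// With ~100 test runs/day, staying in the Random Orange bucket
// (<10 failures per 6-week window) means a per-run failure rate
// below 10 / (42 * 100), i.e. roughly 0.24%.
var runsPerDay = 100;
var windowDays = 42; // one 6-week release cycle
var threshold = 10 / (windowDays * runsPerDay); // ~0.0024

// If a test still failed at exactly that rate, the chance that N
// consecutive retriggers all come up green is (1 - threshold)^N.
[20, 100, 1000].forEach(function(n) {
  var pAllGreen = Math.pow(1 - threshold, n);
  console.log(n + " green runs: " + (pAllGreen * 100).toFixed(1) +
              "% of threshold-rate tests would still look clean");
});
// 20 green retriggers on try (the habit quoted at the top of this
// post) would be passed ~95% of the time by a test failing right at
// the threshold -- so 20 retriggers proves very little on its own.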

One other way to look at this is what gets put into bugs (by the War on Orange Bugzilla robot). There are simple rules:

  • 15+ times/day – post a daily summary (bucket #4)
  • 5+ times/week – post a weekly summary (bucket #3/4 – about 40% of bucket 2 will show up here)

Lastly, I would like to cover some exceptions and how some might see this as flawed:

  • missing or incorrect data in orange factor (human error)
  • some issues have many bugs but a single root cause; we could miscategorize a fixable issue

I do not believe adjusting a definition will fix the above issues; possibly different tools or methods to run the tests would reduce the concerns there.


Air Mozilla: Mozilla Weekly Project Meeting, 10 Oct 2016

Mozilla planet - mo, 10/10/2016 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting


uProxy for Mozilla Firefox and Chrome allows access to the internet via Web Proxy - Windows Report

News collected via Google - mo, 10/10/2016 - 18:28

uProxy is a browser extension for both Mozilla Firefox and Google Chrome. This extension allows you to share your internet route with other users and eventually act as a “self-hosted” proxy server or a free VPN service. In other words, by using uProxy ...


Andreas Tolfsen: geckodriver 0.11.1 released

Mozilla planet - mo, 10/10/2016 - 16:38

Earlier today we released geckodriver version 0.11.1. geckodriver is an HTTP proxy for using W3C WebDriver-compatible clients to interact with Gecko-based browsers.

The program provides the HTTP API described by the WebDriver protocol to communicate with Gecko browsers, such as Firefox. It translates calls into the Marionette automation protocol by acting as a proxy between the local and remote ends.

Some highlighted changes include:

  • Commands for setting and getting the window position
  • Extension commands for finding an element’s anonymous children
  • A moz:firefoxOptions dictionary, akin to chromeOptions, that lets you configure binary path, arguments, preferences, and log options
  • Better profile support for officially branded Firefox builds

You should consult the full changelog for the complete list of notable changes.

You can fetch the latest builds, which for the first time include Linux and Windows 32-bit binaries.

One backwards-incompatible change to note is that the firefox_binary, firefox_args, and firefox_profile capabilities have all been removed in favour of the moz:firefoxOptions dictionary. Please consult the documentation on how to use it.

Sample usage:

{ "moz:firefoxOptions": { // select a custom firefox installation // and pass some arguments to it "binary": "/usr/local/firefox/firefox-bin", "args": ["--foo", "--bar"], // profile directory as a Base64 encoded string "profile": "…", // dictionary of preferences to set "prefs": { {"privacy.trackingprotection.enabled": true}, {"privacy.donottrackheader.enabled", true} }, // increase logging verbosity "log": { "level": "trace" } } }

Google and Mozilla Begin “Privacy Shaming” Users for Accepting Insecure ... - CoinTelegraph

News collected via Google - mo, 10/10/2016 - 15:10


Google and Mozilla Begin “Privacy Shaming” Users for Accepting Insecure Connections. Web browser giants Google and Mozilla have implemented practices encouraging users to take care of their online privacy and security in an ongoing shift towards data ...


WoSign announces restructuring and replaces CEO after Mozilla investigation - Tweakers

News collected via Google - mo, 10/10/2016 - 11:27

The Chinese company Qihoo 360, the largest shareholder of the certificate authorities WoSign and Startcom, says it is carrying out a restructuring of the companies. This is the result of an investigation by Mozilla into the authorities' ...


Wil Clouser: Test Pilot Q3 OKR Review

Mozilla planet - mo, 10/10/2016 - 09:00

For the third quarter of 2016 the Test Pilot team decided to try using the OKR method (an OKR overview) for our goal setting.

We all sat down in London and hashed out what direction we wanted to move in for Q3 and what we thought we could do in that timeframe, prioritized the goals, and then I published the results on the wiki. If you're interested in what Test Pilot did in Q3 you should read that link because it has a bunch of comments in it.

I knew we deprioritized some of our goals mid-quarter, but I was surprised to see us come up with a pretty modest .61. My takeaways from my first time using the OKR method are:

  • Wording is really important. Even if you all agree on some words while sitting around a table, look them over again the next day because they might not make as much sense as you think.

  • Getting the goals for your quarter planned before the quarter starts is tops.

  • Having a public list of goals you can point people to is great for your team, other teams you work with, and anyone in the community interested in your project.

  • Estimating how long things will take is still a Really Hard Problem.

The feedback I've received about the OKR process we followed has been really positive and I expect to continue it in the future.
