Planet Mozilla

Nick Cameron: Macros (and syntax extensions and compiler plugins) - where are we at?

ma, 10/10/2016 - 21:31

Procedural macros are one of the main reasons Rust programmers use nightly rather than stable Rust, and one of the few areas still causing breaking changes. Recently, part of the story around procedural macros has been coming together and here I'll explain what you can do today, and where we're going in the future.

TL;DR: as a procedural macro author, you're now able to write custom derive implementations which are on a fast track to stabilisation, and to experiment with the beginnings of our long-term plan for general purpose procedural macros. As a user of procedural macros, you'll soon be saying goodbye to bustage in procedural macro libraries caused by changes to compiler internals.

Macros today

Macros are an important part of Rust. They facilitate convenient and safe functionality used by all Rust programmers, such as println! and assert!; they reduce boilerplate, and make implementing traits trivial via derive. They also allow libraries to provide interesting and unusual abstractions.

However, macros are a rough corner - declarative macros (macro_rules macros) have their own system for modularisation, a fiddly syntax for declarations, and some odd rules around hygiene. Procedural macros (aka syntax extensions, compiler plugins) are unstable and painful to use. Despite that, they are used to implement some core parts of the ecosystem, including serialisation, and this causes a great deal of friction for Rust users who have to use nightly Rust or clunky build systems, and either way get hit with regular upstream breakage.

Future of procedural macros

We strongly want to improve this situation. Our current priority is procedural macros, and in particular the parts of the procedural macro system which force Rust users onto nightly or cause recurring upstream errors.

Our goal is an expressive, powerful, and well-designed system that is as stable as the rest of the language. Design work is ongoing in the RFC process. We have accepted RFCs on naming and on custom derive/macros 1.1; there are open RFCs on the overall design of procedural macros, attributes, etc., and probably several more to come, in particular about the libraries available to macro authors.

The future, today!

One of the core innovations to the procedural macro system is to base our macros on tokens rather than AST nodes. The AST is a compiler-internal data structure; it must change whenever we add new syntax to the compiler, and often changes even when we don't due to refactoring, etc. That means that macros based on the AST break whenever the compiler changes, i.e., with every new version. In contrast, tokens are mostly stable, and even when they must change, that change can easily be abstracted over.

We have begun the implementation of token-based macros and today you can experiment with them in two ways: by writing custom derive implementations using macros 1.1, and within the existing syntax extension framework. At the moment these two features are quite different, but as part of the stabilisation process they should become more similar to use, and share more of their implementations.

Even better for many users, popular macro-based libraries such as Serde are moving to the new macro system, and crates using these libraries should see fewer errors due to changes to the compiler. Soon, users should be able to use these libraries from stable Rust.

Custom derive (macros 1.1)

The derive attribute lets programmers implement common traits with minimal boilerplate, typically generating an impl based on the annotated data type. This can be used with Eq, Copy, Debug, and many other traits. These implementations of derive are built in to the compiler.
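As a quick illustration (the Point type here is just made up for this example), a couple of the built-in derives on a simple struct:

// Built-in derives: the compiler generates the trait impls from the shape of the type.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let p = Point { x: 1, y: 2 };
    // Both Debug ({:?}) and Clone come from the derived impls.
    println!("{:?}", p.clone());
}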

It would be useful for library authors to provide their own, custom derive implementations. This was previously facilitated by the custom_derive feature; however, that is unstable and the implementation is hacky. We now offer a new solution based on procedural macros (often called 'macros 1.1'; see the RFC and tracking issue) which we hope will be on a fast path to stabilisation.

The macros 1.1 solution offers the core token-based framework for declaring and using procedural macros (including a new crate type), but only a bare-bones set of features. In particular, even access to tokens is limited: the only stable API is one providing conversion to and from strings. Keeping the API surface small allows us to make a minimal commitment as we continue iterating on the design. Modularisation and hygiene are not covered; nevertheless, we believe that this API surface is sufficient for custom derive (as evidenced by the fact that Serde was easily ported over).

To write a macros 1.1 custom derive, you need only a function that takes and returns a proc_macro::TokenStream; you then annotate this function with an attribute containing the name of the derive. E.g., #[proc_macro_derive(Foo)] will enable #[derive(Foo)]. To convert between TokenStreams and strings, you use the to_string and parse functions.

There is a new kind of crate (alongside dylib, rlib, etc.) - a proc-macro crate. All macros 1.1 implementations must be in such a crate.

To use the custom derive, you import the macro crate in the usual way using extern crate, and annotate that statement with #[macro_use]. You can then use the derive name in derive attributes.


(These examples will need a pretty recent nightly compiler).

Macro crate:

#![feature(proc_macro, proc_macro_lib)]
#![crate_type = "proc-macro"]

extern crate proc_macro;

use proc_macro::TokenStream;

#[proc_macro_derive(B)]
pub fn derive(input: TokenStream) -> TokenStream {
    let input = input.to_string();
    format!("{}\n impl B for A {{ fn b(&self) {{}} }}", input).parse().unwrap()
}

Client crate:

#![feature(proc_macro)]

#[macro_use]
extern crate b;

trait B {
    fn b(&self);
}

#[derive(B)]
struct A;

fn main() {
    let a = A;
    a.b();
}

To build:

rustc && rustc -L .

When building with Cargo, the macro crate must include proc-macro = true in its Cargo.toml.
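For the macro crate in the example above, the Cargo.toml might look roughly like this (the package name and version are placeholders; the proc-macro = true line under [lib] is the essential part):

[package]
name = "b"
version = "0.1.0"

[lib]
proc-macro = true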

Note that token-based procedural macros are a lower-level feature than the old syntax extensions. The expectation is that authors will not manipulate the tokens directly (as we do in the examples, to keep things short), but use third-party libraries such as Syn or Aster. It is early days for library support as well as language support, so there might be some wrinkles to iron out.
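As a taste of why a parsing library helps, here is a sketch (same nightly features as the example above, with a hypothetical Hello trait defined by the client) that digs the type name out of the token string by hand; even this little bit of string munging is fragile, which is exactly the sort of work Syn takes care of:

#![feature(proc_macro, proc_macro_lib)]
#![crate_type = "proc-macro"]

extern crate proc_macro;

use proc_macro::TokenStream;

#[proc_macro_derive(Hello)]
pub fn derive_hello(input: TokenStream) -> TokenStream {
    let source = input.to_string();

    // Naively find the identifier following `struct` or `enum`; this breaks on
    // generics, attributes, comments, and so on - use a real parser instead.
    let name = source
        .split_whitespace()
        .skip_while(|word| *word != "struct" && *word != "enum")
        .nth(1)
        .map(|word| word.trim_matches(|c: char| !c.is_alphanumeric() && c != '_').to_string())
        .expect("could not find a struct or enum name");

    // As in the example above, echo the input back and append the generated impl.
    format!("{}\nimpl Hello for {} {{ fn hello(&self) {{ println!(\"hello from {}\"); }} }}",
            source, name, name)
        .parse()
        .unwrap()
}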

To see more complete examples, check out derive(new) or serde-derive.


As mentioned above, we intend for macros 1.1 custom derive to become stable as quickly as possible. We have just entered FCP on the tracking issue, so this feature could be in the stable compiler in as little as 12 weeks. Of course we want to make sure we get enough experience of the feature in libraries, and to fix some bugs and rough edges, before stabilisation. You can track progress in the tracking issue. The old custom derive feature is in FCP for deprecation and will be removed in the near-ish future.

Token-based syntax extensions

If you are already a procedural macro author using the syntax extension mechanism, you might be interested to try out token-based syntax extensions. These are new-style procedural macros with a tokens -> tokens signature, but which use the existing syntax extension infrastructure for declaring and using the macro. This will allow you to experiment with implementing procedural macros without changing the way your macros are used. It is very early days for this kind of macro (the RFC hasn't even been accepted yet) and there will be a lot of evolution from the current feature to the final one. Experimenting now will give you a chance to get a taste for the changes and to influence the long-term design.

To write such a macro, you must use crates which are part of the compiler and thus will always be unstable; eventually you won't have to do this, and we'll be on the path to stabilisation.

Procedural macros are functions that return a TokenStream, just like macros 1.1 custom derive (note that it's actually a different TokenStream implementation, but that will change). Function-like macros take a single TokenStream as input and attribute-like macros take two (one for the annotated item and one for the arguments to the macro). Macro functions must be registered with a plugin_registrar.

To use a macro, you use #![plugin(foo)] to import a macro crate called foo. You can then use the macros using #[bar] or bar!(...) syntax.


Macro crate:

#![feature(plugin, plugin_registrar, rustc_private)]
#![crate_type = "dylib"]

extern crate proc_macro_plugin;
extern crate rustc_plugin;
extern crate syntax;

use proc_macro_plugin::prelude::*;
use syntax::ext::proc_macro_shim::prelude::*;

use rustc_plugin::Registry;
use syntax::ext::base::SyntaxExtension;

#[plugin_registrar]
pub fn plugin_registrar(reg: &mut Registry) {
    reg.register_syntax_extension(token::intern("foo"),
        SyntaxExtension::AttrProcMacro(Box::new(foo_impl)));
    reg.register_syntax_extension(token::intern("bar"),
        SyntaxExtension::ProcMacro(Box::new(bar)));
}

fn foo_impl(_attr: TokenStream, item: TokenStream) -> TokenStream {
    let _source = item.to_string();
    lex("fn f() { println!(\"Good bye!\"); }")
}

fn bar(_args: TokenStream) -> TokenStream {
    lex("println!(\"Hello!\");")
}

Client crate:

#![feature(plugin, custom_attribute)]
#![plugin(foo)]

#[foo]
fn f() {
    println!("Hello world!");
}

fn main() {
    f();
    bar!();
}

To build:

rustc && rustc -L .

Stability

There is a lot of work still to do; stabilisation is going to be a long haul. Declaring and importing macros should end up very similar to custom derive with macros 1.1 - no plugin registrar. We expect to support full modularisation too. We need to provide, and then iterate on, the library functionality that is available to macro authors from the compiler. We need to implement a comprehensive hygiene scheme. We then need to gain experience and confidence with the system, and probably write some more RFCs.

However! The basic concept of tokens -> tokens macros will remain. So even though the infrastructure for building and declaring macros will change, the macro definitions themselves should be relatively future proof. Mostly, macros will just get easier to write (so less reliance on external libraries, or those libraries can get more efficient) and potentially more powerful.

We intend to deprecate and remove the MultiModifier and MultiDecorator forms of syntax extension. It is likely there will be a long-ish deprecation period to give macro authors opportunity to move to the new system.

Declarative macros

This post has been focused on procedural macros, but we also have plans for declarative macros. However, since these are stable and mostly work, these plans are lower priority and longer-term. The current idea is that there will be a new kind of declarative macro (possibly declared using macro! rather than macro_rules!); macro_rules macros will continue working with no breaking changes. The new declarative macros will be different, but we hope to keep them mostly backwards compatible with existing macros. Expect improvements to naming and modularisation, hygiene, and declaration syntax.


Thanks to Alex Crichton for driving, designing, and implementing (which, in his usual fashion, was done with eye-watering speed) the macros 1.1 system; Jeffrey Seyfried for making some huge improvements to the compiler and macro system to facilitate the new macro designs; Cameron Swords for implementing a bunch of the TokenStream and procedural macros work; Erick Tryzelaar, David Tolnay, and Sean Griffin for updating Serde and Diesel to use custom derive, and providing valuable feedback on the designs; and to everyone who has contributed feedback and experience as the designs have progressed.


This post was also posted on users.r-l.o, if you want to comment or discuss, please do so there.


Daniel Pocock: DVD-based Clean Room for PGP and PKI

ma, 10/10/2016 - 21:25

There is increasing interest in computer security these days and more and more people are using some form of PKI, whether it is signing Git tags, signing packages for a GNU/Linux distribution or just signing your emails.

There are also more home networks and small offices who require their own in-house Certificate Authority (CA) to issue TLS certificates for VPN users (e.g. StrongSWAN) or IP telephony.

Back in April, I started discussing the PGP Clean Room idea (debian-devel discussion and gnupg-users discussion), created a wiki page and started development of a script to build the clean room ISO using live-build on Debian.

Keeping the master keys completely offline and putting subkeys onto smart cards and other devices dramatically lowers the risk of mistakes and security breaches. Using a read-only DVD to operate the clean-room makes it convenient and harder to tamper with.

Trying it out in VirtualBox

It is fairly easy to clone the Git repository, run the script to create the ISO and boot it in VirtualBox to see what is inside:

At the moment, it contains a number of packages likely to be useful in a PKI clean room, including GnuPG, smartcard drivers, the lightweight pki utility from StrongSWAN and OpenSSL.

I've been trying it out with an SPR-532, one of the GnuPG-supported smartcard readers with a pin-pad and the OpenPGP card.

Ready to use today

More confident users will be able to build the ISO and use it immediately by operating all the utilities from the command line. For example, you should be able to fully configure PGP smart cards by following this blog from Simon Josefsson.

The ISO includes some useful scripts, for example, create-raid will quickly partition and RAID a set of SD cards to store your master key-pair offline.

Getting involved

To make PGP accessible to a wider user-base and more convenient for those who don't use GnuPG frequently enough to remember all the command line options, it would be interesting to create a GUI, possibly using python-newt to create a similar look-and-feel to popular text-based installer and system administration tools.

If you are keen on this project and would like to discuss it further, please come and join the new pki-clean-room mailing list and feel free to ask questions or share your thoughts about it.

One way to proceed may be to recruit an Outreachy or GSoC intern to develop the UI. Before they can get started, it would be necessary to more thoroughly document workflow requirements.


Joel Maher: Working towards a productive definition of “intermittent orange”

ma, 10/10/2016 - 20:00

Intermittent Oranges (tests which fail sometimes and pass other times) are an ever increasing problem with test automation at Mozilla.

While there are many common causes for failures (bad tests, the environment/infrastructure we run on, and bugs in the product), we still do not have a clear definition of what we view as intermittent.  Some common statements I have heard:

  • It’s obvious, if it failed last year, the test is intermittent
  • If it failed 3 years ago, I don’t care, but if it failed 2 months ago, the test is intermittent
  • I fixed the test to not be intermittent, I verified by retriggering the job 20 times on try server

These imply very different definitions of what is intermittent; a definition will need to:

  • determine if we should take action on a test (programmatically or manually)
  • define policy that sheriffs and developers can use to guide work
  • guide developers to know when a new/fixed test is ready for production
  • provide useful data to release and Firefox product management about the quality of a release

Given that I wanted to have a clear definition of what we are working with, I looked over 6 months (2016-04-01 to 2016-10-01) of OrangeFactor data (7330 bugs, 250,000 failures) to find patterns and trends.  I was surprised at how many bugs had <10 instances reported (3310 bugs, 45.1%).  Likewise, I was surprised that such a small number of bugs (1236) accounts for >80% of the failures.  It made sense to look at things daily, weekly, monthly, and every 6 weeks (our typical release cycle).  After much slicing and dicing, I have come up with 4 buckets (sketched in code after the list):

  1. Random Orange: this test has failed, even multiple times in history, but in a given 6 week window we see <10 failures (45.2% of bugs)
  2. Low Frequency Orange: this test might fail up to 4 times in a given day, typically <=1 failures for a day. in a 6 week window we see <60 failures (26.4% of bugs)
  3. Intermittent Orange: fails up to 10 times/day or <120 times in 6 weeks.  (11.5% of bugs)
  4. High Frequency Orange: fails >10 times/day many times and are often seen in try pushes.  (16.9% of bugs or 1236 bugs)
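To make the thresholds concrete, here is a small sketch (hypothetical code, not something we actually run) of a classifier over a 6-week window:

// Hypothetical classifier for the buckets above. `failures_in_window` is the
// total failure count over a 6-week window; `max_failures_per_day` is the
// worst single day in that window.
#[derive(Debug, PartialEq)]
enum Bucket {
    Random,        // <10 failures in 6 weeks
    LowFrequency,  // <60 failures in 6 weeks, at most a few per day
    Intermittent,  // <120 failures in 6 weeks, up to ~10 per day
    HighFrequency, // everything above that
}

fn classify(failures_in_window: u32, max_failures_per_day: u32) -> Bucket {
    if failures_in_window < 10 {
        Bucket::Random
    } else if failures_in_window < 60 && max_failures_per_day <= 4 {
        Bucket::LowFrequency
    } else if failures_in_window < 120 && max_failures_per_day <= 10 {
        Bucket::Intermittent
    } else {
        Bucket::HighFrequency
    }
}

fn main() {
    // 8 failures in 6 weeks is just a Random Orange.
    assert_eq!(classify(8, 2), Bucket::Random);
    // ~100 failures with a worst day of 9 lands in Intermittent.
    assert_eq!(classify(100, 9), Bucket::Intermittent);
}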

Alternatively, we could simplify our definitions and use:

  • low priority or not actionable (buckets 1 + 2)
  • high priority or actionable (buckets 3 + 4)

Does defining these buckets in terms of the number of failures in a given time window help us with what we are trying to solve with the definition?

  • Determine if we should take action on a test (programatically or manually):
    • ideally buckets 1/2 can be detected programmatically with autostar and removed from our view.  Possibly rerunning to validate it isn’t a new failure.
    • buckets 3/4 have the best chance of reproducing, we can run in debuggers (like ‘rr’), or triage to the appropriate developer when we have enough information
  • Define policy that sheriffs and developers can use to guide work
    • sheriffs can know when to file bugs (either buckets 2 or 3 as a starting point)
    • developers understand the severity based on the bucket.  Ideally we will need a lot of context, but understanding severity is important.
  • Guide developers to know when a new/fixed test is ready for production
    • If we fix a test, we want to ensure it is stable before we make it tier-1.  A developer can do the math with ~300 commits/day and ensure the test passes.
    • NOTE: SETA and coalescing ensures we don’t run every test for every push, so we see more likely 100 test runs/day
  • Provide useful data to release and Firefox product management about the quality of a release
    • Release Management can take the OrangeFactor into account
    • new features might be required to have certain volume of tests <= Random Orange

One other way to look at this is what gets put in bugs (the War on Orange Bugzilla robot). There are simple rules:

  • 15+ times/day – post a daily summary (bucket #4)
  • 5+ times/week – post a weekly summary (bucket #3/4 – about 40% of bucket 2 will show up here)

Lastly I would like to cover some exceptions and how some might see this as flawed:

  • missing or incorrect data in orange factor (human error)
  • some issues have many bugs, but a single root cause; we could miscategorize a fixable issue

I do not believe adjusting a definition will fix the above issues; possibly different tools or methods to run the tests would reduce the concerns there.


Air Mozilla: Mozilla Weekly Project Meeting, 10 Oct 2016

ma, 10/10/2016 - 20:00

Mozilla Weekly Project Meeting. The Monday Project Meeting.


Andreas Tolfsen: geckodriver 0.11.1 released

ma, 10/10/2016 - 16:38

Earlier today we released geckodriver version 0.11.1. geckodriver is an HTTP proxy for using W3C WebDriver-compatible clients to interact with Gecko-based browsers.

The program provides the HTTP API described by the WebDriver protocol to communicate with Gecko browsers, such as Firefox. It translates calls into the Marionette automation protocol by acting as a proxy between the local- and remote ends.

Some highlighted changes include:

  • Commands for setting- and getting the window position
  • Extension commands for finding an element’s anonymous children
  • A moz:firefoxOptions dictionary, akin to chromeOptions, that lets you configure binary path, arguments, preferences, and log options
  • Better profile support for officially branded Firefox builds

You should consult the full changelog for the complete list of notable changes.

You can fetch the latest builds, which for the first time include Linux and Windows 32-bit binaries.

One backwards-incompatible change to note is that the firefox_binary, firefox_args, and firefox_profile capabilities have all been removed in favour of the moz:firefoxOptions dictionary. Please consult the documentation on how to use it.

Sample usage:

{
  "moz:firefoxOptions": {
    // select a custom firefox installation
    // and pass some arguments to it
    "binary": "/usr/local/firefox/firefox-bin",
    "args": ["--foo", "--bar"],

    // profile directory as a Base64 encoded string
    "profile": "…",

    // dictionary of preferences to set
    "prefs": {
      "privacy.trackingprotection.enabled": true,
      "privacy.donottrackheader.enabled": true
    },

    // increase logging verbosity
    "log": { "level": "trace" }
  }
}

Wil Clouser: Test Pilot Q3 OKR Review

ma, 10/10/2016 - 09:00

For the third quarter of 2016 the Test Pilot team decided to try using the OKR method (an OKR overview) for our goal setting.

We all sat down in London and hashed out what direction we wanted to move in for Q3, what we thought we could do in that timeframe, prioritized the results, and then I published them on the wiki. If you're interested in what Test Pilot did in Q3 you should read that link because it has a bunch of comments in it.

I knew we deprioritized some of our goals mid-quarter, but I was surprised to see us come up with a pretty modest .61. My takeaways from my first time using the OKR method are:

  • Wording is really important. Even if you all agree on some words while sitting around a table, look them over again the next day because they might not make as much sense as you think.

  • Getting the goals for your quarter planned before the quarter starts is tops.

  • Having a public list of goals you can point people to is great for your team, other teams you work with, and anyone in the community interested in your project.

  • Estimating how long things will take you is still a Really Hard Problem.

The feedback I've received about the OKR process we followed has been really positive and I expect to continue it in the future.


Robert O'Callahan: rr Paper: "Lightweight User-Space Record And Replay"

ma, 10/10/2016 - 03:02

Earlier this year we submitted the paper Lightweight User-Space Record And Replay to an academic conference. Reviews were all over the map, but ultimately the paper was rejected, mainly on the contention that most of the key techniques have (individually) been presented in other papers. Anyway, it's probably the best introduction to how rr works and how it performs that we currently have, so I want to make it available now in the hope that it's interesting to people.

Update The paper is now available on arXiv.


Jeff Walden: Quotes of the day

ma, 10/10/2016 - 02:54

Of all tyrannies, a tyranny sincerely exercised for the good of its victims may be the most oppressive. It would be better to live under robber barons than under omnipotent moral busybodies. The robber baron’s cruelty may sometimes sleep, his cupidity may at some point be satiated; but those who torment us for our own good will torment us without end for they do so with the approval of their own conscience…. To be ‘cured’ against one’s will and cured of states which we may not regard as disease is to be put on a level of those who have not yet reached the age of reason or those who never will; to be classed with infants, imbeciles, and domestic animals.

C. S. Lewis, The Humanitarian Theory of Punishment

Frequently a [proposition] will [present itself], so to speak, in sheep’s clothing: [its undesirable consequences are] not immediately evident, and must be discerned by a careful and perceptive analysis. But this wolf comes as a wolf.

Scalia, J. dissenting in Morrison v. Olson

Cameron Kaiser: The Xserves still lurk

ma, 10/10/2016 - 02:38
Xserves, the last true Apple server systems, apparently still lurk in dark corners in data centres near you. But for me, the quintessential Apple server line will always be the Apple Network Servers.

Cameron Kaiser: A Saturday mystery, or, locatedb considered harmful to old Macs

za, 08/10/2016 - 22:02
I've been waist-deep on AltiVec intrinsics for the last week converting some of those big inverse discrete cosine and Hadamard transforms for TenFourFox's vectorized PowerPC VP9 codec. The little ones cause a noticeable but minor improvement, but when I got the first large transform done there was a big jump in performance on this quad G5. Note that the G5, even though its vector unit is based on the 7400 and therefore weaker than the 7450's, likes long strings of sequential code it can reorder, which is essentially what that huge clot of vector intrinsics is, so I have not yet determined if I've just optimized it well for the G5 or it's generalizeable to the G4 too. My theory is that even though the improvement ratio is about the same (somewhere between 4:1 and 8:1 depending on how much data they swallow per cycle), these huge vectorized inverse transforms accelerate code that takes a lot of CPU time ordinarily, so it's a bigger absolute reduction. I'm going to work on a couple more this weekend and see if I can get even more money out of it. 720p playback is still out of the question even with the Quad at full tilt, but 360p windowed is pretty smooth and even 360p fullscreen (upscaled to 1080p) and 480p windowed can keep up, and it buffers a lot quicker.

The other thing I did was to eliminate some inefficiencies in the CoreGraphics glue we use for rendering pretty much everything (there is no Skia support on 10.4) except the residual Cairo backend that handles printing. In particular, I overhauled our blend and composite mode code so that it avoids a function call on every draw operation. This is a marginal speedup but it makes some types of complex animation much smoother.

Overall I'm pretty happy with this and no one has reported any issues with the little-endian typed array switchover, so I'll make a second beta release sometime next week hopefully. MSE will still be off by default in that build but unless I hear different or some critical showstopper crops up it will be switched on for the final public release.

When I sat down at my G5 this warm Southern California Saturday morning, however, I noticed that MenuMeters (a great tool to have if you don't already) showed the Quad was already rather occupied. This wasn't a new thing; I'd seen what I assumed was a stuck cron job or something for the last several Saturday mornings and killed it in the Activity Monitor. But this was the sixth week in a row it had happened and it looked like it had been running for over three hours wasting CPU time, so enough was enough.

The offending process was something running /usr/bin/find to find, well, everything (that wasn't in /tmp or /usr/tmp), essentially iterating over the whole damn filesystem. A couple of ps -wwjp (What Would Jesus Post?) later showed it was being kicked off as part of the update system for an old Unix dragon of yore, locate.

There are no less than three possible ways to find files from the command line in OS X macOS. One is the venerable find command, which is the slowest of the lot (it uses no cache) and the predicates can be somewhat confusing to novices, but is guaranteed to be up-to-date because it doesn't rely on a pre-existing database and will find nearly anything. The second is of course Spotlight, which is accessible from the Terminal using the mdfind command. There are man pages for both.

The third way is locate, which is easier than find and faster because it uses a database for quick lookups, but less comprehensive than Spotlight/mdfind because it only looks for filenames instead of within file content as well, and the updater has to run periodically to stay current. (There's a man page for it too.) It would seem that Spotlight could completely supersede locate, and Apple thinks so too, because it was turned into a launchd .plist in 10.6 (look at /System/Library/LaunchDaemons/) and disabled by default. That's not the case for 10.5 and previous, however, and I have so many files on my G5 by now that the runtime to update the locate database is now close to five hours -- on an SSD! And that's why it was still running when I sat down to do work after breakfast.

I don't use locate because Spotlight is more convenient and updates practically on demand instead of weekly. If you don't either, then niced or not it's wasted effort and you should disable it from running within your Mac's periodic weekly maintenance tasks. (Note: on 10.3 and earlier, since you don't have Spotlight, you may not want to do this unless locate's update process is tying up your machine also.) Here's how:

  • On 10.5, the weekly periodic script can be told specifically not to run locate.updatedb. Edit /etc/defaults/periodic.conf as root (such as sudo vi /etc/defaults/periodic.conf -- you did fix the sudo bug, right?) and set weekly_locate_enable to "NO".

  • On 10.4 and before (I checked this on my 10.2.8 strawberry iMac G3 as well, so I'm sure 10.3 is the same), the weekly script doesn't offer this option. However, it does check to see if locate.updatedb is executable before it runs it, so simply make it non-executable: sudo chmod -x /usr/libexec/locate.updatedb

Now for some 8-Bit Weapon ambient (de)programming with a much more sedate G5 into the rest of the weekend.

Hub Figuière: Rust and Automake

za, 08/10/2016 - 04:01

But why automake? Cargo is nice.

Yes it is. But it is also limited to building the Rust crate. It does one thing, very well, and easily.

I'm writing a GNOME application, though, and this needs more than building the code. So I decided to wrap the build process into automake.

Let's start with Autoconf for Rust Project. This post is a great introduction to solving the problem and gives an actual example of doing it, even though the author just uses autoconf. I need automake too, but this is a good start.

We'll basically write a configure.ac and a Makefile.am in the top-level Rust crate directory.

AC_INIT([gpsami],
        m4_esyscmd([grep ^version Cargo.toml | awk '{print $3}' | tr -d '"' | tr -d "\n"]),
        [])
AM_INIT_AUTOMAKE([1.11 foreign no-dependencies no-dist-gzip dist-xz subdir-objects])

Let's init autoconf and automake. We use the options: foreign to not require all the GNU files, no-dependencies because we don't have dependency tracking done by make (cargo does that for us), and subdir-objects because we have only one Makefile and don't want recursive mode.

The m4_esyscmd macro is a shell command to extract the version out of the Cargo.toml.

VERSION=$(grep ^version Cargo.toml | awk '{print $3}' | tr -d '"' | tr -d "\n")

This does the same as above, but puts it into VERSION.

This shell command was adapted from Autoconf for Rust Project but fixed as it was being greedy and also captured the "version" strings from the dependencies.

AC_CHECK_PROG(CARGO, [cargo], [yes], [no])
AS_IF(test x$CARGO = xno,
    AC_MSG_ERROR([cargo is required])
)
AC_CHECK_PROG(RUSTC, [rustc], [yes], [no])
AS_IF(test x$RUSTC = xno,
    AC_MSG_ERROR([rustc is required])
)

Check for cargo and rustc. I'm pretty sure without rustc you don't have cargo, but better be safe than sorry. Note that this is considered a fatal error at configure time.

dnl Release build we do.
CARGO_TARGET_DIR=release
AC_SUBST(CARGO_TARGET_DIR)

This is a trick: we need the cargo target directory. We hardcode to release as that's what we want to build.

The end is pretty much standard.

So far just a few tricks.

desktop_files = data/gpsami.desktop
desktopdir = $(datadir)/applications
desktop_DATA = $(desktop_files)

ui_files = src/mgwindow.ui \
	$(null)

Just some basic declarations in the Makefile.am: the desktop file with its installation target, and the ui_files. Note that at the moment the ui files are not installed because we inline them in Rust.

EXTRA_DIST = Cargo.toml \
	src/devices.json \
	src/ \
	src/ \
	src/ \
	src/ \
	src/ \
	src/ \
	$(ui_files) \
	$(desktop_in_files) \
	$(null)

We want to distribute the source files and the desktop files. This will get more complex when the crate grows as we'll need to add more files to here.

all-local:
	cargo build --release

clean-local:
	-cargo clean

Drive build and clean targets with cargo.

install-exec-local:
	$(MKDIR_P) $(DESTDIR)$(bindir)
	$(INSTALL) -c -m 755 target/@CARGO_TARGET_DIR@/gpsami $(DESTDIR)$(bindir)

We have to install the binary by hand. That's one of the drawbacks of cargo.

With this, we do:

$ autoreconf -si
$ ./configure
$ make
# make install

This builds in release mode and installs it in the prefix. You can even run make dist, which is another of the reasons why I wanted to do this.

Caveats: I know this will not work if we build in a different directory than the source directory. make distcheck fails for that reason.

I'm sure there are ways to improve this, and I probably will, but I wanted to give a recipe for something I wanted to do.


Karl Dubost: [worklog] Edition 039. kill two images with one python

vr, 07/10/2016 - 16:55

Born in France, having lived/living in Canada and Japan, I usually prefer the international news pages as my source of information about the world. But when I read about the non-comical farce and quite disheartening run for the 2016 USA presidency, I'm dumbfounded. Quick, poetry and imagination! Tune of the week: Ol' Man River - William Warfield

Webcompat Life

Progress this week:

Today: 2016-10-11T07:15:20.170216
363 open issues
----------------------
needsinfo       19
needsdiagnosis 123
needscontact    12
contactready    20
sitewait       170
----------------------

You are welcome to participate

I'll be speaking in Jakarta, Indonesia for Tech in Asia 2016 on November 16.

Preparing a brownbag for Taipei's office.

Webcompat issues

(a selection of some of the bugs worked on this week)

Webcompat development

Reading List
  • will-change used too often. The thing I found interesting in this article is that it is written entirely from the perspective of Chrome without any tests in other browsers. This is one of the issues with the way some people think about the Web. Firefox is sending a warning in the console when people over-use will-change. fwiw the code provided in the article works very well in Firefox/Gecko and Safari/WebKit (testing in Edge would be cool too) and indeed shows blurriness in Blink (Opera and Chrome).

    The will-change spec doesn't really specify implementation details which means that Chrome's new behavior may be completely unique; Firefox might do something different, and then there's Edge, Safari, Opera, Android, etc. Perhaps Chrome requires that developers toggle back-and-forth to maintain clarity, but what if Firefox interprets that differently, or imposes a big performance penalty when doing the same thing? What if developers must resort to various [potentially conflicting] hacks for each browser, bloating their code and causing all sorts of headaches. We may have to resort to user agent sniffing again (did you just throw up a little in your mouth?). This will-change property that was intended to SOLVE problems for animators may end up doing the opposite.

Follow Your Nose

TODO
  • Document how to write tests on using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.



Gian-Carlo Pascutto: Firefox sandbox on Linux tightened

vr, 07/10/2016 - 16:43
As just announced on, we landed a set of changes in today's Nightly that tightens our sandboxing on Linux. The content process, which is the part of Firefox that renders webpages and executes any JavaScript on them, had been previously restricted in the amount of system calls that it could access. As of today, it no longer has write access to the filesystem, barring an exception for shared memory and /tmp. We plan to also remove the latter, eventually.

As promised, we're continuing to batten down the hatches gradually, making it harder for an attacker to successfully exploit Firefox. The changes that landed this night are an important step, but far from the end of the road, and we'll be continuing to put out iterative improvements.

Some of our next steps will be to address the interactions with the X11 windowing system, as well as implementing read restrictions.

Myk Melez: SpiderNode In Positron

vr, 07/10/2016 - 08:35

Last Friday, Brendan Dahl landed SpiderNode integration into Positron. Now, when you run an Electron app in Positron, the app’s main script runs in a JavaScript context that includes the Node module loader and the core Node modules.

The hello-world-server test app demonstrates an Electron BrowserWindow connecting to a Node HTTP server started by the main script. It’s similar to the hello-world test app (a.k.a. the Electron Quick Start app), with this added code to create the HTTP server:

// Load the http module to create an HTTP server.
var http = require('http');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World from node " + process.versions.node + "\n");
});

The main script then loads a page from the server in an Electron BrowserWindow:

const electron = require('electron');
const app = electron.app;  // Module to control application life.
const BrowserWindow = electron.BrowserWindow;  // Module to create native browser window.
…
var mainWindow = null;
…
// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
app.on('ready', function() {
  // Create the browser window.
  mainWindow = new BrowserWindow({width: 800, height: 600});

  // and load the index.html of the app.
  mainWindow.loadURL('http://localhost:8000');
  …
});

Which results in:

Hello World Server Demo

The simplicity of that screenshot belies the complexity of the implementation! It requires SpiderNode, which depends on SpiderShim (based on ChakraShim from node-chakracore). And it must expose Node APIs to the existing JavaScript context created by the Positron app runner while also synchronizing the SpiderMonkey and libuv event loops.

It’s also the first example of using SpiderNode in a real-world (albeit experimental) project, which is an achievement for that effort and a credit to its principal contributors, especially Ehsan Akhgari, Trevor Saunders, and Brendan himself.

Try it out for yourself:

git clone
cd positron
git submodule update --init
MOZCONFIG=positron/config/mozconfig ./mach build
./mach run positron/test/hello-world-server/

Or, for a more interesting example, run the basic browser app:

git clone
./mach run electron-sample-apps/webview/browser/

(Note: Positron now works on Mac and Linux but not Windows, as SpiderNode doesn’t yet build there.)


Mozilla Cloud Services Blog: Device management coming to your Firefox Account

do, 06/10/2016 - 22:37

Today we are beginning a phased roll out of a new account management feature to Firefox Accounts users. This new feature aims to give users a clear overview of all services attached to the account, and provide our users with full control over their synced devices.

With the new “Devices” panel in your Firefox Accounts settings, you will be able to manage all your devices that use Firefox Sync. The devices section shows all connected Firefox clients on Desktop, iOS and Android, making it an excellent addition to those who use Firefox Sync on multiple devices. Use the “Disconnect” button to get rid of the devices that you don’t want to sync.

This feature will be made available to all users soon and we have a lot more planned to make account management easier for everyone. Here’s what the first version of the devices view looks like:

To stay organized you can easily rename your device in the Sync Preferences using the “Device Name” panel:

Updating Device Name

Thanks to everyone who worked on this feature: Phil Booth, Jon Buckley, Vijay Budhram, Alex Davis, Ryan Feeley, Vlad Filippov, Mark Hammond, Ryan Kelly, Sean McArthur, John Morrison, Edouard Oger, Shane Tomlinson. Special thanks to developers on the mobile teams that helped with device registration: Nick Alexander, Michael Comella, Stephan Leroux and Brian Nicholson.

If you want to get involved with the Firefox Accounts open source project please visit: Make sure to visit the Firefox Accounts settings page in the coming weeks to take more control over your devices!


Air Mozilla: Reps Weekly Meeting Oct. 6, 2016

do, 06/10/2016 - 18:00

Reps Weekly Meeting Oct. 6, 2016. This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Doug Belshaw: Web Literacy badges in GitHub

do, 06/10/2016 - 14:25

I’m delighted to see that Mozilla have worked with Digitalme to create new Open Badges based on their Web Literacy Map. Not only that, but the badges themselves are platform-agnostic, with a GitHub repository containing the details that can be used under an open license.

Web Literacy Badges

As Matt Rogers from Digitalme documents in A journey into the open, there’s several levels to working open:

In a recent collaboration with the Mozilla Learning team – I got to understand how I can take our work to the next level of openness. Creating publicly available badge projects is one thing, but it’s another when they’re confined to one platform – even if that is your own. What truly makes a badge project open is its ability to be taken, maybe remixed, and utilised anywhere across the web. Be that on a different badging platform, or via a completely different delivery means entirely.

This is exactly the right path for the Web Literacy work and Open Badges work at Mozilla. It’s along the same lines as something I tried to achieve during my time as Web Literacy Lead there. Now, however, it seems like they’ve got the funding and capacity to get on with it. A big hats-off to Digitalme for wrangling this, and taking the hard, but extremely beneficial and enlightening steps towards working (even) more openly.

If you’re interested in working more openly, why not get in touch with the co-operative I’m part of, We Are Open?

Comments? Questions? I’m @dajbelshaw on Twitter, or you can email me:


Andreas Tolfsen: What’s cooking in Marionette

do, 06/10/2016 - 12:17

A few weeks ago I sent out an email to the Marionette mailing list about the topics that have been cooking in Marionette. I intended to make that a semi-regular occurrence, but it was interrupted by travel to TPAC for specification meetings.

The following list enumerates all changes I have deemed relevant to public consumers from Marionette, geckodriver, and webdriver-rust.

Of course many more changes have happened to the (internal) Marionette test harness and there have been several more commits of janitorial nature, but the intention here is to distil the bulk of information into a format that is useful to ① people not actively engaged in development, and ② for us to keep track of what we have in the pipeline for the upcoming release.

I suspect it will also be useful as a reference when triaging new issue reports.

Avoid CPOW when setting file array on <input type=file> (Marionette: status-firefox49 fix-optional, tracking-firefox50 -, status-firefox50 fix-optional, tracking-firefox51 ?, status-firefox51 fixed)

There is no longer any need to disable safe CPOW checks when using the latest Nightly. Still I would recommend keeping the preference until the fix has ridden all trains to stable.

The checks are disabled by setting dom.ipc.cpows.forbid-unsafe-from-browser to false. You can enable the checks by removing this preference from the startup profile.

There is a hope to uplift this to Beta once it has been verified to work well there.

DOM events not fired on interacting with <input type=file> element (Marionette: status-firefox49 wontfix, status-firefox50 fixed, status-firefox51 fixed)

Marionette will now dispatch the correct DOM events when interacting with <input type=file> elements.

Wrong element is clicked when requested element is out of view in <select> element (Marionette: status-firefox49 fixed, status-firefox50 fixed, status-firefox51 fixed)

Marionette now supports interaction with <select> and <select multiple> elements.

Allow quitApplication to accept no parameters (Marionette: status-firefox50 fixed, status-firefox51 fixed)

We patched the quitApplication command to optionally take flags, so that one does not have to supply any by default.

It has been uplifted to Firefox 50, which means we will soon be able to remove the explicit eForceQuit usage from geckodriver.

Cross-compile on win32 using Docker image from port-of-rust (geckodriver: tracking 0.11.0)

The Windows 32-bit compilation issue we had with Rust was solved by using a Docker image with the right dependencies set up.

Switch to building with Rust beta (geckodriver: tracking 0.11.0)

We should probably downgrade this to stable once the relevant fixes make their way there.

Add extension command for finding anonymous nodes (geckodriver: tracking 0.11.0)

XBL has the concept of anonymous nodes that are not returned by the usual WebDriver element-finding methods. However there are two Gecko-specific methods of finding them; either by getting all the anonymous children of a reference element, or getting a single anonymous child of a reference element with specified attribute values.

This commit adds two endpoints corresponding to those methods:


Return all anonymous children.


Return an anonymous element with the given attribute value, provided as a body of the form:

{ "name": <attribute name>, "value": <attribute value> }
Set log verbosity from capability (geckodriver: tracking 0.11.0)

Using the capability firefoxOptions.log.level (full usage is described in the documentation) it's possible to set the log level of both Firefox and geckodriver itself. This will become useful as most of our users have trouble figuring out how to start geckodriver with the -vv flag in order to enable very verbose logging. From 0.11 we can recommend that users add this to their capabilities instead.

In the same vein, there is an open PR to replace the default log dependency with slog, as env_logger does not allow us to reinstantiate it at runtime. This will cause issues when providing a different log level in the capabilities for subsequent sessions.

Add prefs capability to firefoxOptions (geckodriver: tracking 0.11.0)

This takes the form of a dictionary of {pref_name: pref_value}. These prefs are applied after the default prefs, but before those required to enable Marionette.

Disable additional welcome URL (geckodriver: tracking 0.11.0)

We also disabled the additional welcome URL in geckodriver which has caused officially branded builds to open two new tabs when started with a clean profile. This doesn’t reproduce in Nightly builds as it does not have it set by default. As the additional welcome page uses a plugin that forks and starts the plugin container process, deleting a session whilst on this site sometimes causes that process to crash. However, we still haven’t pinned down the cause of the underlying issue which is tracked in issue 225.

Log listening host and port when starting geckodriver (geckodriver: tracking 0.11.0)

Propagate webdriver::server::start errors and display them in geckodriver without panicking (geckodriver: tracking 0.11.0)

Add --webdriver-port argument back as a hidden alias (geckodriver: tracking 0.11.0)

This is a hidden and deprecated flag, and we do not recommend using it. The intention is to make the transition somewhat easier for some of our users.

Incrementally improve the UI (geckodriver: tracking 0.11.0)

Copying information is currently included in --help and this patch makes it only appear when --version is invoked.

Furthermore, the error messages that are printed on invalid input are made consistent.

Add Get Window Position and Set Window Position commands (webdriver-rust: tracking 0.15.0)

Commands are added to the WebDriver specification in w3c/webdriver#307, but still remain to be exposed in geckodriver.

Return early using try!() instead of unwrapping errors (webdriver-rust: tracking 0.15.0)

Return hyper::server::Listening so user can access socket address (webdriver-rust: fixed 0.14.0)

Correct error type when starting second session (webdriver-rust: fixed 0.14.0)

Originally sent to the mailing list.



Mozilla Addons Blog: Friend of Add-ons: Atique Ahmed Ziad

do, 06/10/2016 - 02:36

Please meet our newest Friend of Add-ons: Atique Ahmed Ziad, who in September alone closed 14 AMO bugs!

Atique is primarily interested in front-end work. His recent contributions mostly focused on RTL language bugs; however, it was this complex error messaging bug that proved the most challenging because it forced Atique to grapple with complex code.

Beyond crushing bugs, Atique also helps organize Activate campaigns.

When he’s not busy being a tireless champion for the open Web, Atique likes to unwind by taking in a good movie or playing video games; and while he’s one of the nicest guys you’ll ever meet, you’ll want to avoid him in the virtual settings of Grand Theft Auto and Call of Duty.

On behalf of the AMO staff and community, thank you for all of your great work, Atique!

Do you contribute to AMO in some way? If so, don’t forget to add your contributions to our Recognition page!
