Planet Mozilla

Chris H-C: Distributed Teams: Why I Don’t Go to the Office More Often

Mon, 18/11/2019 - 16:00

I was invited to a team dinner as part of a work week the Data Platform team was having in Toronto. I love working with these folks, and I like food, so I set about planning my logistics.

The plan was solid, but unimpressive. It takes three hours or so to get from my home to the Toronto office by transit, so I’d be relying on the train’s WiFi to allow me to work on the way to Toronto, and I’d be arriving home about 20min before midnight.

Here’s how it went:

  1. 0800 Begin
  2. 0816 Take the GRT city bus to Kitchener train station
  3. 0845 Try to find a way to get to the station (the pedestrian situation around the station is awful)
  4. 0855 Learn that my 0918 train is running 40min late.
  5. 0856 Purchase a PRESTO card for my return journey, being careful to not touch any of the blood stains on the vending machine. (Seriously. Someone had a Bad Time at Kitchener Station recently)
  6. 0857 Learn that they had removed WiFi from the train station, so the work I’ll be able to do is limited to what I can manage on my phone’s LTE
  7. 0900 Begin my work day (Slack and IRC only), and eat the breakfast I packed because I didn’t have time at home.
  8. 0943 Train arrives only 35min late. Goodie.
  9. 0945 Learn from the family occupying my seat that I actually bought a ticket for the wrong day. Applying a discount code didn’t keep the date and time I selected, and I didn’t notice until it was too late. Sit in a different seat and wonder what the fare inspector will think.
  10. 0950 Start working from my laptop. Fear of authority can build on its own time, I have emails to answer and bugs to shuffle.
  11. 1030 Fare inspector finally gets around to me as my nervousness peaks. Says they’ll call it in and there might be an adjustment charge to reschedule it.
  12. 1115 Well into Toronto, the fare inspector just drops my ticket into my lap on their way to somewhere else. I… guess everything’s fine?
  13. 1127 Train arrives at Toronto Union Station. Disconnect WiFi, disembark and start walking to the office. (Public transit would be slower, and I’m saving my TTC token for tonight’s trip)
  14. 1145 Arrive at MoTo just in time for lunch.

Total time to get to Mozilla Toronto: 3h45min. Total distance traveled: 95km. Total cost: $26 for the Via Rail ticket, $2.86 for the GRT city bus.

The way back wasn’t very much better. I had to duck out of dinner at 8pm to have a hope of getting home before the day turned into tomorrow:

  1. 2000 Leave the team dinner, say goodnights. Start walking to the subway
  2. 2012 At the TTC subway stop learn that the turnstiles don’t take tokens any more. Luckily there’s someone in the booth to take my fare.
  3. 2018 Arrive at Union station and get lost in the construction. I thought the construction was done (the construction is never done).
  4. 2025 Ask at the PRESTO counter how to use PRESTO properly. I knew it was pay-by-distance but I was taking a train _and_ a bus, so I wasn’t sure if I needed to tap in between the two modes (I do. Tap before the train, after the train, on the bus when you get on, and on the bus when you get off. Seems fragile, but whatever).
  5. 2047 Learn that the train’s been rescheduled 6min later. Looks like I can still make my bus connection in Bramalea.
  6. 2053 Tap on the thingy, walk up the flights of stairs to the train, find a seat.
  7. 2102 “Due to platform restrictions, the doors on car 3107 will not open at Bramalea”… what car am I on? There’s no way to tell from where I’m sitting.
  8. 2127 Arrive at Bramalea. I’m not on car 3107.
  9. 2130 Learn that there’s one correct way to leave the platform and I took the other one that leads to the parking lot. Retrace my steps.
  10. 2132 Tap the PRESTO on the thingy outside the station building (closed)
  11. 2135 Tap the PRESTO on the thingy inside the bus. BEEP BEEP. Bus driver says insufficient funds. That can’t be, I left myself plenty of room. Tick tock.
  12. 2136 Cold air aching in my lungs from running, I load another $20 onto the PRESTO
  13. 2137 Completely out of breath, tap the PRESTO on the thingy inside the bus. Ding. Collapse in a seat. Bus pulls out just moments later.
  14. 2242 Arrive in Kitchener. Luckily the LRT, running at 30min headways, is 2min away. First good connection of the day.
  15. 2255 This is the closest the train can get me. There’s a 15min wait (5 of which I’ll have to walk in the cold to get to the stop) for a bus that’ll get me, in 7min, within a 10min walk from home. I decide to walk instead, as it’ll be faster.
  16. 2330 Arrive home.

Total time to get home: 3h30min. Total distance traveled: 103km. Total cost: $3.10 for the subway token, $46 PRESTO ($6 for the card, $20 for the fare, $20 for the surprise fare), $2.86 for the LRT.

At this point I’ve been awake for over 20 hours.

Is it worth it? Hard to say. Every time I plan one of these trips I look forward to it. Conversations with office folks, eating office lunch, absconding with office snacks… and this time I even got to go out to dinner with a bunch of data people I work with all the time!

But every time I do this, as I’m doing it, or as I’m recently back from doing it… I don’t feel great about it. It’s essentially a full work day (nearly eight full hours!) just in travel to spend 5 hours in the office, and (this time) a couple hours afterwards in a restaurant.

Ultimately this — the share of my brain I need to devote purely to logistics, the manifold ways things can go wrong, the sheer _time_ it all takes — is why I don’t go into the office more often.

And the people are the reason I do it at all.


Categories: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla Mornings on the future of openness and data access in the EU

Mon, 18/11/2019 - 10:51

On 10 December, Mozilla will host the next installment of our Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on openness and data access in the European Union. We’re bringing together an expert panel to discuss how the European Commission should approach a potential framework on data access, sharing and re-use.


Agustín Reyna
Head of Legal and Economic Affairs
BEUC, the European Consumer Organisation

Benjamin Ledwon
Head of Brussels Office

Maud Sacquet
Public Policy Manager
Mozilla Corporation

Moderated by Jennifer Baker
EU tech journalist

Logistical information

10 December, 2019
08:30 – 10:30
The Office cafe, Rue d’Arlon 80, Brussels 1040

Register your attendance here



Categories: Mozilla-nl planet

QMO: Firefox 71 Beta 12 Testday – November 22nd

Mon, 18/11/2019 - 10:07

Hello Mozillians,

We are happy to let you know that on Friday, November 22nd, we are organizing the Firefox 71 Beta 12 Testday. We’ll be focusing our testing on: Inactive CSS.

Check out the detailed instructions via this gdoc.

*Note that these events are no longer held on Etherpad, since that service was disabled.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!

Categories: Mozilla-nl planet

Botond Ballo: Trip Report: C++ Standards Meeting in Belfast, November 2019

Fri, 15/11/2019 - 16:00
Summary / TL;DR

Project | What’s in it? | Status
C++20 | See below | On track
Library Fundamentals TS v3 | See below | Under development
Concepts | Constrained templates | In C++20
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Published!
Executors | Abstraction for where/how code runs in a concurrent context | Targeting C++23
Concurrency TS v2 | See below | Under active development
Networking TS | Sockets library based on Boost.ASIO | Published! Not in C++20.
Ranges | Range-based algorithms and views | In C++20
Coroutines | Resumable functions (generators, tasks, etc.) | In C++20
Modules | A component system to supersede the textual header file inclusion model | In C++20
Numerics TS | Various numerical facilities | Under active development
C++ Ecosystem TR | Guidance for build systems and other tools for dealing with Modules | Under active development
Contracts | Preconditions, postconditions, and assertions | Under active development
Pattern matching | A match-like facility for C++ | Under active development
Reflection TS | Static code reflection mechanisms | Publication imminent
Reflection v2 | A value-based constexpr formulation of the Reflection TS facilities | Under active development
Metaclasses | Next-generation reflection facilities | Early development

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of November 25, 2019). If you encounter such a link, please check back in a few days.


Last week I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Belfast, Northern Ireland. This was the third and last committee meeting in 2019; you can find my reports on preceding meetings here (July 2019, Cologne) and here (February 2019, Kona), and previous ones linked from those. These reports, particularly the Cologne one, provide useful context for this post.

At the last meeting, the committee approved and published the C++20 Committee Draft (CD), a feature-complete draft of the C++20 standard which includes wording for all of the new features we plan to ship in C++20. The CD was then sent out to national standards bodies for a formal ISO ballot, where they have the opportunity to file technical comments on it, called “NB (national body) comments”.

We have 10-15 national standards bodies actively participating in C++ standardization, and together they have filed several hundred comments on the CD. This meeting in Belfast was the first of two ballot resolution meetings, where the committee processes the NB comments and approves any changes to the C++20 working draft needed to address them. At the end of the next meeting, a revised draft will be published as a Draft International Standard (DIS), which will likely be the final draft of C++20.

NB comments typically ask for bug and consistency fixes related to new features added to C++20. Some of them ask for fixes to longer-standing bugs and consistency issues, and some for editorial changes such as fixes to illustrative examples. Importantly, they cannot ask for new features to be added (or at least, such comments are summarily rejected, though the boundary between bug fix and feature can sometimes be blurry).

Occasionally, NB comments ask for a newly added feature to be pulled from the working draft due to it not being ready. In this case, there were comments requesting that Modules and Coroutines (among other things) be postponed to C++23 so they can be better-baked. I’m pleased to report that no major features were pulled from C++20 at this meeting. In cases where there were specific technical issues with a feature, we worked hard to address them. In cases of general “this is not baked yet” comments, we did discuss each one (at length in some cases), but ultimately decided that waiting another 3 years was unlikely to be a net win for the community.

Altogether, over half of the NB comments have been addressed at this meeting, putting us on track to finish addressing all of them by the end of the next meeting, as per our standardization schedule.

While C++20 NB comments were prioritized above all else, some subgroups did have time to process C++23 proposals as well. No proposals were merged into the C++23 working draft at this time (in fact, a “C++23 working draft” doesn’t exist yet; it will be forked from C++20 after the C++20 DIS is published at the end of the next meeting).

Procedural Updates

A few updates to the committee’s structure and how it operates:

  • As the Networking TS prepares to be merged into C++23, it has been attracting more attention, and the committee has been receiving more networking-related proposals (notable among them, one requesting that networking facilities be secure by default), so the Networking Study Group (SG4) has been re-activated so that a dedicated group can give these proposals the attention they deserve.
  • An ABI Review Group (ARG) was formed, comprised of implementors with ABI-related expertise on various different platforms, to advise the committee about the ABI impacts of proposed changes. The role of this group is not to set policy (such as to what extent we are willing to break ABI compatibility), but rather to make objective assessments of ABI impact on various platforms, which other groups can then factor into their decision-making.
  • Not something new, just a reminder: the committee now tracks its proposals in GitHub. If you’re interested in the status of a proposal, you can find its issue on GitHub by searching for its title or paper number, and see its status — such as which subgroups it has been reviewed by and what the outcome of the reviews were — there.
  • At this meeting, GitHub was also used to track NB comments, one issue per comment, and you can also see their status and resolution (if any) there.
Notes on this blog post

This blog post will be a bit different from previous ones. I was asked to chair the Evolution Working Group Incubator (EWG-I) at this meeting, which meant that (1) I was not in the Evolution Working Group (EWG) for most of the week, and thus cannot report on EWG proceedings in as much detail as before; and (2) the meeting and the surrounding time has been busier for me than usual, leaving less time for blog post writing.

As a result, in this blog post, I’ll mostly stick to summarizing what happened in EWG-I, and then briefly mention a few highlights from other groups. For a more comprehensive list of what features are in C++20, what NB comment resolutions resulted in notable changes to C++20 at this meeting, and which papers each subgroup looked at, I will refer you to the excellent collaborative Reddit trip report that fellow committee members have prepared.

Evolution Working Group Incubator (EWG-I)

EWG-I (pronounced “oogie” by popular consensus) is a relatively new subgroup, formed about a year ago, whose purpose is to give feedback on and polish proposals that include core language changes — particularly ones that are not in the purview of any of the domain-specific subgroups, such as SG2 (Modules), SG7 (Reflection), etc. — before they proceed to EWG for design review.

EWG-I met for two and a half days at this meeting, and reviewed 17 proposals. All of this was post-C++20 material.

I’ll go through the proposals that were reviewed, categorized by the review’s outcome.

Forwarded to EWG

The following proposals were considered ready to progress to EWG in their current state:

  • Narrowing contextual conversions to bool. This proposal relaxes a recently added restriction which requires an explicit conversion from integer types to bool in certain contexts. The motivation for the restriction was noexcept(), to remedy the fact that it was very easy to accidentally declare a function as noexcept(f()) (which means “the function is noexcept if f() returns a nonzero value”) instead of noexcept(noexcept(f())) (which means “the function is noexcept if f() doesn’t throw”), and this part of the restriction was kept. However, the proposal argued there was no need for the restriction to also cover static_assert and if constexpr.
  • Structured bindings can introduce a pack. This allows a structured binding declaration to introduce a pack, e.g. auto [...x] = f();, where f() is a function that returns a tuple or other decomposable object, and x is a newly introduced pack of bindings, one for each component of the object; the pack can then be expanded as x... just like a function parameter pack.
  • Reserving attribute names for future use. This reserves attribute names in the global attribute namespace, as well as the attribute namespace std (or std followed by a number) for future standardization.
  • Accessing object representations. This fixes a defect introduced in C++20 that makes it undefined behaviour to access the bytes making up an object (its “object representation”) by reinterpret_casting its address to char*.
  • move = relocates. This introduces “relocating move constructors”, which are move constructors declared using = relocates in places of = default. This generates the same implementation as for a defaulted move constructor, but the programmer additionally guarantees to the compiler that it can safely optimize a move + destructing the old object, into a memcpy of the bytes into the new location, followed by a memcpy of the bytes of a default-constructed instance of the type into the old location. This essentially allows compilers to optimize moves of many types (such as std::shared_ptr), as well as of arrays / vectors of such types, into memcpys. Currently, only types which have an explicit = relocates move constructor declaration are considered relocatable in this way, but the proposal is compatible with future directions where the relocatability of a type is inferred from that of its members (such as in this related proposal).
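To make the noexcept() pitfall behind the first bullet concrete, here is a minimal sketch in standard C++ (the function names are mine, not from the proposal):

```cpp
// A helper that happens to return zero and is declared noexcept.
constexpr int f() noexcept { return 0; }

// The accidental meaning: "g is noexcept if f() returns a nonzero value".
// Since f() == 0, g is NOT noexcept, which is almost never what was meant.
void g() noexcept(f());

// The intended meaning: "h is noexcept if calling f() cannot throw".
// The inner noexcept is the operator, not the specifier; f is declared
// noexcept, so h IS noexcept.
void h() noexcept(noexcept(f()));

static_assert(!noexcept(g()), "g was accidentally made potentially-throwing");
static_assert(noexcept(h()), "h is noexcept, as intended");
```

The two declarations look nearly identical but diverge in meaning, which is why the restriction on contextual conversions was kept for noexcept() even as the proposal relaxes it for static_assert and if constexpr.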
Forwarded to EWG with modifications

For the following proposals, EWG-I suggested specific revisions, or adding discussion of certain topics, but felt that an additional round of EWG-I review would not be helpful, and the revised paper should go directly to EWG:

  • fiber_context – fibers without scheduler. This is the current formulation of “stackful coroutines”, or rather a primitive on top of which stackful coroutines and other related things like fibers can be built. It was seen by EWG-I so that we can brainstorm possible interactions with other language features. TLS came up, as discussed in more detail in the EWG section.
  • Making operator?: overloadable. See the paper for motivations, which include SIMD blend operations and expression template libraries. The biggest sticking point here is that we don’t yet have a language mechanism for making the operands lazily evaluated, the way the built-in operator behaves. However, not all use cases want lazy evaluation; moreover, the logical operators (&& and ||) already have this problem. EWG-I considered several mitigations for this, but ultimately decided to prefer an unrestricted ability to overload this operator, relying on library authors to choose wisely whether or not to overload it.
  • Make declaration order layout mandated. This is largely standardizing existing practice: compilers technically have the freedom to reorder class fields with differing access specifiers, but none are known to do so, and this is blocking future evolution paths for greater control over how fields are laid out in memory. The “modification” requested here is simply to catalogue the implementations that have been surveyed for this.
  • Portable optimisation hints. This standardizes __builtin_assume() and similar facilities for giving the compiler a hint it can use for optimization purposes. EWG-I expressed a preference for an attribute-based ([[assume(expr)]]) syntax.
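To see why lazy evaluation is the sticking point for overloading operator?:, here is what already happens with an overloaded operator&& in standard C++ (the types and names are mine):

```cpp
int calls = 0;  // counts how often the side effect runs

bool side_effect() { ++calls; return true; }

struct Flag { bool v; };

// A user-provided operator&&: by the time it runs, BOTH operands
// have already been evaluated -- it cannot short-circuit.
bool operator&&(Flag a, Flag b) { return a.v && b.v; }

bool builtin_case() {
    calls = 0;
    return false && side_effect();  // built-in &&: short-circuits, side_effect() never runs
}

bool overloaded_case() {
    calls = 0;
    return Flag{false} && Flag{side_effect()};  // overloaded &&: both operands evaluated
}
```

After builtin_case(), calls is 0; after overloaded_case(), calls is 1. An overloaded operator?: would face exactly the same loss of laziness, which is the trade-off EWG-I chose to accept.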

Note that almost all of the proposals that were forwarded to EWG have been seen by EWG-I at a previous meeting, sometimes on multiple occasions, and revised since then. It’s rare for EWG-I to forward an initial draft (“R0”) of a proposal; after all, its job is to polish proposals and save time in EWG as a result.

Forwarded to another subgroup

The following proposals were forwarded to a domain-specific subgroup:

  • PFA: a generic, extendable and efficient solution for polymorphic programming. This proposed a mechanism for generalized type erasure, so that types like std::function (which is a type-erased wrapper for callable objects) can easily be built for any interface. EWG-I forwarded this to the Reflection Study Group (SG7) because the primary core language facility involves synthesizing a new type (the proxy / wrapper) based on an existing one (the interface). EWG-I also recommended expressing the interface as a regular type, rather than introducing a new facade entity to the language.
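The kind of boilerplate PFA aims to eliminate can be sketched with today’s hand-rolled type erasure (all names here are mine, not from the proposal):

```cpp
#include <memory>
#include <string>
#include <utility>

// A hand-written type-erased wrapper for "anything with a draw() method".
// PFA would let the compiler synthesize the Concept/Model scaffolding.
struct Drawable {
    template <typename T>
    Drawable(T obj) : self_(std::make_shared<Model<T>>(std::move(obj))) {}

    std::string draw() const { return self_->draw(); }

private:
    struct Concept {  // the erased interface
        virtual ~Concept() = default;
        virtual std::string draw() const = 0;
    };
    template <typename T>
    struct Model final : Concept {  // adapts a concrete T to the interface
        explicit Model(T obj) : obj_(std::move(obj)) {}
        std::string draw() const override { return obj_.draw(); }
        T obj_;
    };
    std::shared_ptr<const Concept> self_;
};

// Any type with a matching draw() works, with no inheritance required.
struct Circle { std::string draw() const { return "circle"; } };
```

Every new interface requires re-writing the Concept/Model pair by hand, which is why a language or reflection-based facility to generate it is attractive.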
Feedback given

For the following proposals, EWG-I gave the author feedback, but did not consider it ready to forward to another subgroup. A revised proposal would come back to EWG-I.

No proposals were outright rejected at this meeting, but the nature of the feedback did vary widely, from requesting minor tweaks, to suggesting a completely different approach to solving the problem.

  • Provided operator= returns lvalue-ref on an rvalue. This attempts to rectify a long-standing inconsistency in the language, where operator= for a class type can be called on temporaries, which is not allowed for built-in types; this can lead to accidental dangling. EWG-I agreed that it would be nice to resolve this, but asked the author to assess how much code this would break, so we can reason about its feasibility.
  • Dependent static assertion. The problem this tries to solve is that static_assert(false) in a dependent context fails eagerly, rather than being delayed until instantiation. The proposal introduces a new syntax, static_assert<T>(false), where T is some template parameter that’s in scope, for the delayed behaviour. EWG-I liked the goal, but not the syntax. Other approaches were discussed as well (such as making static_assert(false) itself have the delayed behaviour, or introducing a static_fail() operator), but did not have consensus.
  • Generalized pack declaration and usage. This is an ambitious proposal to make working with packs and pack-like types much easier in the language; it would allow drastically simplifying the implementations of types like tuple and variant, as well as making many compile-time programming tasks much easier. A lot of the feedback concerned whether packs should become first-class language entities (a “language tuple” of sorts, as previously proposed), or remain closer to their current role as dependent constructs that only become language entities after expansion.
  • Just-in-time compilation. Another ambitious proposal, this takes aim at use cases where static polymorphism (i.e. use of templates) is desired for performance, but the parameters (e.g. the dimensions of a matrix) are not known at compile time. Rather than being a general-purpose JIT or eval() like mechanism, the proposal aims to focus on the ability to instantiate some templates at runtime. EWG-I gave feedback related to syntax, error handling, restricting runtime parameters to non-types, and consulting the Reflection Study Group.
  • Interconvertible object representations. This proposes a facility to assert, at compile time, that one type (e.g. a struct containing two floats) has the same representation in memory as another (e.g. an array of two floats). EWG-I felt it would be more useful if the proposed annotation would actually cause the compiler to use the target layout.
  • Language support for class layout control. This aims to allow the order in which class members are laid out in memory to be customized. It was reviewed by SG7 (Reflection) as well, which expressed a preference for performing the customization in library code using reflection facilities, rather than having a set of built-in layout strategies defined by the core language. EWG-I preferred a keyword-based annotation syntax over attributes, though metaclasses might obviate the need for a dedicated syntax.
  • Epochs: a backward-compatible language evolution mechanism. This was probably the most ambitious proposal EWG-I looked at, and definitely the one that attracted the largest audience. It proposes a mechanism similar to Rust’s editions for evolving the language in ways we have not been able to so far. Central to the proposal is the ability to combine different modules which use different epochs in the same program. This generated a lot of discussion around the potential for fracturing the language into dialects, the granularity at which code opts into an epoch, and what sorts of new features should be allowed in older epochs, among other topics.
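For the dependent static_assert item above, the workaround libraries use today, which the proposal wants to make unnecessary, looks something like this (names are mine):

```cpp
// The classic workaround: static_assert(false, ...) in a template can fire
// even if the template is never instantiated, so libraries wrap "false" in
// a constant that depends on a template parameter.
template <typename T>
constexpr bool always_false = false;

template <typename T>
int serialize_width(T) {
    if constexpr (sizeof(T) <= 8) {
        return sizeof(T);
    } else {
        // static_assert(false, ...) here could be diagnosed eagerly;
        // the dependent form fires only when this branch is instantiated.
        static_assert(always_false<T>, "type too large to serialize");
        return 0;
    }
}
```

With the proposal, some spelling like static_assert<T>(false) (or a changed meaning for plain static_assert(false)) would make the always_false helper unnecessary.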
Thoughts on the role of EWG-I

Having spent time in both EWG and EWG-I, one difference that’s apparent is that EWG is the tougher crowd: features that make it successfully through EWG-I are often still shot down in EWG, sometimes on their first presentation. If EWG-I’s role is to act as a filter for EWG, it is effective in that role already, but there is probably potential for it to be more effective.

One dynamic that you often see play out in the committee is the interplay between “user enthusiasm” and “implementer skepticism”: users are typically enthusiastic about new features, while implementers will often try to curb enthusiasm and be more realistic about a feature’s implementation costs, interactions with other features, and source and ABI compatibility considerations. I’d say that EWG-I tends to skew more towards “user enthusiasm” than EWG does, hence the more permissive outcomes. I’d love for more implementers to spend time in EWG-I, though I do of course realize they’re in short supply and are needed in other subgroups.

Evolution Working Group

As mentioned, I didn’t spend as much time in EWG as usual, but I’ll call out a few of the notable topics that were discussed while I was there.

C++20 NB comments

As with all subgroups, EWG prioritized C++20 NB comments first.

  • The C++20 feature that probably came closest to removal at this meeting was class types as non-type template parameters (NTTPs). Several NB comments pointed out issues with their current specification and asked for either the issues to be resolved, or the feature to be pulled. Thankfully, we were able to salvage the feature. The fix approach involves axing the feature’s relationship with operator==, and instead having template argument equivalence be based on a structural identity, essentially a recursive memberwise comparison. This allows a larger category of types to be NTTPs, including unions, pointers and references to subobjects, and, notably, floating-point types. For class types, only types with public fields are allowed at this time, but future directions for opting in types with private fields are possible.
  • Parenthesized initialization of aggregates also came close to being removed but was fixed instead.
  • A suggestion to patch a functionality gap in std::is_constant_evaluated() by introducing a new syntactic construct if consteval was discussed at length but rejected. The feature may come back in C++23, but there are enough open design questions that it’s too late for C++20.
  • To my mild (but pleasant) surprise, ABI isolation for member functions, a proposal which divorces a method’s linkage from whether it is physically defined inline or out of line, and which was previously discussed as something that’s too late for C++20 but which we could perhaps sneak in as a Defect Report after publication, was now approved for C++20 proper. (It did not get to a plenary vote yet, where it might be controversial.)
  • A minor consistency fix between constexpr and consteval was approved.
  • A few Concepts-related comments:
    • The ability to constrain non-templated functions was removed because their desired semantics were unclear. They could come back in C++23 with clarified semantics.
    • One remaining visual ambiguity in Concepts is that in a template parameter list, Foo Bar can be either a constrained type template parameter (if Foo names a concept) or a non-type template parameter (if Foo names a type). The compiler knows which by looking up Foo, but a reader can’t necessarily tell just by the syntax of the declaration. A comment proposed resolving this by changing the syntax to Foo auto Bar for the type parameter case (similar to the syntax for abbreviated function templates). There was no consensus for this change; a notable counter-argument is that the type parameter case is by far the more common one, and we don’t want to make the common case more verbose (and the non-type syntax can’t be changed because it’s pre-existing).
    • Another comment pointed out that Concept<X> can also mean two different things: a type constraint (which is satisfied by a type T if Concept<T, X> is true), or an expression which evaluates Concept applied to the single argument X. The comment suggested disambiguating by e.g. changing the first case to Concept<, X>, but there was no consensus for this either.
Post-C++20 material

Having gotten through all Evolutionary NB comments, EWG proceeded to review post-C++20 material. Most of this had previously gone through EWG-I (you might recognize a few that I mentioned above because they went through EWG-I this week).

  • (Approved) Reserving attribute names for future use. In addition to approving this for C++23, EWG also approved it as a C++20 Defect Report. (Thanks to Erich Keane for pointing that out!)
  • (Approved) Accessing object representations. EWG agreed with the proposal’s intent and left it to the Core Working Group to figure out the exact way to specify this intent.
  • (Further work) std::fiber_context – stackful context switching. This was discussed at some length, with at least one implementer expressing significant reservations due to the feature’s interaction with thread-local storage (TLS). Several issues related to TLS were raised, such as the fact that compilers can cache pointers to TLS across function calls, and if a function call executes a fiber switch that crosses threads (i.e. the fiber is resumed on a different OS thread), the cache becomes invalidated without the compiler having expected that; addressing this at the compiler level would be a performance regression even for code that doesn’t use fibers, because the compiler would need to assume that any out of line function call could potentially execute a fiber switch. A possible alternative that was suggested was to have a mechanism for a user-directed kernel context switch that would allow coordinating threads of execution (ToEs) in a co-operative way without needing a distinct kind of ToE (namely, fibers).
  • (Further work) Structured bindings can introduce a pack. EWG liked the direction, but some implementers expressed concerns about the implementation costs, pointing out that in some implementations, handling of packs is closely tied to templates, while this proposal would allow packs to exist outside of templates. The author and affected implementers will discuss the concerns offline.
  • (Further work) Automatically generate more operators. This proposal aims to build on the spaceship operator’s model of rewriting operators (e.g. rewriting a < b to a <=> b < 0), and allow other kinds of rewriting, such as rewriting a += b to a = a + b. EWG felt any such facility should be strictly opt-in (e.g. you could give your class an operator+=(...) = default to opt into this rewriting, but it wouldn’t happen by default), with the exception of rewriting a->b to (*a).b (and the less common a->*b to (*a).*b) which EWG felt could safely happen by default.
  • (Further work) Named character escapes. This would add a syntax for writing unicode characters in source code by using their descriptive names. Most of the discussion concerned the impacts of implementations having to ship a unicode database containing such descriptive names. EWG liked the direction but called for further exploration to minimize such impacts.
  • (Further work) tag_invoke. This concerns making it easier to write robust customization points for library facilities. There was a suggestion of trying to model the desired operations more directly in the language, and EWG suggested exploring that further.
  • (Rejected) Homogeneous variadic function parameters. This would have allowed things like template <typename T> void foo(T...); to mean “foo is a function template that takes zero or more parameters, all of the same type T“. There were two main arguments against this. First, it would introduce a novel interpretation of template-ids (foo<int> no longer names a single specialization of foo, it names a family of specializations, and there’s no way to write a template-id that names any individual specialization). The objection that seems to have played the larger role in the proposal’s rejection, however, is that it breaks the existing meaning of e.g. (int...) as an alternative way of writing (int, ...) (meaning, an int parameter followed by C-style variadic parameters). While the (int...) form is not allowed in C (and therefore, not used by any C libraries that a C++ project might include), apparently a lot of old C++ code uses it. The author went to some lengths to analyze a large dataset of open-source C++ code for occurrences of such use (of which there were vanishingly few), but EWG felt this wasn’t representative of the majority of C++ code out there, most of which is proprietary.
Other Highlights
  • Ville Voutilainen’s paper proposing a high-level direction for C++23 was reviewed favourably by the committee’s Direction Group.
  • While I wasn’t able to attend the Reflection Study Group (SG7)’s meeting (it happened on a day EWG-I was in session), I hear that interesting discussions took place. In particular, a proposal concerning side effects during constant evaluation prompted SG7 to consider whether we should revise the envisioned metaprogramming model and take it even further in the direction of “compile-time programming is like regular programming”, such that if you e.g. wanted compile-time output, then rather than using a facility like the proposed constexpr_report, you could just use printf (or whatever you’d use for runtime output). The Circle programming language was pointed to as prior art in this area. SG7 did not make a decision about this paradigm shift, just encouraged exploration of it.
  • The Concurrency Study Group (SG1) came to a consensus on a design for executors. (Really, this time.)
  • The Networking Study Group (SG4) pondered whether C++ networking facilities should be secure by default. The group felt that we should standardize facilities that make use of TLS if and when they are ready, but not block networking proposals on it.
  • The deterministic exceptions proposal was not discussed at this meeting, but one of the reactions that its previous discussions have provoked is a resurgence in interest in better optimizing today’s exceptions. There was an evening session on this topic, and benchmarking and optimization efforts were discussed.
  • web_view was discussed in SG13 (I/O), and I relayed some of Mozilla’s more recent feedback; the proposal continues to enjoy support in this subgroup. The Library Evolution Incubator did not get a chance to look at it this week.
Next Meeting

The next meeting of the Committee will be in Prague, Czech Republic, the week of February 10th, 2020.


This was an eventful and productive meeting, as usual, with the primary accomplishment being improving the quality of C++20 by addressing national body comments. While C++20 is feature-complete and thus no new features were added at this meeting, we did solidify the status of recently added features such as Modules and class types as non-type template parameters, greatly increasing the chances of these features remaining in the draft and shipping as part of C++20.

There is a lot I didn’t cover in this post; if you’re curious about something I didn’t mention, please feel free to ask in a comment.

Other Trip Reports

In addition to the collaborative Reddit report which I linked to earlier, here are some other trip reports of the Belfast meeting that you could check out:

Categorieën: Mozilla-nl planet

Karl Dubost: Best viewed with… Mozilla Dev Roadshow Asia 2019

vr, 15/11/2019 - 09:26

I was invited by Sandra Persing to participate in the Mozilla Developer Roadshow 2019 in Asia. The event is going through five cities: Tokyo, Seoul, Taipei, Singapore and Bangkok. I committed to Tokyo and Seoul. The other speakers are still on the road; as I'm writing this, they are speaking in Taipei, while I'm back home.

Let's go through the talk and then some random notes about the audience, people and cities.

The Webcompat Talk

The talk was half about webcompat and half about the tools that help developers using Firefox avoid Web compatibility issues. The part about the Firefox devtools was introduced by Daisuke. My notes here are a bit longer than what I actually said; I have the leisure of more space.

Let's talk about webcompat

Intro slide

Market dominance by one browser is not new; it has happened a couple of times already. An article by Wired from September 2008 (time flies!) has a nice graph of the browser market share space. The first dominant browser was Netscape in 1996, the child of Mosaic. It already influenced the other players. I remember the introduction of images and tables. For example, I remember a version of Netscape where we absolutely had to close the table element; if we didn't, the table was not displayed at all. This had consequences for the web developer job (at that time, we were webmasters). In this case the consequence was arguably good, because it imposed cleaner, leaner code.

browser market shares

Then Internet Explorer entered the market and took it all. Because the browser was distributed with Windows, it became de facto the default engine in all work environments. Internet Explorer reached its dominance peak around 2003, then started to decline through the efforts of Firefox. Firefox never reached a comparable peak (and that's good!); its maximum market share in the world was probably around 20%-30% in 2009. Since then, there has been a steady decline of Firefox market share. The issue is not the loss of market share; the issue is dominance by one player, whoever that player is. I would not be comfortable with Firefox as a dominant browser either. A balance between all browsers is healthy.

note: World market shares are interesting, but they do not give the full picture. There can be extreme diversity between markets; that was especially the case 10 years ago. A browser could have 80% market share in one country and 0% in another. The issue is amplified by mobile operators. It happened in the Japanese market, which went from nothing to very high dominance of Safari on iOS, and then to a shared dominance between Chrome (Android) and Safari (iOS).

The promises of a website

Fantasy website

When a website is sold to a client, we sell the package: the look and feel, the design. In the case of web applications, performance, conversion rates and user engagement pledges are added to the basket. We very rarely talk about the engine. And yet people are increasingly conscious about the quality and origin of the food they buy, the sustainability of the materials used to build a house, the maintenance cost of their car.

There are a lot of omissions in what is promised to the client. This is accentuated by the complete absence of thinking about the resilience of the information contained on the website. Things change radically when we introduce the notions of archivability, time resilience, and robustness across device diversity and obsolescence. These are interesting questions that should be part of the process of designing a website.

years ago websites were made of files; now they are made of dependencies. — Simon Pitt

A simple question such as "What does the content of this page become when the site is no longer updated and the server is no longer maintained?" is rarely asked. A lot of valuable information disappears from the Web every day, just because we didn't ask the right questions.

But I'm drifting a bit from webcompat. Let's come back on track.

The reality of a website

And here the photo of an actual website.

mechanics workshop

This is dirty, messy, full of dependencies and forgotten bits. Sometimes different versions of the same library are used in the same code, with conflicting ways of doing things. The CSS is botched, the JS is in a dire state of complexity, the HTML and accessibility are a deeply nested soup of code where the meaning has been totally forgotten. With the rise of frameworks such as ReactJS and their components, we fare a lot worse in terms of semantics than we did a couple of years ago with table layouts.

These big piles of code have consequences. Maintainability suffers. Web compatibility issues increase. By Web compatibility I mean the ability of a website to work correctly on any device, in any context. Not that it should look the same everywhere, but that any user should be able to perform the tasks they expect to do on it.


  • Misconfigured user agent sniffing creating a ping-pong game (HTTP/JS redirection) between a mobile and a desktop site.
  • User agent detection in JavaScript code to deliver a specific feature, which fails when the user agent changes or the browser is fixed.
  • Detection of a feature to change site behavior. window.event was not standard and was not implemented in Firefox for a long time. Webcompat issues pushed Mozilla to implement it to solve those issues. In return, this created new webcompat issues, because some sites were using the check to distinguish Firefox from IE and then choose between keyCode and charCode, which had yet another series of unintended consequences.
  • WebKit prefixes for CSS and JS… Circa 2015, 20% of Japanese and Chinese mobile websites were breaking in one way or another in Firefox on Android. This made it impossible to have a reasonable impact on the Japanese market or to create an alliance with an operator to distribute Firefox more widely. So some of the WebKit prefixes became part of the Web platform, because a new browser can't exist if it does not have these aliases. Forget about asking web developers to do the right thing: some sites are not maintained anymore, but users are still using them.

The list goes on.
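The window.event/keyCode case is worth sketching, because the pattern was so common in old code. This is a simplified reconstruction, not code from any specific site:

```javascript
// Old-style key handler: window.event is used both as an event
// fallback and, implicitly, as an IE detector. The charCode/keyCode
// choice then depends on which browser the author believed they were
// in, so a browser implementing window.event for compatibility can
// flip which branch this code takes.
function getKey(event) {
  const e = event || window.event;      // IE-era fallback
  // Assumption baked into many sites: charCode exists => "not IE".
  return e.charCode ? e.charCode : e.keyCode;
}
```

When Firefox added window.event for compatibility, code like this could suddenly behave differently than before, which is exactly the cascade of unintended consequences described above.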

The ultimate victim?

one person in the subway

The user who decided to use a specific browser for personal reasons is the victim. They are caught between a website not doing the right thing and a tool (the browser) not working properly. If you are not a user of the dominant browser of the market, you are in for a bumpy ride on the Web. Your browser usage becomes more a conviction about doing the right thing than something that makes your life easier.

This should not be.

A form to save the Web

webcompat site screenshot

We, the Web compatibility team, created a form to help users report issues about websites working in one browser but not others. A report can be made for any browser, not only Mozilla Firefox (for which I work). The majority of our issues come from Firefox, for two reasons.

  1. Firefox not having market dominance, web developers do not test in Firefox. This is even more acute on mobile.
  2. The bug reporting is integrated into the Firefox UI (Developer, Nightly releases).

When we triage the issues (an amazing team of three people: Ciprian, Oana and Sergiu), we ping the right people working for other browser companies to analyze the issues. I would love more participation from the other browsers, but so far it is too irregular to be effective.

On Mozilla side, we have a good crew.

Diagnosis: a web plumbers team

person in a workshop

Every day, someone on the Mozilla webcompat team (Dennis, Ksenia, Thomas and myself) handles the diagnosis of incoming issues. We call this diagnosis. You remember the messy website picture? Well, we roll our sleeves up and dig right into the greasy bolts and screws. Minified code, broken code, unmaintained libraries, ill-defined CSS: we go through it all and try to make sense of it, sometimes without access to the original sources.

Our goal is to determine if it's really one of these:

  • not a webcompat issue. There is a difference, but it is created by different widget rendering, or by browser features specific to one vendor (font inflation, for example)
  • a bug in Gecko (core engine of Firefox) that needs to be fixed
  • a mistake of the website

Once the website has been diagnosed, we have options…

Charybdis and Scylla: Difficult or… dirty

chess player, people in phone booth, tools to package a box

The options once we know what the issue is are not perfect.

Difficult: We can try to do outreach, which means trying to find the owner of the website or the developer who created it. It is very difficult to discover who is in charge, and it depends a lot on the country and the type of business the site does. Contacting a bank site and getting it fixed is nearly impossible. Finding a Japanese developer of a big corporation's website is very hard (a lot of secrecy is going around). It's a costly process, and the results are not always great. If you own one of these broken websites, please contact us.

Probably we should create a search engine for broken websites, so people can find their own sites.

Dirty: The other option is to fix things on the browser side. The ideal scenario is when there really is a bug in Firefox and Mozilla needs to fix it. But sometimes the webcompat team pushes Gecko engineers to introduce a dirty fix into the code so that Firefox can be compatible with what a big website is doing. This can be a site intervention that modifies the code on the fly, or a more general issue that requires fixing both the browser and the technical specification.

The specification… ??? Yes. Back to browser market share and its influence on the world. If a dominant browser implements the specification wrongly, it doesn't matter what is right: web developers will plunge into using the feature as it behaves in the dominant browser. This solidifies the technical behavior of the feature, creates the burden of implementing a different behavior on smaller browsers, and eventually forces the specification to change to match what everyone was forced to implement.

The Web is amazing!

korean palace corridor and scissors

All of that said, the Web is an amazing place. It's a space which gave us the possibility to express ourselves, to discover each other, to work together, to create beautiful things on a daily basis. Just continue doing that, but be mindful that the web is for everyone.

Notes from a dilettantish speaker

I wanted to mention a couple of things, but I think I will do that in a separate blog post with other things that have happened this week.



The Firefox Frontier: Here’s why pop culture and passwords don’t mix

do, 14/11/2019 - 19:40

Were they on a break or not?! For nearly a decade, Ross and Rachel’s on-screen relationship was a point of contention for millions of viewers around the world. It’s no … Read more

The post Here’s why pop culture and passwords don’t mix appeared first on The Firefox Frontier.


Mozilla Security Blog: Adding CodeQL and clang to our Bug Bounty Program

do, 14/11/2019 - 19:03

At Github Universe, Github announced the GitHub Security Lab, an initiative to help secure open source software alongside the community and an initial set of partners including Mozilla. As part of this announcement, Github is providing free access to CodeQL, a security research tool which makes it easier to identify flaws in open source software. Mozilla has used these tools privately for the past two years, and has been very impressed and hopeful about how these tools will improve software security. Mozilla recognizes the need to scale security to work automatically, and to tighten the feedback loop in the development <-> security auditing/engineering process.

One of the ways we’re supporting this initiative at Mozilla is through renewed investment in automation and static analysis. We think the broader Mozilla community can participate, and we want to encourage it. Today, we’re announcing a new area of our bug bounty program to encourage the community to use the CodeQL tools.  We are exploring the use of CodeQL tools and will award a bounty – above and beyond our existing bounties – for static analysis work that identifies present or historical flaws in Firefox.

The highlights of the bounty are:

  • We will accept static analysis queries written in CodeQL or as clang-based checkers (clang analyzer, clang plugin using the AST API or clang-tidy).
  • Each previously unknown security vulnerability your query matches will be eligible for a bug bounty per the normal policy.
  • The query itself will also be eligible for a bounty, the amount dependent upon the quality of the submission.
  • Queries that match historical issues but do not find new vulnerabilities are eligible. This means you can look through our historical advisories to find examples of issues you can write queries for.
  • Mozilla’s and Github’s bug bounties are compatible, not exclusive, so if you meet the requirements of both, you are eligible to receive bounties from both. (More details below.)
  • The full details of this program are available at our bug bounty program’s homepage.

When fixing any security bug, a retrospective is an important part of the remediation process. It should provide answers to the following questions: Was this the only instance of this issue? Is this flaw representative of a wider systemic weakness that needs to be addressed? And most importantly: can we prevent an issue like this from ever occurring again? Variant analysis, driven manually, is usually the way to answer the first two questions. And static analysis, integrated into the development process, is one of the best ways to answer the third.

Besides our existing clang analyzer checks, we’ve made use of CodeQL over the past two years to do variant analysis. This tool allows identifying bugs both via targeted, zero-false-positive queries and via more expansive results, where manual analysis starts from a more complete and less noisy point than simple string matching. To see examples of where we’ve successfully used CodeQL, we have a meta tracking bug that illustrates the types of bugs we’ve identified.
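As an illustration of the shape such queries take (this is a generic sketch, not one of Mozilla's actual checks), a CodeQL query over C/C++ code might flag every call to a risky function as a candidate for review:

```ql
import cpp

// Flag calls to memcpy for manual review -- a deliberately simple
// starting point; real queries add data-flow conditions to move
// toward the zero-false-positive end of the spectrum.
from FunctionCall call
where call.getTarget().hasName("memcpy")
select call, "Call to memcpy; check the destination size."
```

From a sketch like this, adding constraints (on arguments, on data flow from untrusted sources, on missing bounds checks) narrows the results toward actual vulnerabilities.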

We hope that security researchers will try out CodeQL too, and share both their findings and their experience with us. And of course regardless of how you find a vulnerability, you’re always welcome to submit bugs using the regular bug bounty program. So if you have custom static analysis tools, fuzzers, or just the mainstay of grep and coffee – you’re always invited.

Getting Started with CodeQL

Github is publishing a guide covering how to use CodeQL at

Getting Started with Clang Analyzer

We currently have a number of custom-written checks in our source tree, so the easiest way to write and run your check is to build Firefox, add `ac_add_options --enable-clang-plugin` to your mozconfig, add your check, and then run `./mach build` again.
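For reference, the relevant mozconfig line looks like this (a minimal fragment; the rest of your mozconfig stays as-is):

```
# .mozconfig: build the in-tree clang plugin checks along with Firefox
ac_add_options --enable-clang-plugin
```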

To learn how to add your check, you can review this recent bug that added a couple of new checks – it shows how to add a new plugin and, additionally, how to add tests. This particular plugin also adds a couple of attributes that can be used in the codebase, which your plugin may or may not need. Note that depending on how you view the diffs, it may appear that the author modified existing files, but actually they copied an existing file, then modified the copy.

Future of CodeQL and clang within our Bug Bounty program

We retain the ability to be flexible. We’re planning to evaluate the effectiveness of the program when we reach $75,000 in rewards or after a year. After all, this is something new for us and for the bug bounty community. We—and Github—welcome your communication and feedback on the plan, especially candid feedback. If you’ve developed a query that you consider more valuable than what you think we’d reward – we would love to hear that. (If you’re keeping the query, hopefully you’re submitting the bugs to us so we can see that we are not meeting researcher expectations on reward.) And if you spent hours trying to write a query but couldn’t get over the learning curve – tell us and show us what problems you encountered!

We’re excited to see what the community can do with CodeQL and clang; and how we can work together to improve on our ability to deliver a browser that answers to no one but you.

The post Adding CodeQL and clang to our Bug Bounty Program appeared first on Mozilla Security Blog.


Mozilla Addons Blog: 2019 Add-ons Community Meetup in London

do, 14/11/2019 - 17:15

At the end of October, the Firefox add-ons team hosted a day-long meetup with a group of privacy extension developers as part of the Mozilla Festival in London, UK. With 2019 drawing to a close, this meetup provided an excellent opportunity to hear feedback from developers involved in the Recommended Extensions program and to get input about some of our plans for 2020.

Recommended Extensions

Earlier this summer we launched the Recommended Extensions program to provide Firefox users with a list of curated extensions that meet the highest standards of security, utility, and user experience. Participating developers agree to actively maintain their extensions and to have each new version undergo a code review. We invited a handful of Recommended developers to attend the meetup and gathered their feedback about the program so far. We also discussed more general issues around publishing content on addons.mozilla.org (AMO), such as ways of addressing user concerns over permission prompts.

Scott DeVaney, Senior Editorial & Campaign Manager for AMO, led a session on ways developers can improve a few key experiential components of their extensions. These tips may be helpful to the developer community at large:

  • AMO listing page. Use clear, descriptive language to convey exactly what your extension does and how it benefits users. Try to avoid overly technical jargon that average users might not understand. Also, screenshots are critical. Be sure to always include updated, relevant screenshots that really capture your extension’s experience.
  • Extension startup/post-install experience. First impressions are really important. Developers are encouraged to take great care in how they introduce new users to their extension experience. Is it clear how users are supposed to engage with the content? Or are they left to figure out a bunch of things on their own with little or no guidance? Conversely, is the guidance too cumbersome (i.e. way too much text for a user to comfortably process)?
  • User interface. If your extension involves customization options or otherwise requires active user engagement, be sure your settings management is intuitive and all UI controls are obvious.

  • Monetization. It is of course entirely fine for developers to solicit donations for their work or possibly even charge for a paid service. However, monetary solicitation should be tastefully appropriate. For instance, some extensions solicit donations just after installation, which makes little sense given the extension hasn’t proven any value to the user yet. We encourage developers to think through their user experience to find the most compelling moments to ask for donations or attempt to convert users to a paid tier.

WebExtensions API and Manifest v3

One of our goals for this meetup was to learn more about how Firefox extension developers will be affected by Chrome’s proposed changes to their extensions API (commonly referred to as Manifest v3).  As mentioned in our FAQ about Manifest v3, Mozilla plans to adopt some of these changes to maintain compatibility for developers and users, but will diverge from Chrome where it makes sense.

Much of the discussion centered around the impact of changes to the `blocking webRequest` API and replacing background scripts with service workers. Attendees outlined scenarios where changes in those areas will cause breakage to their extensions, and the group spent some time exploring possible alternative approaches for Firefox to take. Overall, attendees agreed that Chrome’s proposed changes to host permission requests could give users more say over when extensions can run. We also discussed ideas on how the WebExtensions API could be improved in light of the goals Manifest v3 is pursuing.

More information about changes to the WebExtensions API for Manifest v3 compatibility will be available in early 2020. Many thanks to everyone who has contributed to this conversation over the last few months on our forums, mailing list, and blogs!

Firefox for Android

We recently announced that Firefox Preview, Mozilla’s next generation browser for Android built on GeckoView, will support extensions through the WebExtensions API. Members of the Android engineering team will build select APIs needed to initially support a small set of Recommended Extensions.

The group discussed a wishlist of features for extensions on Android, including support for page actions and browser actions, history search, and the ability to manipulate context menus. These suggestions will be considered as work on Firefox Preview moves forward.

Thank you

Many thanks to the developers who joined us for the meetup. It was truly a pleasure to meet you in person and to hear first hand about your experiences.

The add-ons team would also like to thank Mandy Chan for making us feel at home in Mozilla’s London office and all of her wonderful support during the meetup.

The post 2019 Add-ons Community Meetup in London appeared first on Mozilla Add-ons Blog.


Hacks.Mozilla.Org: Thermostats, Locks and Extension Add-ons – WebThings Gateway 0.10

do, 14/11/2019 - 16:38

Happy Things Thursday! Today we are releasing WebThings Gateway 0.10. If you have a gateway using our Raspberry Pi builds then it should already have automatically updated itself.

This new release comes with support for thermostats and smart locks, as well as an updated add-ons system including extension add-ons, which enable developers to extend the gateway user interface. We’ve also added localisation settings so that you can choose your country, language, time zone and unit preferences. From today you’ll be able to use the gateway in American English or Italian, but we’re already receiving contributions of translations in different languages!

Thermostat and lock in Things UI


Version 0.10 comes with support for smart thermostats like the Zigbee Zen Thermostat, the Centralite HA 3156105 and the Z-Wave Honeywell TH8320ZW1000.

Thermostat UI

You can view the current temperature of your home remotely, set a heating or cooling target temperature, and set the current heating mode. You can also create rules which react to temperature or control your heating/cooling via the rules engine. In this way, you could set the heating to come on at a particular time of day or change the colour of lights based on how warm it is, for example.

Smart Locks

Ever wonder if you’ve forgotten to lock your front door? Now you can check when you get to work, and even lock or unlock the doors remotely. With the help of the rules engine, you can also set rules to lock doors at a particular time of day or notify you when they are unlocked.

Lock UI

So far we have support for Zigbee and Z-Wave smart locks like the Yale YRD226 Deadbolt and Yale YRD110 Deadbolt.

Extension Add-ons

Version 0.10 also comes with a revamped add-ons system which includes a new type of add-on called extensions. Like a browser extension, an extension add-on can be used to augment the gateway’s user interface.

For example, an extension can add its own entry in the gateway’s main menu and display its own dedicated screen with new functionality.

Together with a new mechanism for add-on developers to extend the gateway’s REST API, this opens up a whole new world of possibilities for add-on developers to customise the gateway.

Note that the updated add-ons system comes with a new manifest format inspired by Web Extensions. Michael Stegeman’s blog post explains in more depth how to use the new add-ons system. We’ll walk you through building your own extension add-on.

Localisation Settings

Many add-ons use location-specific data like weather, sunrise/sunset and tide times, but it’s no fun to have to configure your location for each add-on. It’s now possible to choose your country, time zone and language via the gateway’s web interface.

With time zone support, time-based rules should now correctly adjust for daylight savings time in your region. Since the gateway is configured to use Greenwich Mean Time by default, your rules may show times you didn’t expect at first. To fix this, you’ll need to set your time zone appropriately and adjust your rule times. You can also set your preference of unit used to display temperature, to either degrees Celsius or Fahrenheit.

And finally, many of you have asked for the user interface to support multiple languages. We are shipping with an Italian translation in this release thanks to our resident Italian speaker Kathy. We already have French, Dutch and Polish translations in the pipeline thanks to our wonderful community. Stand by for more information on how to contribute to translations in your language!

API Changes & Standardisation

For developers, in addition to the new add-ons system, it’s now possible to communicate with all the gateway’s web things via a single WebSocket connection. Previously it was necessary to open a WebSocket per device, so this is a significant enhancement.
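A sketch of what that single connection looks like on the client side. The "propertyStatus" message type follows the Web Thing API's WebSocket messages, but the exact field names (id, data) and callback shape here are assumptions for illustration:

```javascript
// Dispatch one WebSocket message that may concern any of the
// gateway's things; with the 0.10 single-connection API, one handler
// like this replaces a handler per device.
function handleGatewayMessage(rawMessage, onPropertyStatus) {
  const message = JSON.parse(rawMessage);
  if (message.messageType === "propertyStatus") {
    // e.g. id "thermostat", data { temperature: 21.5 }
    onPropertyStatus(message.id, message.data);
  }
}
```

In a browser, this function would be attached as the message listener of a single WebSocket opened to the gateway.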

We’ve recently started the Web Thing Protocol Community Group at the W3C with the intention of standardising this WebSocket sub-protocol in order to further improve interoperability on the Web of Things. We welcome developers to join this group to contribute to the standardisation process.

Coming Soon

Coming up next, expect Mycroft voice controls, translations into more languages and new ways to install and use WebThings Gateway.

As always, you can head over to the forums for support. And we welcome your contributions on GitHub.

The post Thermostats, Locks and Extension Add-ons – WebThings Gateway 0.10 appeared first on Mozilla Hacks - the Web developer blog.


Hacks.Mozilla.Org: Upcoming notification permission changes in Firefox 72

wo, 13/11/2019 - 16:30

Notifications. Can you keep count of how many websites or services prompt you daily for permission to send notifications? Can you remember the last time you were thrilled to get one?

Earlier this year we decided to reduce the amount of unsolicited notification permission prompts people receive as they move around the web using the Firefox browser. We see this as an intrinsic part of Mozilla’s commitment to putting people first when they are online.

In preparation, we ran a series of studies and experiments. We wanted to understand how to improve the user experience and reduce annoyance. In response, we’re now making some changes to the workflow for how sites ask users for permission to send them notifications. Firefox will require explicit user interaction on all notification permission prompts, starting in Firefox 72.

For the full background story, and details of our analysis and experimentation, please read Restricting Notification Permission Prompts in Firefox. Today, we want to be sure web developers are aware of the upcoming changes and share best practices for these two key scenarios:

  1. How to guide the user toward the prompt.
  2. How to acknowledge the user changing the permission.

an animation showing the browser UI where a user can click on the small permission icon that appears in the address bar.

We anticipate that some sites will be impacted by changes to the user flow. We suspect that many sites do not yet deal with the latter in their UX design. Let’s briefly walk through these two scenarios:

How to guide the user toward the prompt

Ideally, sites that want permission to notify or alert a user already guide them through this process. For example, they ask if the person would like to enable notifications for the site and offer a clickable button.

document.getElementById("notifications-button").addEventListener("click", () => {
  Notification.requestPermission().then(setupNotifications);
});

Starting with Firefox 72, the notification permission prompt is gated behind a user gesture. We will not deliver prompts on behalf of sites that do not follow the guidance above. Firefox will instantly reject the promise returned by Notification.requestPermission() and PushManager.subscribe(). However, the user will see a small notification permission icon in the address bar.
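The new gating can be modeled roughly as follows. This is a simplified sketch, not the real Notification API: `requestPermissionModel` and its return values are hypothetical stand-ins for illustration.

```javascript
// Simplified model of the Firefox 72 gating behavior described above.
// NOT the real API: the function name and return values are illustrative.
function requestPermissionModel(hasUserGesture) {
  if (!hasUserGesture) {
    // Without a user gesture, Firefox 72+ rejects immediately instead of
    // showing a prompt (a small icon still appears in the address bar).
    return Promise.reject(new Error("permission prompt requires user interaction"));
  }
  // With a gesture, the real API would show the prompt and resolve with
  // the user's choice.
  return Promise.resolve("prompted");
}

// A page should therefore only request permission from an event handler
// (e.g. a click), and be prepared for rejection otherwise.
requestPermissionModel(false).catch(err => {
  console.log("rejected:", err.message);
});
```

The practical takeaway is the same as the button example above: call `Notification.requestPermission()` only in response to explicit user interaction.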

Note that because PushManager.subscribe() requires a ServiceWorkerRegistration, Firefox will carry user-interaction flags through promises that return ServiceWorkerRegistration objects. This enables popular examples to continue to work when called from an event handler.

Firefox shows the notification permission icon after a successful prompt. The user can select this icon to change the notification permission at any time: for instance, to grant the site the permission after all, or to stop receiving notifications.

How to acknowledge the user changing the permission

When the user changes the notification permission through the notification permission icon, this is exposed via the Permissions API:

navigator.permissions.query({ name: "notifications" }).then(status => {
  status.onchange = () => potentiallyUpdateNotificationPermission(status.state);
  potentiallyUpdateNotificationPermission(status.state);
});

We believe this improves the user experience and makes it more consistent, allowing sites to keep their interface aligned with the notification permission. Please note that the code above works in earlier versions of Firefox as well. However, users were unlikely to change the notification permission dynamically in earlier Firefox releases, because there was no notification permission icon in the address bar.

Our studies show that users are more likely to engage with prompts that they’ve interacted with explicitly. We’ve seen that through pre-prompting in the user interface, websites can inform the user of the choice they are making before presenting a prompt. Otherwise, unsolicited prompts are denied in over 99% of cases.

We hope these changes will lead to a better user experience for all and better and healthier engagement with notifications.

The post Upcoming notification permission changes in Firefox 72 appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla plays role in Kenya’s adoption of crucial data protection law

di, 12/11/2019 - 21:45

The Kenyan Data Protection and Privacy Act 2019 was signed into law last week. This GDPR-like law is the first data protection law in Kenyan history, and marks a major step forward in the protection of Kenyans’ privacy. Mozilla applauds the Government of Kenya, the National Assembly, and all stakeholders who took part in the making of this historic law. It is indeed a huge milestone that sees Kenya become the latest addition to the list of countries with data protection laws in place, providing much-needed safeguards to its citizens in the digital era.

Strong data protection laws are critical in ensuring that user rights are protected; that companies and governments are compelled to appropriately handle the data that they are entrusted with. As part of its policy work in Africa, Mozilla has been at the forefront in advocating for the new law since 2018. The latest development is most welcome, as Mozilla continues to champion the 5 policy hot-spots that are key to Africa’s digital transformation.

Mozilla is pleased to see that the Data Protection Act is consistent with international data protection standards, placing users’ rights at the centre of the digital economy. We also applaud the creation of an empowered data protection commission with a high degree of independence from the government. The law also imposes strong obligations on data controllers and processors, requiring them to abide by principles of meaningful user consent, collection limitation, purpose limitation, data minimization, and data security.

It is commendable that the law has maintained integrity throughout the process, and many of the critical comments Mozilla submitted on the initial Data Protection Bill (2018) have been reflected in the final act. The suggestions included the requirement for robust protections of data subjects with the rights to rectification, erasure of inaccurate data, objection to processing of their data, as well as the right to access, and to be informed of the use of their data; with the aim of providing users with control over their personal data and online experiences.

Mozilla continues to be actively engaged in advocating for strong data privacy and protection across the African region, where fewer than 30 countries have a data protection law. Given that the region has seen the world’s fastest growth in internet use over the past decade, the continent is poised for great opportunities around accessing the internet. However, without the requisite laws in place, many users, often accessing the internet for the first time, will be put at risk.

The post Mozilla plays role in Kenya’s adoption of crucial data protection law appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Tracking Diaries with Tiffany LaTrice Williams

di, 12/11/2019 - 19:05

In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers … Read more

The post Tracking Diaries with Tiffany LaTrice Williams appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Extensions in Firefox 71

di, 12/11/2019 - 17:00

Firefox 71 is a light release in terms of extension changes. I’d like to tell you about a few interesting improvements nevertheless.

Thanks to Nils Maier, there have been various improvements to the downloads API, specifically in handling download failures. In addition to previously reported failures, the API will now report an error in case of various 4xx error codes. Similarly, HTTP 204 (No Content) and HTTP 205 (Reset Content) are now treated as bad content errors. This makes the API more compatible with Chrome and gives developers a way to handle these errors in their code. With the new allowHttpErrors parameter, extensions may also ignore some http errors when downloading. This will allow them to download the contents of server error pages.
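To make the new behavior concrete, here is a hedged sketch of the failure classification described above. The category names below are illustrative, not the exact WebExtensions error strings.

```javascript
// Illustrative sketch of the Firefox 71 downloads failure handling
// described above; the returned category names are ours, not the API's.
function classifyDownloadResponse(status) {
  if (status === 204 || status === 205) {
    // HTTP 204 (No Content) and 205 (Reset Content) are now treated as
    // bad-content errors.
    return "bad-content";
  }
  if (status >= 400 && status < 500) {
    // Various 4xx codes are now reported as download failures.
    return "http-error";
  }
  return "ok";
}

// With the new allowHttpErrors parameter, an extension can opt in to
// downloading the contents of error pages anyway, e.g. (in an extension):
//   browser.downloads.download({ url, allowHttpErrors: true });
```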

Please also note that the lowercase isarticle filter for the tabs.onUpdated listener has been removed in Firefox 71. Developers should instead use the camelCase isArticle filter.
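A minimal sketch of the rename: register the listener with the camelCase "isArticle" property filter. A tiny stub for `browser` is included here only so the snippet is self-contained outside an extension context.

```javascript
// In a real extension, `browser` is provided by the WebExtensions runtime;
// the stub below is an assumption made so the sketch runs standalone.
const browser = globalThis.browser ?? {
  tabs: {
    onUpdated: {
      listeners: [],
      addListener(fn, filter) { this.listeners.push({ fn, filter }); },
    },
  },
};

browser.tabs.onUpdated.addListener(
  (tabId, changeInfo) => {
    // changeInfo.isArticle reports whether the tab can be rendered in
    // Reader Mode.
    console.log(`tab ${tabId} isArticle: ${changeInfo.isArticle}`);
  },
  // Firefox 71+: "isarticle" (all lowercase) no longer works here.
  { properties: ["isArticle"] }
);
```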

A few more minor updates are available as well:

  • Popup windows now include the extension name instead of its moz-extension:// url when using the windows.create API.
  • Clearer messaging when encountering unexpected values in manifest.json (they are often warnings, not errors)
  • Extension-registered devtools panels now interact better with screen readers

Thank you contributors Nils and Myeongjun Go for the improvements, as well as our WebExtensions team for fixing various tests and backend related issues. If you’d like to help make this list longer for the next releases, please take a look at our wiki on how to contribute. I’m looking forward to seeing what Firefox 72 will bring!

The post Extensions in Firefox 71 appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: New Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing

di, 12/11/2019 - 09:40

New community effort will create a new cross-platform, cross-device computing runtime based on the unique advantages of WebAssembly 

MOUNTAIN VIEW, California, November 12, 2019 — The Bytecode Alliance is a newly-formed open source community dedicated to creating new software foundations, building on standards such as WebAssembly and WebAssembly System Interface (WASI). Mozilla, Fastly, Intel, and Red Hat are founding members.

The Bytecode Alliance will, through the joint efforts of its contributing members, deliver a state-of-the-art runtime environment and associated language toolchains, where security, efficiency, and modularity can all coexist across the widest possible range of devices and architectures. Technologies contributed and collaboratively evolved through the Alliance leverage established innovation in compilers, runtimes, and tooling, and focus on fine-grained sandboxing, capabilities-based security, modularity, and standards such as WebAssembly and WASI.

Founding members are making several open source project contributions to the Bytecode Alliance, including:

  • Wasmtime, a small and efficient runtime for WebAssembly & WASI
  • Lucet, an ahead-of-time compiler and runtime for WebAssembly & WASI focused on low-latency, high-concurrency applications
  • WebAssembly Micro Runtime (WAMR), an interpreter-based WebAssembly runtime for embedded devices
  • Cranelift, a cross-platform code generator with a focus on security and performance, written in Rust

Modern software applications and services are built from global repositories of shared components and frameworks, which greatly accelerates creation of new and better multi-device experiences but understandably increases concerns about trust, data integrity, and system vulnerability. The Bytecode Alliance is committed to establishing a capable, secure platform that allows application developers and service providers to confidently run untrusted code, on any infrastructure, for any operating system or device, leveraging decades of experience doing so inside web browsers.

Partner quotes:


“WebAssembly is changing the web, but we believe WebAssembly can play an even bigger role in the software ecosystem as it continues to expand beyond browsers. This is a unique moment in time at the dawn of a new technology, where we have the opportunity to fix what’s broken and build new, secure-by-default foundations for native development that are portable and scalable. But we need to take deliberate, cross-industry action to ensure this happens in the right way. Together with our partners in the Bytecode Alliance, Mozilla is building these new secure foundations—for everything from small, embedded devices to large, computing clouds,” said Luke Wagner, Distinguished Engineer at Mozilla and co-creator of WebAssembly.


“Fastly is very happy to help bring the Bytecode Alliance to the community,” said Tyler McMullen, CTO at Fastly. “Lucet and Cranelift have been developed together for years, and we’re excited to formalize their relationship and help them grow faster together. This is an important moment in computing history, marking our chance to redefine how software will be built across clients, origins, and the edge. The Bytecode Alliance is our way of contributing to and working with the community, to create the foundations that the future of the internet will be built on.”


“Intel is joining the Bytecode Alliance as a founding member to help extend WebAssembly’s performance and security benefits beyond the browser to a wide range of applications and servers. Bytecode Alliance technologies can help developers extend software using a wide selection of languages, building upon the full capabilities of leading-edge compute platforms,” said Mark Skarpness, VP, Intel Architecture, Graphics, and Software; Director, Data-Centric System Stacks.

Red Hat: 

“Red Hat believes deeply in the role open source technologies play in helping provide the foundation for computing, from the operating system to the browser to the open hybrid cloud,” said Chris Wright, senior vice president and Chief Technology Officer at Red Hat. “Wasmtime is an exciting development that helps move WebAssembly out of the browser into the server space where we are experimenting with it to change the trust model for applications, and we are happy to be involved in helping it grow into a mature, community-based project.”

Useful Links:

About Mozilla

Mozilla has been a pioneer and advocate for the web for more than 20 years. We are a global organization with a mission to promote innovation and opportunity on the Web. Today, hundreds of millions of people worldwide use the popular Firefox browser to discover, experience, and connect to the Web on computers, tablets and mobile phones. Together with our vibrant, global community of developers and contributors, we create and promote open standards that ensure the internet remains a global public resource, open and accessible to all.

The post New Bytecode Alliance Brings the Security, Ubiquity, and Interoperability of the Web to the World of Pervasive Computing appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Security improvements in AMO upload tools

ma, 11/11/2019 - 17:15

We are making some changes to the submission flow for all add-ons (both AMO- and self-hosted) to improve our ability to detect malicious activity.

These changes, which will go into effect later this month, will introduce a small delay in automatic approval for all submissions. The delay can be as short as a few minutes, but may take longer depending on the add-on file.

If you use a version of web-ext older than 3.2.1, or a custom script that connects to AMO’s upload API, this new delay in automatic approval will likely cause a timeout error. This does not mean your upload failed; the submission will still go through and be approved shortly after the timeout notification. Your experience using these tools should remain the same otherwise.

You can prevent the timeout error from being triggered by updating web-ext or your custom scripts before this change goes live. We recommend making these updates this week.

  • For web-ext: update to web-ext version 3.2.1, which has a longer default timeout for `web-ext sign`. To update your global install, use the command `npm install -g web-ext`.
  • For custom scripts that use the AMO upload API: make sure your upload scripts account for potentially longer delays before the signed file is available. We recommend allowing up to 15 minutes.
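For custom scripts, the advice above can be sketched as a simple polling loop with a generous deadline. This is a hedged sketch under stated assumptions: `checkSignedStatus` is a placeholder for your own call to the upload API, not a real function it provides.

```javascript
// Poll for the signed file, allowing up to 15 minutes as recommended above.
// `checkSignedStatus` is a hypothetical stand-in for your API status check.
async function waitForSignedFile(checkSignedStatus,
                                 { timeoutMs = 15 * 60 * 1000, intervalMs = 30 * 1000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const status = await checkSignedStatus();
    if (status.signed) return status; // signed file is ready
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error("timed out waiting for the signed file");
}
```

The key design point is to treat a slow response as "not ready yet" rather than a failure, since the submission still goes through after the delay.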

The post Security improvements in AMO upload tools appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

David Humphrey: Hacktoberfest 2019

ma, 11/11/2019 - 01:17

I've been marking student submissions in my open source course this weekend, and with only a half-dozen more to do, the procrastinator in me decided a blog post was in order.

Once again I've asked my students to participate in Hacktoberfest.  I wrote about the experience last year, and wanted to give an update on how it went this time.

I layer a few extra requirements on the students, some of them to deal with things I've learned in the past.  For one, I ask them to set some personal goals for the month, and look at each pull request as a chance to progress toward achieving these goals.  The students are quite different from one another, which I want to celebrate, and this lets them go in different directions, and move at different paces.

Here are some examples of the goals I heard this time around:

  • Finish all the required PRs
  • Increase confidence in myself as a developer
  • Master git/GitHub
  • Learn new languages and technologies (Rust, Python, React, etc)
  • Contribute to projects we use and enjoy on a daily basis (e.g., VSCode)
  • Contribute to some bigger projects (e.g., Mozilla)
  • Add more experience to our resume
  • Read other people's code, and get better at understanding new code
  • Work on projects used around the world
  • Work on projects used locally
  • Learn more about how big projects do testing

So how did it go?  First, the numbers:

  • 62 students completed all 4 PRs during the month (95% completion rate)
  • 246 Pull Requests were made, consisting of 647 commits to 881 files
  • 32K lines of code were added or modified

I'm always interested in the languages they choose.  I let them work on any open source projects, so given this freedom, how will they use it?  The most popular languages by pull request ere:

  • JavaScript/TypeScript - 50%
  • HTML/CSS - 11%
  • C/C++/C# - 11%
  • Python - 10%
  • Java - 5%

Web technology projects dominate GitHub, and it's interesting to see that this is not entirely out of sync with GitHub's own stats on language positions.  As always, the long-tail provides interesting info as well.  A lot of people worked on bugs in languages they didn't know previously, including:

Swift, PHP, Go, Rust, OCaml, PowerShell, Ruby, Elixir, Kotlin

Because I ask the students to "progress" with the complexity and involvement of their pull requests, I had fewer people working in "Hacktoberfest" style repos (projects that popup for October, and quickly vanish).  Instead, many students found their way into larger and well known repositories and organizations, including:

Polymer, Bitcoin, Angular, Ethereum, VSCode, Microsoft Calculator, React Native for Windows, Microsoft STL, Jest, WordPress, node.js, Nasa, Mozilla, Home Assistant, Google, Instacart

The top GitHub organization by pull request volume was Microsoft.  Students worked on many Microsoft projects, which is interesting, since they didn't coordinate their efforts.  It turns out that Microsoft has a lot of open source these days.

When we were done, I asked the students to reflect on the process a bit, and answer a few questions.  Here's what I heard.

1. What are you proud of?  What did you accomplish during October?

  • Contributing to big projects (e.g., Microsoft STL, Nasa, Rust)
  • Contributing to small projects, who really needed my help
  • Learning a new language (e.g., Python)
  • Having PRs merged into projects we respect
  • Translation work -- using my personal skills to help a project
  • Seeing our work get shipped in a product we use
  • Learning new tech (e.g., complex dev environments, creating browser extensions)
  • Successfully contributing to a huge code base
  • Getting involved in open source communities
  • Overcoming the intimidation of getting involved

2. What surprised you about Open Source?  How was it different than you expected?

  • People in the community were much nicer than I expected
  • I expected more documentation, it was lacking
  • The range of projects: big companies, but also individuals and small communities
  • People spent time commenting on, reviewing, and helping with our PRs
  • People responded faster than we anticipated
  • At the same time, we also found that some projects never bothered to respond
  • Surprised to learn that everything I use has some amount of open source in it
  • Surprised at how many cool projects there are, so many that I don’t know about
  • Even on small issues, lead contributors will get involved in helping (e.g., 7 reviews in a node.js fix)
  • Surprised at how unhelpful the “Hacktoberfest” label is in general
  • “Good First Issue” doesn’t mean it will be easy.  People have different standards for what this means
  • Lots of things on GitHub are inactive, be careful you don’t waste your time
  • Projects have very different standards from one to the next, in terms of process, how professional they are, etc.
  • Surprised to see some of the hacks even really big projects use
  • Surprised how willing people were to let us get involved in their projects
  • Lots of camaraderie between devs in the community

3. What advice would you give yourself for next time?

  • Start small, progress from there
  • Manage your time well, it takes way longer than you think
  • Learn how to use GitHub’s Advanced Search well
  • Make use of your peers, ask for help
  • Less time looking for a perfect issue, more time fixing a good-enough issue
  • Don’t rely on the Hacktoberfest label alone.
  • Don’t be afraid to fail.  Even if a PR doesn’t work, you’ll learn a lot in the process
  • Pick issues in projects you are interested in, since it takes so much time
  • Don’t be afraid to work on things you don’t (yet) know.  You can learn a lot more than you think.
  • Read the contributing docs, and save yourself time and mistakes
  • Run and test code locally before you push
  • Don’t be too picky with what you work on, just get involved
  • Look at previously closed PRs in a project for ideas on how to solve your own.

One thing that was new for me this time around was seeing students get involved in repos and projects that didn't use English as their primary language.  I've had lots of students do localization in projects before.  But this time, I saw quite a few students working in languages other than English in issues and pull requests.  This is something I've been expecting to see for a while, especially with GitHub's Trending page so often featuring projects not in English.  But it was the first time it happened organically with my own students.

Once again, I'm grateful to the Hacktoberfest organizers, and to the hundreds of maintainers we encountered as we made our way across GitHub during October.  When you've been doing open source a long time, and work in git/GitHub everyday, it can be hard to remember what it's like to begin.  Because I continually return to the place where people start, I know first-hand how valuable it is to be given the chance to get involved, for people to acknowledge and accept your work, and for people to see that it's possible to contribute.

Categorieën: Mozilla-nl planet

Ryan Harter: Technical Leadership Paths

vr, 08/11/2019 - 09:00

I found this article a few weeks ago and I really enjoyed the read. The author outlines what a role can look like for very senior ICs. It's the first in a (yet to be written) series about technical leadership and long term IC career paths. I'm excited to read …

Categorieën: Mozilla-nl planet

The Firefox Frontier: Nine tips for better tab management

do, 07/11/2019 - 17:59

Poll time! No judgment if you’re in the high end of the range. Keeping a pile of open tabs is the sign of an optimistic, enthusiastic, curious digital citizen, and … Read more

The post Nine tips for better tab management appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.39.0

do, 07/11/2019 - 01:00

The Rust team is happy to announce a new version of Rust, 1.39.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.39.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.39.0 on GitHub.

What's in 1.39.0 stable

The highlights of Rust 1.39.0 include async/.await, shared references to by-move bindings in match guards, and attributes on function parameters. Also, see the detailed release notes for additional information.

The .await is over, async fns are here

Previously in Rust 1.36.0, we announced that the Future trait is here. Back then, we noted that:

With this stabilization, we hope to give important crates, libraries, and the ecosystem time to prepare for async / .await, which we'll tell you more about in the future.

A promise made is a promise kept. So in Rust 1.39.0, we are pleased to announce that async / .await is stabilized! Concretely, this means that you can define async functions and blocks and .await them.

An async function, which you can introduce by writing async fn instead of fn, does nothing other than to return a Future when called. This Future is a suspended computation which you can drive to completion by .awaiting it. Besides async fn, async { ... } and async move { ... } blocks, which act like closures, can be used to define "async literals".

For more on the release of async / .await, read Niko Matsakis's blog post.

References to by-move bindings in match guards

When pattern matching in Rust, a variable, also known as a "binding", can be bound in the following ways:

  • by-reference, either immutably or mutably. This can be achieved explicitly e.g. through ref my_var or ref mut my_var respectively. Most of the time though, the binding mode will be inferred automatically.

  • by-value -- either by-copy, when the bound variable's type implements Copy, or otherwise by-move.

Previously, Rust would forbid taking shared references to by-move bindings in the if guards of match expressions. This meant that the following code would be rejected:

fn main() {
    let array: Box<[u8; 4]> = Box::new([1, 2, 3, 4]);

    match array {
        nums
        //  ---- `nums` is bound by move.
            if nums.iter().sum::<u8>() == 10
        //          ^------ `.iter()` implicitly takes a reference to `nums`.
        => {
            drop(nums);
            // ----------- `nums` was bound by move and so we have ownership.
        }
        _ => unreachable!(),
    }
}

With Rust 1.39.0, the snippet above is now accepted by the compiler. We hope that this will give a smoother and more consistent experience with match expressions overall.

Attributes on function parameters

With Rust 1.39.0, attributes are now allowed on parameters of functions, closures, and function pointers. Whereas before, you might have written:

#[cfg(windows)]
fn len(slice: &[u16]) -> usize {
    slice.len()
}

#[cfg(not(windows))]
fn len(slice: &[u8]) -> usize {
    slice.len()
}

you can now, more succinctly, write:

fn len(
    #[cfg(windows)] slice: &[u16], // This parameter is used on Windows.
    #[cfg(not(windows))] slice: &[u8], // Elsewhere, this one is used.
) -> usize {
    slice.len()
}

The attributes you can use in this position include:

  1. Conditional compilation: cfg and cfg_attr

  2. Controlling lints: allow, warn, deny, and forbid

  3. Helper attributes used by procedural macro attributes applied to items.

    Our hope is that this will be used to provide more readable and ergonomic macro-based DSLs throughout the ecosystem.

Borrow check migration warnings are hard errors in Rust 2018

In the 1.35.0 release, we announced that NLL had come to Rust 2015 after first being released for Rust 2018 in 1.31.

As noted in the 1.35.0 release, the old borrow checker had some bugs which would allow memory unsafety. These bugs were fixed by the NLL borrow checker. As these fixes broke some stable code, we decided to gradually phase in the errors by checking if the old borrow checker would accept the program and the NLL checker would reject it. If so, the errors would instead become warnings.

With Rust 1.39.0, these warnings are now errors in Rust 2018. In the next release, Rust 1.40.0, this will also apply to Rust 2015, which will finally allow us to remove the old borrow checker, and keep the compiler clean.

If you are affected, or want to hear more, read Niko Matsakis's blog post.

More const fns in the standard library

With Rust 1.39.0, the following functions became const fn:

Additions to the standard library

In Rust 1.39.0 the following functions were stabilized:

Other changes

There are other changes in the Rust 1.39.0 release: check out what changed in Rust, Cargo, and Clippy.

Please also see the compatibility notes to check if you're affected by those changes.

Contributors to 1.39.0

Many people came together to create Rust 1.39.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Async-await on stable Rust!

do, 07/11/2019 - 01:00

On this coming Thursday, November 7, async-await syntax hits stable Rust, as part of the 1.39.0 release. This work has been a long time in development -- the key ideas for zero-cost futures, for example, were first proposed by Aaron Turon and Alex Crichton in 2016! -- and we are very proud of the end result. We believe that Async I/O is going to be an increasingly important part of Rust's story.

While this first release of "async-await" is a momentous event, it's also only the beginning. The current support for async-await marks a kind of "Minimum Viable Product" (MVP). We expect to be polishing, improving, and extending it for some time.

Already, in the time since async-await hit beta, we've made a lot of great progress, including making some key diagnostic improvements that help to make async-await errors far more approachable. To get involved in that work, check out the Async Foundations Working Group; if nothing else, you can help us by filing bugs about polish issues or by nominating those bugs that are bothering you the most, to help direct our efforts.

Many thanks are due to the people who made async-await a reality. The implementation and design would never have happened without the leadership of cramertj and withoutboats, the implementation and polish work from the compiler side (davidtwco, tmandry, gilescope, csmoe), the core generator support that futures builds on (Zoxc), the foundational work on Future and the Pin APIs (aturon, alexcrichton, RalfJ, pythonesque), and of course the input provided by so many community members on RFC threads and discussions.

Major developments in the async ecosystem

Now that async-await is approaching stabilization, all the major Async I/O runtimes are at work adding and extending their support for the new syntax:

Async-await: a quick primer

(This section and the next are reproduced from the "Async-await hits beta!" post.)

So, what is async await? Async-await is a way to write functions that can "pause", return control to the runtime, and then pick up from where they left off. Typically those pauses are to wait for I/O, but there can be any number of uses.

You may be familiar with the async-await from JavaScript or C#. Rust's version of the feature is similar, but with a few key differences.

To use async-await, you start by writing async fn instead of fn:

async fn first_function() -> u32 { .. }

Unlike a regular function, calling an async fn doesn't have any immediate effect. Instead, it returns a Future. This is a suspended computation that is waiting to be executed. To actually execute the future, use the .await operator:

async fn another_function() {
    // Create the future:
    let future = first_function();

    // Await the future, which will execute it (and suspend
    // this function if we encounter a need to wait for I/O):
    let result: u32 = future.await;
    ...
}

This example shows the first difference between Rust and other languages: we write future.await instead of await future. This syntax integrates better with Rust's ? operator for propagating errors (which, after all, are very common in I/O). You can simply write future.await? to await the result of a future and propagate errors. It also has the advantage of making method chaining painless.

Zero-cost futures

The other difference between Rust futures and futures in JS and C# is that they are based on a "poll" model, which makes them zero cost. In other languages, invoking an async function immediately creates a future and schedules it for execution: awaiting the future isn't necessary for it to execute. But this implies some overhead for each future that is created.

In contrast, in Rust, calling an async function does not do any scheduling in and of itself, which means that we can compose a complex nest of futures without incurring a per-future cost. As an end-user, though, the main thing you'll notice is that futures feel "lazy": they don't do anything until you await them.

If you'd like a closer look at how futures work under the hood, take a look at the executor section of the async book, or watch the excellent talk that withoutboats gave at Rust LATAM 2019 on the topic.


We believe that having async-await on stable Rust is going to be a key enabler for a lot of new and exciting developments in Rust. If you've tried Async I/O in Rust in the past and had problems -- particularly if you tried the combinator-based futures of the past -- you'll find async-await integrates much better with Rust's borrowing system. Moreover, there are now a number of great runtimes and other libraries available in the ecosystem to work with. So get out there and build stuff!

Categorieën: Mozilla-nl planet