Mozilla Nederland: The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/

Ehsan Akhgari: Quantum Flow Engineering Newsletter #18

vr, 04/08/2017 - 10:04

This has been a busy week. A lot of fixes have landed, setting up the Firefox 57 cycle for a good start. On the platform side, a notable change that will be in the upcoming Nightly is the fix for document.cookie using synchronous IPC. This super popular API call slows down various web pages in Firefox today, and starting tomorrow, the affected pages should see a great speedup. I have sometimes seen the slowdown caused by this one issue amount to a second or more in some situations. Thanks a lot to Amy and Josh for their hard work on this feature. Readers of these newsletters know that the work on fixing this issue has gone on for a long time, and it’s great to see it land early in the cycle.

On the front-end side, more and more of Photon’s UI changes are landing in Nightly. One of the overall changes I have seen is that the interface is starting to feel a lot more responsive and snappy than it did around this time last year. This is due to many different details. A lot of work has gone into fixing rough edges in the performance of the existing code, some of which I have covered, but most of which falls under the Photon Performance project. The new UI is also built with performance in mind: where animations are used, for example, they run on the compositor rather than on the main thread. All of the pieces of this performance puzzle are coming together nicely, and it is great to see this hard work paying off.

On the Speedometer front, things are progressing at a fast pace. We have been fixing issues from our earlier findings, which has somewhat slowed down the pace of finding new issues to work on. The SpiderMonkey team, however, hasn’t waited around and keeps finding new optimization opportunities through further investigation. There is still more work to be done there!

I will now move on to acknowledge the great work of all of those who helped make Firefox faster last week. I hope I am not mistakenly forgetting any names here!


Cameron Kaiser: And now for something completely different: when good Mac apps go bought

vr, 04/08/2017 - 09:38
The Unarchiver, one of the more handy tools for, uh, unarchiving, um, archives, is now a commercial app. 3.11.1 can run on 10.4 PowerPC, but the "legacy" download they offer has a defective resource fork, and the source code is no longer available.

The same author also wrote an image display tool called Xee. 2.2 would run on 10.4 PowerPC. After Unarchiver's purchase, it seems Xee was part of the same deal and now only Xee 3 is available.

Fortunately my inveterate digital hoarding habit came in handy, because I managed to get working archives of both The Unarchiver 3.11.1 and Xee 2.2, along with the source code, so I can try to maintain them for our older platforms. (Xee I have compiling and running happily; Unarchiver will need a little work, but it's doable.) But that's kind of the trick, isn't it? If I hadn't thought to grab these and their source code a number of months ago as part of my standard operating procedure, they'd be gone, probably forever. I'm sure MacPaw (the new owners) are good people, but I don't foresee them putting any time in to toss a bone to legacy Power Macs, let alone actually continuing support. When these things happen without warning to a long-time free, open source utility, that's even worse.

That said, the X Lossless Decoder, which I use regularly to rip CDs and change audio formats and did donate to, is still trucking along. Here's a real Universal app: it runs on any system from 10.4 to 10.12, on PowerPC or Intel, and the latest version of July 29, 2017 actually fixes a Tiger PowerPC bug. I'm getting worried about its future on our old machines, though: it's a 32-bit Intel app, and Apple has ominously said High Sierra "will be the last macOS release to support 32-bit apps without compromise." They haven't said what they mean by that, but my guess is that 10.14 might be the first release where Intel 32-bit Carbon apps either no longer run or have certain features disabled, and it's very possible 10.15 might not run any 32-bit applications (Carbon or Cocoa) at all. It might be possible to build 64-bit Intel and still lipo it with a 32-bit PowerPC executable, but these are the kinds of situations that get previously working configurations tossed in the eff-it bucket, especially if the code bases for each side of the fat binary end up diverging substantially. I guess I'd better grab a source snapshot too just in case.

As these long-lived apps founder and obsolesce, if you want something kept right, you keep it yourself.


David Teller: Towards a JavaScript Binary AST

do, 03/08/2017 - 23:31
In this blog post, I would like to introduce the JavaScript Binary AST, an ongoing project that we hope will help make webpages load faster, along with a number of other benefits.

A little background

Over the years, JavaScript has grown from one of the slowest scripting languages available to a high-performance powerhouse, fast enough that it can run desktop, server, mobile and even embedded applications, whether through web browsers or other environments.

Air Mozilla: Intern Presentations: Round 3: Thursday, August 3rd

do, 03/08/2017 - 22:00

Thursday, August 3rd Intern Presentations: 10 presenters. Time: 1:00PM - 3:30PM (PDT); each presenter will start every 15 minutes. 8 presenters in SF, 2 in PDX.


Firefox Test Pilot: Gaining Insights Early: Concept Evaluation for Firefox Send

do, 03/08/2017 - 21:23

Earlier this week, the Firefox Send experiment launched in Test Pilot. The experiment allows people to transfer files in a simple and secure way using Firefox.

The idea for Send stemmed from what we’ve learned from past research about what people do online. For instance, from the multi-phase study on workflows conducted by the Firefox User Research team last year, we know that transferring files — images, text documents, videos — to oneself or other people is an atomic step within many common workflows. Once the Test Pilot team has an idea for an experiment, one of the first steps in our process is to evaluate the viability of that idea or concept.

Concept evaluation vs. usability testing

During the early idea stage of an experiment, our user research efforts focus on trying to understand the problem space the experiment is intended to address. We do this by seeking answers to the following types of questions:

  • Does the experiment idea address an existing need participants have?
  • Is the experiment intelligible to participants who might use it?
  • Which, if any, mental models do participants most closely associate with the experiment idea?
  • Is the experiment idea comparable to any tools participants are already using? If so, how is the experiment idea unique?
  • What expectations do participants have of the experiment?
  • Do participants have any concerns about the experiment?

The findings from this early research help determine the kinds and amount of effort that our product, UX, visual design, content strategy, and engineering team members will invest in the experiment moving forward. At this stage, we are less concerned with how usable an early design of an experiment is because we know that as we grow our understanding of existing needs, behaviors, mental models, and attitudes, the design could change significantly.

Research design for Firefox Send

To evaluate the idea for Firefox Send, we conducted remote interviews with individuals who reported having transferred a file online in the last week. Five participants were recruited to represent a mix of age, gender, ethnicity, U.S. location, level of educational attainment, household income, and primary desktop browser. Participants were asked to complete a series of think-aloud tasks using a desktop prototype of Send and were also asked about the ways they currently transfer files online. Each interview lasted approximately 45 minutes, and in addition to the researcher, other members of the Test Pilot team joined the interviews as notetakers and observers.

One of the screens from the prototype used in the Firefox Send concept evaluation

What We Learned

  • Participants use a variety of methods to transfer files online. Email was the most common method reported. Methods like email that involve uploading locally-stored files in order to share were perceived as more secure — because of greater perceived control over local files — than sharing that commenced from files already in the cloud.
  • Participants could not tell what the Send feature did based on the early UI for the browser toolbar.
  • Participants expected to be able to email the share link from the Send UI, which would require integration with email services.
  • Participants were unclear whether people receiving files transferred via Send had to use Firefox to access the files.
  • Participants expected file view and download settings to be more flexible than suggested by the prototype UI. For example, one participant noted that the default one download limit would be problematic for her because she sometimes needs to download the same file on her work computer and then on her computer at home.
  • No participants expressed preference for cloud versus peer-to-peer file sharing.

What We Needed to Do After Research

The Send concept evaluation study produced a list of 15 detailed recommendations for the Test Pilot team. To summarize, we needed to take three actions:

  1. Make the Send functionality more discernible (and accessible) in the browser
  2. Make the secure nature of transferring files via Send apparent
  3. Give people more control over how long and/or how many times shared files can be viewed

What’s next

Now that Firefox Send has launched, we will monitor metrics and conduct additional qualitative research to understand the usability of the experiment for people transferring files as well as people receiving files. Give Send a try.

The report

The full report contains detailed findings and recommendations

View the full report from this study. The Test Pilot team is working hard to make all of our user research reports public in the remainder of this year. As we do this, links to other study-related documents may break. If you have any questions about details related to Test Pilot user research, please contact Sharon Bautista at: sharon@mozilla.com

Gaining Insights Early: Concept Evaluation for Firefox Send was originally published in Firefox Test Pilot on Medium, where people are continuing the conversation by highlighting and responding to this story.


Air Mozilla: Reps Weekly Meeting Aug. 03, 2017

do, 03/08/2017 - 18:00

Reps Weekly Meeting Aug. 03, 2017: This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


Mozilla Addons Blog: Extension Examples: See the APIs in Action

do, 03/08/2017 - 17:00

In the past year, we’ve added a tremendous amount of add-on documentation to MDN Web Docs. One resource we’ve spent time building out is the Extension Examples repository on GitHub, where you can see sample extension code using various APIs. This is helpful for seeing how WebExtensions APIs are used in practice, and it is especially helpful for people just getting started building extensions.

To make the example extensions easier to understand, there is a short README page for each example. There is also a page on MDN Web Docs that lists the JavaScript APIs used in each example.

With the work the Firefox Developer Tools team has completed for add-on developers, it is easier to temporarily install extensions in Firefox for debugging purposes. Feel free to try it out with the example extensions.

As we ramp up our efforts for Firefox 57, expect more documentation and examples to be available on MDN Web Docs and our GitHub repository. There are currently 47 example extensions, and you can help grow the collection by following these instructions.

Let us know in the comments if you find these examples useful, or contact us using these methods. We encourage you to contribute your own examples as well!

Thank you to all who have contributed to growing the repository.

The post Extension Examples: See the APIs in Action appeared first on Mozilla Add-ons Blog.


Mozilla Open Innovation Team: Frameworks for Governance, Incentive and Consequence in FOSS

do, 03/08/2017 - 16:44

This is the fourth in a series of posts reporting findings from research into the state of D&I in Mozilla’s communities. The current state of our communities is a mix when it comes to inclusivity: we can do better, and as with the others, this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

Mobilizing the Community Participation Guidelines

In May 2017 after extensive community feedback we revised our guidelines to be much more specific, comprehensible, and actionable.

Click to view Mozilla’s Community Participation Guidelines in Full

In community D&I research interviews, we asked people what they knew about Mozilla’s Community Participation Guidelines. A majority were not aware of the CPG or, we suspect, were guessing based on what they knew about Codes of Conduct generally. And while awareness is growing thanks to circulated feedback and learning opportunities, there remain many ‘myths to bust’ around our guidelines: who they apply to, and why they are as much a toolkit for community health and empowerment as they are for consequence.


And this is not only true for Mozilla. In recent conversations with other open project leaders, I’ve started to see that this moment in time is a pivotal one for all open projects that have adopted a Code of Conduct: we’re at a critical stage in making inclusive open project governance effective, and understood — real. While effectively enforcing our guidelines will at times feel uncomfortable, and will even be met with resistance, there are far more people for whom the resulting empowerment, safety and inclusion will be something to celebrate and embrace.

Photo credit: florianric via Visual Hunt

I tried to imagine the components necessary for developing a framework that embeds our CPG in our community workflows and culture. (And as much as possible we need to collaborate with other open source communities, building on and extending each other’s work.)

Education — Curated learning resources, online and in-person that deliver meaningful and personalized opportunities to interact with the guidelines, and ways to measure educational approaches to inclusion across differences including cultural and regional ones.

Culture & Advocacy — Often the first time people interact with a Code of Conduct it’s in response to something negative — the CPG needs champions and experiments in building trust, self-reflection, empowerment, psychological safety, and opportunity.

Designed Reporting & Resolution Processes — Well-designed resolution processes mean getting serious about building templates, resources, investigative methods, and decision-making workflows. We’re starting to do just this, testing it with regional community conflicts. It also means building on the work of our peers in other open communities; and we’re starting to do that too.

Consultation and Consensus — As part of the resolution process, understanding and engaging key stakeholders and important perspectives will drive effective resolutions and key health initiatives. Right now this is showing up in the formation of conflict-specific working groups, but it should also leverage what we’ve learned from the past.

Development — Strengthening our guidelines by treating them as a living document, improving as we learn.

Standardizing Incentive

Photo credit: mozillaeu via VisualHunt

Mozilla communities are filled with opportunity — opportunity to learn, grow, innovate, build, collaborate and be the change the world needs. And this enthusiasm overflowed in interviews — even when gatekeeping and other negative attributes of community health were present.


Despite positive sentiment and optimism, we heard a great deal of frustration (and some tears) when people were asked to discuss the elements of participatory design that made contributing feel valuable to them. Opportunity, recognition and resources were perceived to be largely dependent on staff and core contributors. Additionally, recognition itself varies wildly across the project, to the point of omitting or inflating achievement and impact on the project. We heard that those best at being seen are also the loudest and most consistent at seeking recognition — further proof that meritocracy doesn’t exist.

While feeling valued was important, our interviews highlighted the need for contributors to surface and curate their accomplishments in formats that can be validated by external sources as having real-world-value.

“Social connections are the only way volunteers progress: You are limited by what you know, and who you know, not by what you do” (community interview)

Emerging from this research was a sense that standards for recognition across the project would be incredibly valuable in combating variability, creating visions for success and surfacing achievements. Minimally, standards help people understand where they are going and the potential of their success; most optimistically, they make contributing a portal for learning and achievement that rivals formal education and mentorship programs. The success of diverse groups is almost certainly dependent on getting recognition right.

If you are involved in open source project governance, please reach out! I would love to talk to you — to build a bridge between our work and yours ❤

Our next post in this series, ‘Designing Inclusive Events’, will be published in the second week of August.

Frameworks for Governance, Incentive and Consequence in FOSS was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Nick Fitzgerald: Scrapmetal — Scrap Your Rust Boilerplate

do, 03/08/2017 - 09:00

TLDR: I translated some of the code and ideas from Scrap Your Boilerplate: A Practical Design Pattern for Generic Programming by Lämmel and Peyton Jones to Rust and it’s available as the scrapmetal crate.

Say we work on some software that models companies, their departments, sub-departments, employees, and salaries. We might have some type definitions similar to this:

pub struct Company(pub Vec<Department>);
pub struct Department(pub Name, pub Manager, pub Vec<SubUnit>);

pub enum SubUnit {
    Person(Employee),
    Department(Box<Department>),
}

pub struct Employee(pub Person, pub Salary);
pub struct Person(pub Name, pub Address);
pub struct Salary(pub f64);

pub type Manager = Employee;
pub type Name = &'static str;
pub type Address = &'static str;

One of our companies has had a morale problem lately, and we want to transform it into a new company where everyone is excited to come in every Monday through Friday morning. But we can’t really change the nature of the work, so we figure we can just give the whole company a 10% raise and call it close enough. This requires writing a bunch of functions with type signatures like fn(self, k: f64) -> Self for every type that makes up a Company, and since we recognize the pattern, we should be good Rustaceans and formalize it with a trait:

pub trait Increase: Sized {
    fn increase(self, k: f64) -> Self;
}

A company with increased employee salaries is made by increasing the salaries of each of its departments’ employees:

impl Increase for Company {
    fn increase(self, k: f64) -> Company {
        Company(
            self.0
                .into_iter()
                .map(|d| d.increase(k))
                .collect()
        )
    }
}

A department with increased employee salaries is made by increasing its manager’s salary and the salary of every employee in its sub-units:

impl Increase for Department {
    fn increase(self, k: f64) -> Department {
        Department(
            self.0,
            self.1.increase(k),
            self.2
                .into_iter()
                .map(|s| s.increase(k))
                .collect(),
        )
    }
}

A sub-unit is either a single employee or a sub-department, so either increase the employee’s salary, or increase the salaries of all the people in the sub-department respectively:

impl Increase for SubUnit {
    fn increase(self, k: f64) -> SubUnit {
        match self {
            SubUnit::Person(e) => {
                SubUnit::Person(e.increase(k))
            }
            SubUnit::Department(d) => {
                SubUnit::Department(Box::new(d.increase(k)))
            }
        }
    }
}

An employee with an increased salary is that same employee with the salary increased:

impl Increase for Employee {
    fn increase(self, k: f64) -> Employee {
        Employee(self.0, self.1.increase(k))
    }
}

And finally, a lone salary can be increased:

impl Increase for Salary {
    fn increase(self, k: f64) -> Salary {
        Salary(self.0 * (1.0 + k))
    }
}

Pretty straightforward.

But at the same time, that’s a whole lot of boilerplate. The only interesting part that has anything to do with actually increasing salaries is the impl Increase for Salary. The rest of the code is just traversal of the data structures. If we were to write a function to rename all the employees in a company, most of this code would remain the same. Surely there’s a way to factor all this boilerplate out so we don’t have to manually write it all the time?

In the paper Scrap Your Boilerplate: A Practical Design Pattern for Generic Programming, Lämmel and Peyton Jones show us a way to do just that in Haskell. And it turns out the ideas mostly translate into Rust pretty well, too. This blog post explores that translation, following much the same outline from the original paper.

When we’re done, we’ll be able to write the exact same salary increasing functionality with just a couple lines:

// Definition
let increase = |s: Salary| Salary(s.0 * 1.1);
let mut increase = Everywhere::new(Transformation::new(increase));

// Usage
let new_company = increase.transform(old_company);

We have a few different moving parts involved here:

  • A function that transforms a specific type: FnMut(T) -> T. In the increase example this is the closure |s: Salary| Salary(s.0 * 1.1).

  • We have Transformation::new, which lifts the transformation function from transforming a single, specific type (FnMut(T) -> T) into transforming all types (for<U> FnMut(U) -> U). If we call this new transformation with a value of type T, then it will apply our T-specific transformation function. If we call it with a value of any other type, it simply returns the given value.

    Of course, Rust doesn’t actually support rank-2 types, but we can work around this by passing a trait with a generic method, anywhere we wanted to pass for<U> FnMut(U) -> U as a parameter. This trait gets implemented by Transformation:

    // Essentially, for<T> FnMut(T) -> T
    pub trait GenericTransform {
        fn transform<U>(&mut self, t: U) -> U;
    }
  • Next is Everywhere::new, whose result is also a for<U> FnMut(U) -> U (aka implements the GenericTransform trait). This is a combinator that takes a generic transformation function, and traverses a tree of values, applying the generic transformation function to each value along the way.

  • Finally, behind the scenes there are two traits: Term and Cast. The former provides enumeration of a value’s immediate edges in the value tree. The latter enables us to ask some generic U if it is a specific T. These traits completely encapsulate the boilerplate we’ve been trying to rid ourselves of, and neither requires any implementation on our part. Term can be generated mechanically with a custom derive, and Cast can be implemented (in nightly Rust) with specialization.

Next, we’ll walk through the implementation of each of these bits.

Implementing Cast

The Cast trait is defined like so:

trait Cast<T>: Sized {
    fn cast(self) -> Result<T, Self>;
}

Given some value, we can try and cast it to a T or if that fails, get the original value back. You can think of it like instanceof in JavaScript, but without walking some prototype or inheritance chain. In the original Haskell, cast returns the equivalent of Option<T>, but we need to get the original value back if we ever want to use it again because of Rust’s ownership system.

To implement Cast requires specialization, which is a nightly Rust feature. We start with a default blanket implementation of Cast that fails to perform the conversion:

impl<T, U> Cast<T> for U {
    default fn cast(self) -> Result<T, Self> {
        Err(self)
    }
}

Then we define a specialization for when Self is T that allows the cast to succeed:

impl<T> Cast<T> for T {
    fn cast(self) -> Result<T, Self> {
        Ok(self)
    }
}

That’s it!

Here is Cast in action:

assert_eq!(Cast::<bool>::cast(1), Err(1));
assert_eq!(Cast::<bool>::cast(true), Ok(true));

Implementing Transformation

Once we have Cast, implementing generic transformations is easy. If we can cast the value to our underlying non-generic transformation function’s input type, then we call it. If we can’t, then we return the given value:

pub struct Transformation<F, U>
where
    F: FnMut(U) -> U,
{
    f: F,
}

impl<F, U> GenericTransform for Transformation<F, U>
where
    F: FnMut(U) -> U,
{
    fn transform<T>(&mut self, t: T) -> T {
        // Try to cast the T into a U.
        match Cast::<U>::cast(t) {
            // Call the transformation function and then cast
            // the resulting U back into a T.
            Ok(u) => match Cast::<T>::cast((self.f)(u)) {
                Ok(t) => t,
                Err(_) => unreachable!("If T=U, then U=T."),
            },
            // Not a U, return unchanged.
            Err(t) => t,
        }
    }
}

For example, we can lift the logical negation function into a generic transformer. For booleans, it will return the complement of the value, for other values, it leaves them unchanged:

let mut not = Transformation::new(|b: bool| !b);
assert_eq!(not.transform(true), false);
assert_eq!(not.transform("str"), "str");

Implementing Term

The next piece of the puzzle is Term, which enumerates the direct children of a value. It is defined as follows:

pub trait Term: Sized {
    fn map_one_transform<F>(self, f: &mut F) -> Self
    where
        F: GenericTransform;
}

In the original Haskell, map_one_transform is called gmapT for “generic map transform”, and as mentioned earlier GenericTransform is a workaround for the lack of rank-2 types, and would otherwise be for<U> FnMut(U) -> U.

It is important that map_one_transform does not recursively call its children’s map_one_transform methods. We want a building block for making all different kinds of traversals, not one specific traversal hard coded.

If we were to implement Term for Employee, we would write this:

impl Term for Employee {
    fn map_one_transform<F>(self, f: &mut F) -> Self
    where
        F: GenericTransform,
    {
        Employee(f.transform(self.0), f.transform(self.1))
    }
}

And for SubUnit, it would look like this:

impl Term for SubUnit {
    fn map_one_transform<F>(self, f: &mut F) -> Self
    where
        F: GenericTransform,
    {
        match self {
            SubUnit::Person(e) => SubUnit::Person(f.transform(e)),
            SubUnit::Department(d) => SubUnit::Department(f.transform(d)),
        }
    }
}

On the other hand, a floating point number has no children to speak of, and so it would do less:

impl Term for f64 {
    fn map_one_transform<F>(self, _: &mut F) -> Self
    where
        F: GenericTransform,
    {
        self
    }
}

Note that each of these implementations is driven purely by the structure of the implementation’s type: enums transform whichever variant they are, structs and tuples transform each of their fields, etc. It’s 100% mechanical and 100% uninteresting.

It’s easy to write a custom derive for implementing Term. After that’s done, we just add #[derive(Term)] to our type definitions:

#[derive(Term)]
pub struct Employee(pub Person, pub Salary);

// Etc...

Implementing Everywhere

Everywhere takes a generic transformation and then uses Term::map_one_transform to recursively apply it to the whole tree. It does so in a bottom up, left to right order.

Its definition and constructor are trivial:

pub struct Everywhere<F>
where
    F: GenericTransform,
{
    f: F,
}

impl<F> Everywhere<F>
where
    F: GenericTransform,
{
    pub fn new(f: F) -> Everywhere<F> {
        Everywhere { f }
    }
}

Then, we implement GenericTransform for Everywhere. First we recursively map across the value’s children, then we transform the given value. This transforming of children first is what causes the traversal to be bottom up.

impl<F> GenericTransform for Everywhere<F>
where
    F: GenericTransform,
{
    fn transform<T>(&mut self, t: T) -> T
    where
        T: Term,
    {
        let t = t.map_one_transform(self);
        self.f.transform(t)
    }
}

If instead we wanted to perform a top down traversal, our choice to implement mapping non-recursively for Term enables us to do so:

impl<F> GenericTransform for EverywhereTopDown<F>
where
    F: GenericTransform,
{
    fn transform<T>(&mut self, t: T) -> T
    where
        T: Term,
    {
        // Calling `transform` before `map_one_transform` now.
        let t = self.f.transform(t);
        t.map_one_transform(self)
    }
}

So What?

At this point, you might be throwing up your hands and complaining about all the infrastructure we had to write in order to get to the two line solution for increasing salaries in a company. Surely all this infrastructure is at least as much code as the original boilerplate? Yes, but this infrastructure can be shared for all the transformations we ever write, and not even just for companies, but values of all types!

For example, if we wanted to make sure every employee in the company was a good culture fit, we might want to rename them all to “Juan Offus”. This is all the code we’d have to write:

// Definition
let rename = |p: Person| Person("Juan Offus", p.1);
let mut rename = Everywhere::new(Transformation::new(rename));

// Usage
let new_company = rename.transform(old_company);

Finally, the paper notes that this technique is more future proof than writing out the boilerplate:

Furthermore, if the data types change – for example, a new form of SubUnit is added – then the per-data-type boilerplate code must be re-generated, but the code for increase [..] is unchanged.

Queries

What if instead of consuming a T and transforming it into a new T, we wanted to non-destructively produce some other kind of result type R? In the Haskell code, generic queries have this type signature:

forall a. Term a => a -> R

Translating this into Rust, thinking about ownership and borrowing semantics, and using a trait with a generic method to avoid rank-2 function types, we get this:

// Essentially, for<T> FnMut(&T) -> R
pub trait GenericQuery<R> {
    fn query<T>(&mut self, t: &T) -> R
    where
        T: Term;
}

Similar to the Transformation type, we have a Query type, which lifts a query function for a particular U type (FnMut(&U) -> R) into a generic query over all types (for<T> FnMut(&T) -> R aka GenericQuery). The catch is that we need some way to create a default instance of R for the cases where our generic query function is invoked on a value that isn’t of type &U. This is what the D: FnMut() -> R is for.

pub struct Query<Q, U, D, R>
where
    Q: FnMut(&U) -> R,
    D: FnMut() -> R,
{
    make_default: D,
    query: Q,
}

When constructing a Query, and our result type R implements the Default trait, we can use Default::default as D:

impl<Q, U, R> Query<Q, U, fn() -> R, R>
where
    Q: FnMut(&U) -> R,
    R: Default,
{
    pub fn new(query: Q) -> Query<Q, U, fn() -> R, R> {
        Query {
            make_default: Default::default,
            query,
        }
    }
}

Otherwise, we require a function that we can invoke to give us a default value when we need one:

impl<Q, U, D, R> Query<Q, U, D, R>
where
    Q: FnMut(&U) -> R,
    D: FnMut() -> R,
{
    pub fn or_else(make_default: D, query: Q) -> Query<Q, U, D, R> {
        Query {
            make_default,
            query,
        }
    }
}

Here we can see Query in action:

let mut char_to_u32 = Query::or_else(|| 42, |c: &char| *c as u32);
assert_eq!(char_to_u32.query(&'a'), 97);
assert_eq!(char_to_u32.query(&'b'), 98);
assert_eq!(char_to_u32.query("str is not a char"), 42);

Next, we extend the Term trait with a map_one_query method, similar to map_one_transform, that applies the generic query to each of self’s direct children.

Note that this produces zero or more R values, not a single R! The original Haskell code returns a list of R values, and its laziness allows one to only actually compute as many as end up getting used. But Rust is not lazy, and is much more explicit about things like physical layout and storage of values. We don’t want to allocate a (generally small) vector on the heap for every single map_one_query call. Instead, we use a callback interface, so that callers can decide if and when to heap allocate the results.

pub trait Term: Sized {
    // ...

    fn map_one_query<Q, R, F>(&self, query: &mut Q, each: F)
    where
        Q: GenericQuery<R>,
        F: FnMut(&mut Q, R);
}

Implementing map_one_query for Employee would look like this:

impl Term for Employee {
    // ...

    fn map_one_query<Q, R, F>(&self, q: &mut Q, mut f: F)
    where
        Q: GenericQuery<R>,
        F: FnMut(&mut Q, R),
    {
        let r = q.query(&self.0);
        f(q, r);

        let r = q.query(&self.1);
        f(q, r);
    }
}

And implementing it for SubUnit like this:

impl Term for SubUnit {
    // ...

    fn map_one_query<Q, R, F>(&self, q: &mut Q, mut f: F)
    where
        Q: GenericQuery<R>,
        F: FnMut(&mut Q, R),
    {
        match *self {
            SubUnit::Person(ref p) => {
                let r = q.query(p);
                f(q, r);
            }
            SubUnit::Department(ref d) => {
                let r = q.query(d);
                f(q, r);
            }
        }
    }
}

Once again, map_one_query’s implementation directly falls out of the structure of the type: querying each field of a struct, matching on a variant and querying each of the matched variant’s children. It is also mechanically implemented inside #[derive(Term)].

The final querying puzzle piece is a combinator that puts the one-layer querying traversal together with generic query functions into a recursive querying traversal. This is very similar to the Everywhere combinator, but now we also need a folding function to reduce the multiple R values we get from map_one_query into a single resulting R value.

Here is its definition and constructor:

pub struct Everything<Q, R, F>
where
    Q: GenericQuery<R>,
    F: FnMut(R, R) -> R,
{
    q: Q,
    fold: F,
}

impl<Q, R, F> Everything<Q, R, F>
where
    Q: GenericQuery<R>,
    F: FnMut(R, R) -> R,
{
    pub fn new(q: Q, fold: F) -> Everything<Q, R, F> {
        Everything {
            q,
            fold,
        }
    }
}

We implement the Everything query traversal top down by querying the given value before mapping the query across its children and folding their results together. The wrapping into and unwrapping out of Options allow fold and the closure to take r by value; Option is essentially acting as a “move cell”.

impl<Q, R, F> GenericQuery<R> for Everything<Q, R, F>
where
    Q: GenericQuery<R>,
    F: FnMut(R, R) -> R,
{
    fn query<T>(&mut self, t: &T) -> R
    where
        T: Term,
    {
        let mut r = Some(self.q.query(t));
        t.map_one_query(self, |me, rr| {
            r = Some((me.fold)(r.take().unwrap(), rr));
        });
        r.unwrap()
    }
}

With Everything defined, we can perform generic queries! For example, to find the highest salary paid out in a company, we can query by grabbing an Employee’s salary (wrapped in an Option because we could have a shell company with no employees), and folding all the results together with std::cmp::max:

use std::cmp::max;

// Definition
let get_salary = |e: &Employee| Some(e.1.clone());
let mut query_max_salary = Everything::new(Query::new(get_salary), max);

// Usage
let max_salary = query_max_salary.query(&some_company);

If we were only querying for a single value, for example a Department with a particular name, the Haskell paper shows how we could leverage laziness to avoid traversing the whole search tree once we’ve found an acceptable answer. This is not an option for Rust. To have equivalent functionality, we would need to thread a break-or-continue control value from the query function through to map_one_query implementations. I haven’t implemented this, but if you want to, send me a pull request ;-)
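For the curious, here is one hypothetical shape such a break-or-continue value could take; this is not part of scrapmetal, and the name and variants are made up purely for illustration:

pub enum QueryControl<R> {
    // Fold this result in and keep traversing.
    Continue(R),
    // We have our answer: `map_one_query` implementations would stop
    // visiting further children, and the traversal would unwind with this.
    Break(R),
}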

However, we can prune subtrees from the search/traversal with the building blocks we’ve defined so far. For example, EverywhereBut is a generic transformer combinator that only transforms the subtrees for which its predicate returns true, and leaves other subtrees as they are:

pub struct EverywhereBut<F, P>
where
    F: GenericTransform,
    P: GenericQuery<bool>,
{
    f: F,
    predicate: P,
}

impl<F, P> GenericTransform for EverywhereBut<F, P>
where
    F: GenericTransform,
    P: GenericQuery<bool>,
{
    fn transform<T>(&mut self, t: T) -> T
    where
        T: Term,
    {
        if self.predicate.query(&t) {
            let t = t.map_one_transform(self);
            self.f.transform(t)
        } else {
            t
        }
    }
}

What's Next?

The paper continues by generalizing transforms, queries, and monadic transformations into brain-twisting generic folds over the value tree. Unfortunately, I don’t think that this can be ported to Rust, but maybe you can prove me wrong. I don’t fully grok it yet :)

If the generic folds can’t be expressed in Rust, that means that for every new kind of generic operation we might want to perform (eg add a generic cloning operation for<T> FnMut(&T) -> T) we would need to extend the Term trait and its custom derive. The consequences are that downstream crates are constrained to only use the operations predefined by scrapmetal, and can’t define their own arbitrary new operations.
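To make that concrete, here is a rough sketch, with invented names and not actual scrapmetal code, of the kind of extension each new operation would require, using the generic clone as the example:

// Essentially, for<T> FnMut(&T) -> T
pub trait GenericClone {
    fn clone_it<T>(&mut self, t: &T) -> T
    where
        T: Term;
}

pub trait Term: Sized {
    // ...the existing map_one_transform and map_one_query methods...

    // Every new kind of generic operation needs its own one-layer traversal
    // here, plus matching support in the #[derive(Term)] custom derive.
    fn map_one_clone<C>(&self, c: &mut C) -> Self
    where
        C: GenericClone;
}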

The paper is a fun read — go read it!

Finally, check out the scrapmetal crate, play with it, and send me pull requests. I still need to implement Term for all the types that are exported in the standard library, and would love some help in this department. I’d also like to figure out what kinds of operations should come prepackaged, what kinds of traversals and combinators should be built in, and of course some help implementing them.


Mozilla Open Policy & Advocacy Blog: Fighting Crime Shouldn’t Kill the Internet

do, 03/08/2017 - 02:39

The internet has long been a vehicle for creators and commerce. Yesterday, the Senate introduced a bill that would impose significant limitations on protections that have created vibrant online communities and content platforms, and allow users to create and consume uncurated material. While well intentioned, the liabilities placed on intermediaries in the bill would chill online speech and commerce. This is a counterproductive way to address sex trafficking, the ostensible purpose of the bill.

The internet, from its inception, started as a place to foster platforms and creators. In 1996 a law was passed that was intended to limit illegal content online – the Communications Decency Act (CDA). However, section 230 of the CDA provided protections for intermediaries: if you don’t know about particular illegal content, you aren’t held responsible for it. Intermediaries include platforms, websites, ISPs, and hosting providers, who as a result of CDA 230 are not held responsible for the actions of users. Section 230 is one of the reasons that YouTube, Facebook, Medium and online commenting systems can function without the technical burden or legal risk of screening every piece of user-generated content. Online platforms – love ‘em or hate ‘em – have enabled millions of less technical creators to share their work and opinions.

A fundamental part of the CDA is that it only punishes “knowing conduct” by intermediaries. This protection is missing from the changes this new bill proposes to CDA 230. The authors of the bill appear to be trying to preserve this core balance – but they don’t add the “knowing conduct” language back into the CDA. Because they put it in the sex trafficking criminal statute instead, only Federal criminal cases would need to show that the site knew about the problematic content. The bill would introduce gaps in liability protections into CDA 230 that are not so easily covered. State laws can target intermediary behavior too, and without a “knowing conduct” standard in CDA directly, platforms of all types could be held liable for conduct of others that they know nothing about. This is also true of the (new) Federal civil right of action that this bill introduces. That means a small drafting choice strikes at the heart of the safe harbor provisions that make CDA 230 a powerful driver of the internet.

This bill is not well scoped to solve the problem, and does not impact the actual perpetrators of sex trafficking. Counterintuitively, it disincentivizes content moderation by removing the safe harbor around moderation (including automated moderation) that companies develop, including to detect illegal content like trafficking. And why would a company want to help law enforcement find the criminal content on their service when someone is going to turn around and sue them for having had it in the first place? Small and startup companies who are relying on the safe harbor to be innovative would face a greater risk environment for any user activity they facilitate. And users would have a much harder time finding places to do business, create, and speak.

The bill claims that CDA was never intended to protect websites that promote trafficking – but it was carefully tailored to ensure that intermediaries are not responsible for the conduct of their users. It has to work this way in order for the internet we know and love to exist. That doesn’t mean law enforcement can’t do its job – the CDA was built to provide ways to go after the bad guys (and to incentivize intermediaries to help). The proposed bill doesn’t do that.

The post Fighting Crime Shouldn’t Kill the Internet appeared first on Open Policy & Advocacy.


Daniel Stenberg: The curl bus factor

wo, 02/08/2017 - 23:57

bus factor: the minimum number of team members that have to suddenly disappear from a project before the project stalls due to lack of knowledgeable or competent personnel.

Projects should strive to survive

If a project is worth using and deploying today and if it is a project worth sending patches to right now, it is also a project that should position itself to survive a loss of key individuals. However unlikely or unfortunate such an event would be.

Tools to calculate bus factor

All the available tools that determine the bus factor for a given project only look at the code: they check commits, code churn, or how many files each person has made a significant share of the changes in, and so on.

This number is really impossible to figure out without tools, and tools really cannot take “general knowledge” into account – things like “this person answers a lot of email on the list”, or “this person already has 48k reputation on Stack Overflow for responding to questions about the project”.

The bus factor as evaluated by a tool pretty much has to be based on the amount of code, the size of the code or the number of code changes, which may or may not be a good indicator of who knows what about the code. Those who author and commit changes probably have a good idea, but a real problem is that you can’t reverse that view and say that just because you didn’t commit or change something, you don’t know it. Do you know more about the code if you made many commits? Do you know more about the code if you changed more lines of code?

We can’t prove or assume lack of knowledge or interest by an absence of commits, edits or changes. And yet we can’t calculate bus factor if there’s no tool or way to calculate it.

A look at curl

curl is soon 20 years old and boasts some 22,000 commits. I’m the author of about 57% of them, and the second-most-active committer (who’s not involved anymore) has about 12%. That makes two committers responsible for 15.3k of the 22k commits. If we for simplicity calculate the bus factor from commit counts alone, then even if I stopped committing completely, others would need to contribute another 8,580 commits before the factor rises above 2 (the point at which the top 2 committers have less than 50% of the commits); at the current commit rate that equals about 5 years. And it would take about 3 years to just push the factor above 1. So even when new people join the project, they have a really hard time significantly changing the bus factor…

The image above shows the relative share of commits done in the curl project’s git source code repository (as a share of the total amount) by the top 4 commiters from January 1 2010 to July 5 2017 (click for higher resolution). The top dotted line shows the combined share of all four (at 82% right now) and the dark blue line is my share. You can see how my commit share has shrunk from 72% down to 57% over these last 7.5 years. If this trend holds, I’ll have less than 50% of the total commits done in curl in 3-4 years.

At the same time, the thicker light blue line that climbs up into the right is the total number of authors in the git repository, which recently surpassed 500 as you can see. (The line uses the right Y-axes)

We’re approaching 1600 individually named contributors thanked in the project and every release we do (we ship one every 8 weeks) has around 40 contributors, out of which typically around half are newcomers. The long tail is very long and the amount of drive-by just-once contributors is high. Also note how the number 1600 is way higher than the 500 something that has authored commits. Lots of people contribute in other ways.

When we ask our users “why don’t you contribute (more) to the project?” (which we do annually), what do they answer? They say it’s because 1) everything works, 2) I don’t have time, 3) things get fixed fast enough, 4) I don’t know the programming language, 5) I don’t have the energy.

Only in sixth place (at 5% in 2017) comes “other”, where some people actually say they wouldn’t know where to start, and so on.

All of this taken together: there are no visible signs of us suffering from having a low bus factor. Lots of signs that people can do things when they want to if I don’t do it. Lots of signs that the code and concepts are understood.

Lots of signs that a low bus factor is not a big problem here. Or perhaps rather that the bus factor isn’t really as low as any tool would calculate it.

What if I…

Do I know who would pick up the project and move on if I die today? No. We’re a 100% volunteer-driven project. We create one of the world’s most widely used software components (easily more than three billion instances and counting) but we don’t know who’ll be around tomorrow to work on it. I can’t know because that’s not how the project works.

Given the extremely wide use of our stuff, given the huge contributor base, given the vast amounts of documentation and tests I think it’ll work out.

A large bus factor doesn’t necessarily make the project a better place to ask questions. We’ve seen projects in the past where the N persons involved are all from the same company, and when that company removes its support for the project, those people all go away. High bus factor, no people to ask.

Finally, let me just add that I would of course love to have many more committers and contributors in the curl project, and I think we would be an even better project if we did. But that’s a separate issue.


About:Community: Firefox 55 new contributors

wo, 02/08/2017 - 22:23

With the release of Firefox 55, we are pleased to welcome the 108 developers who contributed their first code change to Firefox in this release, 89 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:


The Firefox Frontier: [Watch] Where Do You Draw The Line for TMI Online?

wo, 02/08/2017 - 20:21

As part of our tireless crusade to raise awareness around the issue of personal privacy on the web and to advocate for a free and open internet, we recently teamed … Read more

The post [Watch] Where Do You Draw The Line for TMI Online? appeared first on The Firefox Frontier.


Air Mozilla: The Joy of Coding - Episode 108

wo, 02/08/2017 - 19:00

The Joy of Coding - Episode 108: mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Weekly SUMO Community Meeting August 2, 2017

wo, 02/08/2017 - 18:00

Weekly SUMO Community Meeting August 2, 2017: This is the SUMO weekly call.


Botond Ballo: Trip Report: C++ Standards Meeting in Toronto, July 2017

wo, 02/08/2017 - 16:00
Summary / TL;DR

Project What’s in it? Status C++17 See below Draft International Standard published; on track for final publication by end of 2017 Filesystems TS Standard filesystem interface Part of C++17 Library Fundamentals TS v1 optional, any, string_view and more Part of C++17 Library Fundamentals TS v2 source code information capture and various utilities Published! Concepts TS Constrained templates Merged into C++20 with some modifications Parallelism TS v1 Parallel versions of STL algorithms Part of C++17 Parallelism TS v2 Task blocks, library vector types and algorithms and more Under active development Transactional Memory TS Transaction support Published! Uncertain whether this is headed towards C++20 Concurrency TS v1 future.then(), latches and barriers, atomic smart pointers Published! Parts of it headed for C++20 Concurrency TS v2 See below Under active development Networking TS Sockets library based on Boost.ASIO Voted for publication! Ranges TS Range-based algorithms and views Voted for publication! Coroutines TS Resumable functions, based on Microsoft’s await design Voted for publication! Modules TS A component system to supersede the textual header file inclusion model Preliminary Draft voted out for balloting by national standards bodies Numerics TS Various numerical facilities Under active development Graphics TS 2D drawing API Under active design review Reflection Code introspection and (later) reification mechanisms Introspection proposal passed core language and library design review; next stop is wording review. Targeting a Reflection TS. Contracts Preconditions, postconditions, and assertions Proposal passed core language and library design review; next stop is wording review.

Some of the links in this blog post may not resolve until the committee’s post-meeting mailing is published. If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Toronto, Canada (which, incidentally, is where I’m based). This was the second committee meeting in 2017; you can find my reports on previous meetings here (November 2016, Issaquah) and here (February 2017, Kona). These reports, particularly the Kona one, provide useful context for this post.

With the C++17 Draft International Standard (DIS) being published (and its balloting by national standards bodies currently in progress), this meeting was focused on C++20, and the various Technical Specifications (TS) we have in flight.

What’s the status of C++17?

From a technical point of view, C++17 is effectively done.

Procedurally, the DIS ballot is still in progress, and will close in August. Assuming it’s successful (which is widely expected), we will be in a position to vote to publish the final standard, whose content would be the same as the DIS with possible editorial changes, at the next meeting in November. (In the unlikely event that the DIS ballot is unsuccessful, we would instead publish a revised document labelled “FDIS” (Final Draft International Standard) at the November meeting, which would need to go through one final round of balloting prior to publication. In this case the final publication would likely happen in calendar year 2018, but I think the term “C++17” is sufficiently entrenched by now that it would remain the colloquial name for the standard nonetheless.)

C++20

With C++17 at the DIS stage, C++20 now has a working draft and is “open for business”; to use a development analogy, C++17 has “branched”, and the standard’s “trunk” is open for new development. Indeed, several changes have been voted into the C++20 working draft at this meeting.

Technical Specifications

This meeting was a particularly productive one for in-progress Technical Specifications. In addition to Concepts (which had already been published previously) being merged into C++20, three TSes – Coroutines, Ranges, and Networking – passed a publication vote this week, and a fourth, Modules, was sent out for its PDTS ballot (a ballot process that allows national standards bodies to vote and comment on the proposed TS, allowing the committee to incorporate their feedback prior to sending out a revised document for publication).

Coroutines TS

The Coroutines TS – which contains a stackless coroutine design, sometimes called co_await after one of the keywords it uses – had just been sent out for its PDTS ballot at the previous meeting. The results were in before this meeting began – the ballot had passed, with some comments. The committee made it a priority to get through all the comments at this meeting and draft any resulting revisions, so that the revised TS could be voted for final publication, which happened (successfully) during the closing plenary session.

Meanwhile, an independent proposal for stackful coroutines with a library-only interface is making its way through the Library groups. Attempts to unify the two varieties of coroutines into a single design seem to have been abandoned for now; the respective proposal authors maintain that the two kinds of coroutines are useful for different purposes, and could reasonably co-exist (no pun intended) in the language.

Ranges TS

The Ranges TS was sent out for its PDTS ballot two meetings ago, but due to the focus on C++17, the committee didn’t get through all of the resulting comments at the last meeting. That work was finished at this meeting, and this revised TS also successfully passed a publication vote.

Networking TS

Like the Ranges TS, the Networking TS was also sent out for its PDTS ballot two meetings ago, and resolving the ballot comments was completed at this meeting, leading to another successful publication vote.

Modules TS

Modules had come close to being sent out for its PDTS ballot at the previous meeting, but didn’t quite make it due to some procedural mis-communication (detailed in my previous report if you’re interested).

Modules is kind of in an interesting state. There are two relatively mature implementations (in MSVC and Clang), whose development either preceded or was concurrent with the development of the specification. Given this state of affairs, I’ve seen the following dynamic play out several times over the past few meetings:

  • a prospective user, or someone working on a new implementation (such as the one in GCC), comes to the committee seeking clarification about what happens in a particular scenario (like this one)
  • the two existing implementers consult their respective implementations, and give different answers
  • the implementers trace the difference in outcome back to a difference in the conceptual model of Modules that they have in their mind
  • the difference in the conceptual model, once identified, is discussed and reconciled by the committee, typically in the Evolution Working Group (EWG)
  • the implementers work with the Core Working Group (CWG) to ensure the specification wording reflects the new shared understanding

Of course, this is a desirable outcome – identifying and reconciling differences like this, and arriving at a specification that’s precise enough that someone can write a new implementation based purely on the spec, is precisely what we want out of a standards process. However, I can’t help but wonder if there isn’t a more efficient way to identify these differences – for example, by the two implementers actually studying each other’s implementations (I realize that’s complicated by the fact that one is proprietary…), or at least discussing their respective implementation strategies in depth.

That said, Modules did make good progress at this meeting. EWG looked at several proposed changes to the spec (I summarize the technical discussion below), CWG worked diligently to polish the spec wording further, and in the end, we achieved consensus to – finally – send out Modules for its PDTS ballot!

Parallelism TS v2

The Parallelism TS v2 (working draft here) picked up a new feature, vector and wavefront policies. Other proposals targeting it, like vector types and algorithms, are continuing to work their way through the library groups.

Concurrency TS v2

SG 1, the Study Group that deals with concurrency and parallelism, reviewed several proposals targeting the Concurrency TS v2 (which does not yet have a working draft) at this meeting, including a variant of joining_thread with cooperative cancellation, lock-free programming techniques for reclamation, and stackful coroutines (which I’ve mentioned above in connection with the Coroutines TS).

Executors are still likely slated for a separate TS. The unified proposal presented at the last meeting has been simplified as requested, to narrow its scope to something manageable for initial standardization.

Merging Technical Specifications Into C++20

We have some technical specifications that are already published and haven’t been merged into C++17, and are thus candidates for merging into C++20. I already mentioned that Concepts was merged with some modifications (details below).

Parts of the Concurrency TS are slated to be merged into C++20: latches, with barriers to hopefully follow after some design issues are ironed out, and an improved version of atomic shared pointers. future.then() is going to require some more iteration before final standardization.

The Transactional Memory TS currently has only one implementation; the Study Group that worked on it hopes for some more implementation and usage experience prior to standardization.

The Library Fundamentals TS v2 seems to be in good shape to be merged into C++20, though I’m not sure of the exact status / concrete plans.

In addition to the TSes that are already published, many people are eager to see the TSes that were just published (Coroutines, Ranges, and Networking), as well as Modules, make it into C++20 too. I think it’s too early to try and predict whether they will make it. From a procedural point of view, there is enough time for all of these to complete their publication process and be merged in the C++20 timeframe. However, it will really depend on how much implementation and use experience these features get between now and the C++20 feature-complete date (sometime in 2019), and what the feedback from that experience is.

Future Technical Specifications

Finally, I’ll mention some planned future Technical Specifications that don’t have an official project or working draft yet:

Reflection

A proposal for static introspection (sometimes called “reflexpr” after the keyword it uses; see its summary, design, and specification for details) continues to head towards a Reflection TS. It has been approved by SG 7 (the Reflection and Metaprogramming Study Group) and the Evolution Working Group at previous meetings. This week, it was successfully reviewed by the Library Evolution Working Group, allowing it to move on to the Core and Library groups going forward.

Meanwhile, SG 7 is continuing to look at more forward-looking reflection and metaprogramming topics, such as a longer-term vision for metaprogramming, and a proposal for metaclasses (I talk more about these below).

Graphics

The Graphics TS, which proposes to standardize a set of 2D graphics primitives inspired by cairo, continues to be under active review by the Library Evolution Working Group; the latest draft spec can be found here. The proposal is close to being forwarded to the Library Working Group, but isn’t quite there yet.

While I wasn’t present for its discussion in LEWG, I’m told that one of the changes that have been requested is to give the library a stateless interface. This matches the feedback I’ve heard from Mozilla engineers knowledgeable about graphics (and which I’ve tried to relay, albeit unsuccessfully, at a previous meeting).

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

All proposals discussed in EWG at this meeting were targeting C++20 (except for Modules, where we discussed changes targeting the Modules TS). I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Default member initializers for bitfields. Previously, bit-fields couldn’t have default member initializers; now they can, with the “natural” syntax, int x : 5 = 42; (brace initialization is also allowed). A disambiguation rule was added to deal with parsing ambiguities (since e.g. an = could conceivably be part of the bitfield width expression).
  • Tweaking the rules for constructor template argument deduction. At the last meeting, EWG decided that for wrapper types like tuple, copying should be preferable to wrapping; that is, something like tuple t{tuple{1, 2}}; should deduce the type of t as tuple<int, int> rather than tuple<tuple<int, int>>. However, it had been unclear whether this guidance applied to types like vector that had std::initializer_list constructors. EWG clarified that copying should indeed be preferred to wrapping for those types, too. (The paper also proposed several other tweaks to the rules, which did not gain consensus to be approved just yet; the author will come back with a revised paper for those.)
  • Resolving a language defect related to defaulted copy constructors. This was actually a proposal that I co-authored, and it was prompted by me running into this language defect in Mozilla code (it prevented the storing of an nsAutoPtr inside a std::tuple). It’s also, to date, my first proposal to be approved by EWG!
  • A simpler solution to the problem that allowing the template keyword in unqualified-ids aimed to solve. While reviewing that proposal, the Core Working Group found that the relevant lookup rules could be tweaked so as to avoid having to use the template keyword at all. The proposed rules technically change the meaning of certain existing code patterns, but only ones that are very obscure and unlikely to occur in the wild. EWG was, naturally, delighted with this simplification.
  • An attribute to mark unreachable code. This proposal aims to standardize existing practice where a point in the code that the author expects cannot be reached is marked with __builtin_unreachable() or __assume(false). The initial proposal was to make the standardized version an [[unreachable]] attribute, but based on EWG’s feedback, this was revised to be a std::unreachable() library function instead. The semantics is that if such a call is reached during execution, the behaviour is undefined. (EWG discussed at length whether this facility should be tied to the Contracts proposal. The outcome was that it should not be; since “undefined behaviour” encompasses everything, we can later change the specified behaviour to be something like “call the contract violation handler” without that being a breaking change.) The proposal was sent to LEWG, which will design the library interface more precisely, and consider the possibility of passing in a compile-time string argument for diagnostic purposes.
  • Down with typename! This paper argued that in some contexts where typename is currently required to disambiguate a name nested in a dependent scope as being a type, the compiler can actually disambiguate based on the context, and proposed removing the requirement of writing typename in such contexts. The proposal passed with flying colours. (It was, however, pointed out that the proposal prevents certain hypothetical future extensions. For example, one of the contexts in question is y in using x = y;: that can currently only be a type. However, suppose we later want to add expression aliases to C++; this proposal rules out re-using the using x = y; syntax for them.)
  • Removing throw(). Dynamic exception specifications have been deprecated since C++11 (superseded by noexcept), and removed altogether in C++17, with the exception of throw() as an alias for noexcept(true). This paper proposed removing that last vestige, too, and EWG approved it. (The paper also proposed removing some other things that were deprecated in C++17, which were rejected; I mention those in the list of rejected proposals below.)
  • Range-based for statement with initializer. This introduces a new form of range-for: for (T var = init; U elem : <range-expression>); here, var is a variable that lives for the duration of the loop, and can be referenced by <range-expression> (whereas elem is the usual loop variable that takes on a new value on every iteration). This is useful for both scope hygiene (it avoids polluting the enclosing scope with var) and resolving a category of lifetime issues with range-for; a short sketch follows this list. EWG expressed concerns about parseability (parsers will now need to perform more lookahead to determine which form of loop they are parsing) and readability (the “semicolon colon” punctuation in a loop header of this form can look deceptively like the “semicolon semicolon” punctuation in a traditional for loop), but passed the proposal anyways.
  • Some changes to the Modules TS (other proposed changes were deferred to Modules v2) – I talk about these below
  • Changes to Concepts – see below
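
As a small, hedged sketch of the new range-for form approved above (load_values is a hypothetical stand-in for whatever produces the range; the syntax is the one described in the bullet):

```cpp
#include <iostream>
#include <vector>

std::vector<int> load_values() { return {1, 2, 3}; }   // hypothetical data source

int main() {
    // values is scoped to the loop itself and is what the range expression iterates over,
    // so nothing leaks into the enclosing scope and no temporary outlives its use.
    for (auto values = load_values(); int v : values)
        std::cout << v << '\n';
}
```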

Proposals for which further work is encouraged:

  • Non-throwing container operations. This paper acknowledges the reality that many C++ projects are unable to or choose not to use exceptions, and proposes that standard library container types which currently rely on exceptions to report memory allocation failure, provide an alternative API that doesn’t use exceptions. Several concrete alternatives are mentioned. EWG sent this proposal to the Library Evolution Working Group to design a concrete alternative API, with the understanding that the resulting proposal will come back to EWG for review as well.
  • Efficient sized deletion for variable-sized classes. This proposal builds on the sized deletion feature added to C++14 to enable this optimization for “variable-sized classes” (that is, classes that take control of their own allocation and allocate a variable amount of extra space before or after the object itself to store extra information). EWG found the use cases motivating, but encouraged the development of a syntax that is less prone to accidental misuse, as well as additional consultation with implementers to ensure that ABI is not broken.
  • Return type deduction and SFINAE. This paper proposes a special syntax for single-expression lambdas, which would also come with the semantic change that the return expression be subject to SFINAE (this is a desirable property that often leads authors to repeat the return expression, wrapped in decltype, in an explicit return type declaration (and then to devise macros to avoid the repetition)). EWG liked the goal but had parsing-related concerns about the syntax; the author was encouraged to continue exploring the syntax space to find something that’s both parseable and readable. Continued exploration of terser lambdas, whether as part of the same proposal or a separate proposal, was also encouraged. It’s worth noting that there was another proposal in the mailing (which wasn’t discussed since the author wasn’t present) that had significant overlap with this proposal; EWG observed that it might make sense to collaborate on a revised proposal in this space.
  • Default-constructible stateless lambdas. Lambdas are currently not default-constructible, but for stateless lambdas (that is, lambdas that do not capture any variables) there is no justification for this restriction, so this paper proposed removing it. EWG agreed, but suggested that they should also be made assignable. (An example of a situation where one currently runs into these restrictions is transform iterators: such iterators often aggregate the transformation function, and algorithms often default-construct or assign iterators.)
  • Product type access. Recall that structured bindings work with both tuple-like types, and structures with public members. The former expose get<>() functions to access the tuple elements by index; for the latter, structured bindings achieve such index-based access by “language magic”. This paper proposed exposing such language magic via a new syntax or library interface, so that things other than structured bindings (for example, code that wants to iterate over the public members of a structure) can take advantage of it. EWG agreed this was desirable, expressed a preference for a library interface, and sent the proposal onward to LEWG to design said interface. (Compilers will likely end up exposing intrinsics to allow library implementers to implement such an interface. I personally don’t see the advantage of doing things this way over just introducing standard language syntax, but I’m happy to get the functionality one way or the other.)
  • Changing the attack vector of constexpr_vector. At the previous meeting, implementers reported that supporting full-blown dynamic memory allocation in a constexpr context was not feasible to implement efficiently, and suggested a more limited facility, such as a special constexpr_vector container. This proposal argues that such a container would be too limiting, and suggests supporting a constexpr allocator (which can then be used with regular containers) instead. Discussion at this meeting suggested that (a) on the one hand, a constexpr allocator is no less general (and thus no easier to support) than new itself; but (b) on the other hand, more recent implementer experiments suggest that supporting new itself, with some limitations, might be feasible after all. Continued exploration of this topic was warmly encouraged.
  • Implicit evaluation of auto variables. This is a resurrection of an earlier proposal to allow a class to opt into having a conversion function of some sort called when an instance of it is assigned to an auto-typed variable. The canonical use case is an intermediate type in an expression template system, for which it’s generally desirable to trigger evaluation when initializing an auto-typed variable (a short sketch follows this list). EWG wasn’t fond of widening the gap between the deduction rules for auto and the deduction rules for template parameters (which is what auto is modelled on), and suggested approaching the problem from a different angle; one idea that was thrown around was the possibility of extending the notion of deduction guides (currently used for class template argument deduction) to apply to situations like this.
  • Allowing class template specializations in unrelated namespaces. The motivation here is to avoid having to reopen the namespace in which a class template was defined, to provide a specialization of that template. EWG liked the idea, but suggested that it might be prudent to still restrict such specializations to occur within associated namespaces of the specialization (e.g. the namespaces of the specialization’s template arguments) – kind of like how Rust doesn’t allow you to implement a trait unless you’re either the author of the trait, or the author of the type you’re implementing the trait for.
  • Precise semantics for contract assertions. This paper explores the design space of contract assertions, enumerating the various (sometimes contradictory) objectives we may want to achieve by using them, and proposes a set of primitive operations that facilitate implementing assertions in ways that meet useful subsets of these objectives. EWG expressed an interest in standardizing some of the proposed primitives, specifically a mechanism to deliberately introduce unspecified (nondeterministic) behaviour into a program, and a “prevent continuation handler” that an assertion check can invoke if an assertion fails and execution should not continue as a result. (A third primitive, for deliberately invoking undefined behaviour, is already handled by the independently proposed std::unreachable() function that EWG approved at this meeting.)
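
For the implicit-evaluation item above, here is a hedged sketch of the expression-template situation that motivates it; Matrix and MatrixSum are hypothetical types invented purely for illustration:

```cpp
// A minimal, hypothetical expression-template setup.
struct Matrix {};

struct MatrixSum {                                   // lazy proxy returned by operator+
    const Matrix& lhs;
    const Matrix& rhs;
    operator Matrix() const { return Matrix{}; }     // a real library would sum elementwise here
};

MatrixSum operator+(const Matrix& a, const Matrix& b) { return {a, b}; }

int main() {
    Matrix a, b;
    Matrix m = a + b;   // the conversion forces evaluation: what the user expects
    auto   e = a + b;   // deduces MatrixSum (the proxy), not Matrix
    (void)m; (void)e;
}
```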

Rejected proposals:

  • Attributes for structured bindings. This proposal would have allowed applying attributes to individual bindings, such as auto [a, b [[maybe_unused]], c] = f();. EWG found this change insufficiently motivated; some people also thought it was inappropriate to give individual bindings attributes when we can’t even give them types.
  • Making pointers-to-members callable. This would have allowed p(s) to be valid and equivalent to s.*p when p is a pointer to a member of a type S, and s is an object of that type. It had been proposed before, and was rejected for largely the same reason: some people argued that it was a half-baked unified call syntax proposal. (I personally thought this was a very sensible proposal – not at all like unified call syntax, which was controversial for changing name lookup rules, which this proposal didn’t do.)
  • Explicit structs. The proposal here was to allow marking a struct as explicit, which meant that all its fields had to be initialized, either by a default member initializer, by a constructor initializer, or explicitly by the caller (not relying on the fallback to default initialization) during aggregate initialization. EWG didn’t find this well enough motivated, observing that either your structure has an invariant, in which case it’s likely to be more involved than “not the default values”, or it doesn’t, in which case the default values should be fine. (Uninitialized values, which can arise for primitive types, are another matter, and can be addressed by different means, such as via the [[uninitialized]] attribute proposal.)
  • Changing the way feature-test macros are standardized. Feature test macros (like __cpp_constexpr, intended to be defined by an implementation if it supports constexpr) are currently standardized in the form of a standing document published by the committee, which is not quite a standard (for example, it does not undergo balloting by national bodies). As they have become rather popular, Microsoft proposed that they be standardized more formally; they claimed that it’s something they’d like to support, but can’t unless it’s a formal standard, because they’re trying to distance themselves from their previous habit of supporting non-standard extensions. (I didn’t quite follow the logic behind this, but I guess large companies sometimes operate in strange ways.) However, there was no consensus to change how feature test macros are standardized; some on the committee dislike them, in part because of their potential for fragmentation, and because they don’t completely remove the need for compiler version checks and such (due to bugs etc.)
  • Removing other language features deprecated in C++17. In addition to throw() (whose removal passed, as mentioned above), two other removals were proposed.
    • Out-of-line declarations of static constexpr data members. By making static constexpr data members implicitly inline, C++17 made it so that the in-line declaration which provides the value is also a definition, making an out-of-line declaration superfluous (a short sketch follows this list). Accordingly, the ability to write such an out-of-line declaration at all was deprecated, and was now proposed for removal in C++20.
    • Implicit generation of a copy constructor or copy assignment operator in a class with a user-defined copy assignment operator, copy constructor, or destructor. This has long been known to be a potential footgun (since generally, if you need to user-define one of these functions, you probably need to user-define all three), and C++11 already broke with this pattern by having a user-defined move operation disable implicit generation of the copy operations. The committee has long been eyeing the possibility of extending this treatment to user-defined copy operations, and the paper suggested that perhaps C++20 would be the time to do so. However, the reality is that there still is a lot of code out there that relies on this implicit generation, and much of it isn’t actually buggy (though much of it is).

    Neither removal gained consensus. In each case, undeprecating them was also proposed, but that was rejected too, suggesting that the hope that these features can be removed in a future standard remains alive.

  • Capturing *this with initializer. C++17 added the ability to have a lambda capture the entire *this object by value. However, it’s still not possible to capture it by move (which may be reasonable if e.g. constructing the lambda is the last thing you do with the current object). To rectify this, this paper proposed allowing the capture of *this with the init-capture syntax. Unfortunately, this effectively allows rebinding this to refer to a completely unrelated object inside the lambda, which EWG believed would be far too confusing, and there didn’t appear to be a way to restrict the feature to only work for the intended use case of moving the current object.
  • bit_sizeof and bit_offsetof. These are similar to sizeof and offsetof, but count the number of bits, and work on bitfield members. EWG preferred to hold off on these until they are implementable with a library interface on top of reflection.
  • Parametric functions. This oddly-named proposal is really a fresh approach to named arguments. In contrast with the named arguments proposal that I co-authored a few years back, which proposed to allow using named arguments with existing functions and existing parameter names (and garnered significant opposition over concerns that it would make parameter names part of a function’s interface when the function hadn’t been written with that in mind), this paper proposed introducing a new kind of function, which can have named arguments, declared with a new syntax, and for which the parameter names are effectively part of the function’s type. While this approach does address the concerns with my proposal, EWG felt the new syntax and new language machinery it would require was disproportionate to the value of the feature. In spite of the idea’s repeated rejection, no one was under any illusion that this would be the last named arguments proposal to come in front of EWG.
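
For reference, the now-superfluous pattern from the static constexpr bullet above looks like this (Config and max_retries are invented names):

```cpp
struct Config {
    static constexpr int max_retries = 3;   // implicitly inline in C++17, so this is already a definition
};

// Pre-C++17, odr-using Config::max_retries (e.g. taking its address) required this
// out-of-line definition; C++17 made it redundant and deprecated it, and the proposal
// above would have removed the ability to write it at all.
constexpr int Config::max_retries;
```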

There were a handful of proposals that were not discussed due to their authors not being present. They included the other terse lambdas proposal and its offshoot idea of making forwarding less verbose, and a proposal for an [[uninitialized]] attribute.

Concepts

A major focus of this meeting was to achieve consensus to get Concepts into C++20. To this end, EWG spent half a day plus an evening session discussing several papers on the topic.

Two of the proposals – a unified concept definition syntax and semantic constraint matching – were write-ups of design directions that had already been discussed and approved in Kona; their discussion at this meeting was more of a rubber-stamp. (The second paper contained a provision to require that re-declarations of a constrained template use the same syntax (e.g. you can’t have one using requires-clauses and the other using a shorthand form); this provision had some objections, just as it did in Kona, but was passed anyways.)

EWG next looked at a small proposal to address certain syntax ambiguities; the affected scenarios involve constrained function templates with a requires-clause, where it can be ambiguous where the requires-clause after the template parameter list ends, and where the function declaration itself begins. The proposed solution was to restrict the grammar for the expression allowed in a top-level requires-clause so as to remove the ambiguity; expressions that don’t fit in the restricted grammar can still be used if they are parenthesized (as in requires (expr)). This allows common forms of constraints (like trait<T>::value or trait_v<T>) to be used without parentheses, while allowing any expression with parentheses. This was also approved.
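
A hedged illustration of that distinction, using the grammar as merged into the C++20 working draft (std::is_integral_v stands in for the trait_v<T> style of constraint):

```cpp
#include <type_traits>

// An id-expression such as a trait variable template may appear directly...
template <typename T>
    requires std::is_integral_v<T>
void f(T);

// ...but a more general expression has to be parenthesized to resolve the ambiguity.
template <typename T>
    requires (sizeof(T) > 4)
void g(T);
```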

That brings us to the controversial part of the discussion: abbreviated function templates (henceforth, “AFTs”), also called “terse templates”. To recap, AFTs are function templates declared without a template parameter list, where the parameter types use concept names (or auto), which the compiler turns into invented template parameters. A canonical example is void sort(Sortable& s);, which is a shorthand for template <Sortable S> void sort(S& s); (which is itself a shorthand for template <typename S> requires Sortable<S> void sort(S& s);).
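
To make the terminology concrete, here is a hedged sketch; the Sortable definition is illustrative only, and the exact concept-definition spelling (concept versus the Concepts TS’s concept bool) depends on which revision you target:

```cpp
#include <algorithm>

// Illustrative concept definition (assumed spelling; the Concepts TS wrote "concept bool").
template <typename T>
concept Sortable = requires(T& c) { std::sort(c.begin(), c.end()); };

// The two spellings that remain after the removal of abbreviated function templates:
template <typename S> requires Sortable<S>
void sort_explicit(S& s);

template <Sortable S>
void sort_constrained(S& s);

// The removed abbreviated (AFT) form would have looked like a non-template:
//   void sort(Sortable& s);
```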

AFTs have been controversial since their introduction, due to their ability to make template code look like non-template code. Many have argued that this is a bad idea, because template code is fundamentally different from non-template code (e.g. consider different name lookup rules, the need for syntactic disambiguators like typename, and the ability to define a function out of line). Others have argued that making generic programming (programming with templates) look more like regular programming is a good thing.

(A related feature that shared some of the controversy around AFTs was concept introductions, which were yet another shorthand, making Mergeable{In1, In2, Out} void merge(In1, In1, In2, Out); short for template <typename In1, typename In2, typename Out> requires Mergeable<In1, In2, Out> void merge(In1, In1, In2, Out);. Concept introductions don’t make a template look like a non-template the way AFTs do, but were still controversial as many felt they were an odd syntax and offered yet another way of defining constrained function templates with relatively little gain in brevity.)

The controversy around AFTs and concept introductions was one of the reasons the proposed merger of the Concepts TS into C++17 failed to gain consensus. Eager not to repeat this turn of events for C++20, AFTs and concept introductions were proposed for removal from Concepts, at least for the time being, with the hope that this would allow the merger of Concepts into C++20 to gain consensus. After a long and at times heated discussion, EWG approved this removal, and approved the merger of Concepts, as modified by this removal (and by the other proposals mentioned above), into C++20. As mentioned above, this merger was subsequently passed by the full committee at the end of the week, resulting in Concepts now being in the C++20 working draft!

It’s important to note that the removal of AFTs was not a rejection of having a terse syntax for defining constrained function templates in general. There is general agreement that such a terse syntax is desirable; people just want such a syntax to come with some kind of syntactic marker that makes it clear that a function template (as opposed to a non-template function) is being declared. I fully expect that proposals for an alternative terse syntax that comes with such a syntactic marker will be forthcoming (in fact, I’ve already been asked for feedback on one such draft proposal), and may even be approved in the C++20 timeframe; after all, we’re still relatively early in the C++20 cycle.

There was one snag about the removal of AFTs that happened at this meeting. In the Concepts wording, AFTs are specified using a piece of underlying language machinery called constrained type specifiers. Besides AFTs, this machinery powers some other features of Concepts, such as the ability to write ConceptName var = expr;, or even vector<auto> var = expr;. While these other features weren’t nearly as controversial as AFTs were, from a specification point of view, removing AFTs while keeping these in would have required a significant refactor of the wording that would have been difficult to accomplish by the end of the week. Since the committee wanted to “strike the iron while it’s hot” (meaning, get Concepts into C++20 while there is consensus for doing so), it was decided that for the time being, constrained type specifiers would be removed altogether. As a result, in the current C++20 working draft, things like vector<auto> var = expr; are ill-formed. However, it’s widely expected that this feature will make it back into C++20 Concepts at future meetings.

Lastly, I’ll note that there were two proposals (one I co-authored, and a second one that didn’t make the pre-meeting mailing) concerning the semantics of constrained type specifiers. The removal of constrained type specifiers made these proposals moot, at least for the time being, so they were not discussed at this meeting. However, as people propose re-adding some of the uses of constrained type specifiers, and/or terse templates in some form, these papers will become relevant again, and I expect they will be discussed at that time.

Modules

Another major goal of the meeting was to send out the Modules TS for its PDTS ballot. I gave an overview of the current state of Modules above. Here, I’ll mention the Modules-related proposals that came before EWG this week:

  • Distinguishing the declaration of a module interface unit from the declaration of a module implementation unit. The current syntax is module M; for both. In Kona, a desire was expressed for interface units to have a separate syntax, and accordingly, one was proposed: export module M;. (The re-use of the export keyword here is due to the committee’s reluctance to introduce new keywords, even context-sensitive ones. module interface M; would probably have worked with interface being a keyword in this context only.) This was approved for the Modules TS.
  • Module partitions (first part of the paper only). These are module units that form part of a module interface, rather than being a complete module interface; the proposed syntax in this paper is module M [[partition]];. This proposal failed to gain consensus, not over syntax concerns, but over semantic concerns: unlike the previous module partitions proposal (which was not presented at this meeting, ostensibly for lack of time), this proposal did not provide for a way for partitions to declare dependencies on each other; rather, each partition was allowed to depend on entities declared in all other partitions, but only on their forward-declarations, which many felt was too limiting. (The corresponding implementation model was to do a quick “heuristic parse” of all partitions to gather such forward-declarations, and then do full processing of the partitions in parallel; this itself resulted in some raised eyebrows, as past experience doing “heuristic parsing” of C++ hasn’t been very promising.) Due to the controversy surrounding this topic, and not wanting to hold up the Modules TS, EWG decided to defer module partitions to Modules v2.
  • Exporting using-declarations. This wasn’t so much a proposal, as a request for clarification of the semantics. The affected scenario was discussed, and the requested clarification given; no changes to the Modules TS were deemed necessary.
  • Name clashes between private (non-exported) entities declared in different modules. Such a name clash is ill-formed (an ODR violation) according to the current spec; several people found that surprising, since one of the supposed advantages of Modules is to shield non-exported entities like private helpers from the outside world. This matter was discussed briefly, but a resolution was postponed to after the PDTS ballot (note: that’s not the same as being postponed to Modules v2; changes can be made to the Modules TS between the PDTS ballot and final publication).
  • A paper describing some requirements that a Modules proposal would need to have to be useful in evolving a particular large codebase (Bloomberg’s). Discussion revealed that the current spec meets some but not all of these requirements; the gaps mainly concern the ability to take a legacy (non-modular) code component, and non-intrusively (“additively”) provide a modular “view” of that component. No changes were proposed at this time, but some changes to fill these gaps are likely to appear as comments on the PDTS ballot.
  • Identifying module source code. Currently, the module M; or export module M; declaration that introduces a module unit is not required to be the first declaration in the file. Preceding declarations are interpreted as being part of the global module (and this is often used to e.g. include legacy headers). The author of this proposal would nonetheless like something that’s required to be the first declaration in the file, that announces “this file is a module unit”, and proposed module ; as being such a marker. EWG was favourable to the idea, but postponed discussion of a concrete syntax until after the PDTS.

With these discussions having taken place, the committee was successful in getting Modules sent out for its PDTS ballot. This is very exciting – it’s a major milestone for Modules! At the same time, I think it’s clear from the nature of some of the proposals being submitted on the topic (including the feedback from implementation and deployment experience at Google, some of which is yet to be fully discussed by EWG) that this is a feature where there’s still a fair amount of room for implementation convergence and user feedback to gain confidence that the feature as specified will be useful and achieve its intended objectives for a broad spectrum of C++ users. The PDTS ballot formally begins the process of collecting that feedback, which is great! I am very curious about the kinds of comments it will garner.

If you’re interested in Modules, I encourage you to give the prototype implementations in Clang and MSVC a try, play around with them, and share your thoughts and experiences. (Formal PDTS comments can be submitted via your national standards body, but you can also provide informal feedback on the committee’s public mailing lists.)
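
To give a feel for what those prototypes accept, here is a hedged, minimal sketch; file extensions and build flags (something like -fmodules-ts for Clang or /experimental:module for MSVC, but check your compiler’s documentation) vary between implementations and versions:

```cpp
// hello_iface.cpp: a module interface unit
export module hello;

export int answer() { return 42; }

// main.cpp: a separate translation unit importing the module
import hello;

int main() { return answer() == 42 ? 0 : 1; }
```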

Other Working Groups

The Library Working Group had a busy week, completing its wording review of the Ranges TS, Networking TS, and library components of the Coroutines TS, and allowing all three of these to be published at the end of the week. They are also in the process of reviewing papers targeting C++20 (including span, which provides an often-requested “array view” facility), Parallelism TS v2, and Library Fundamentals TS v3.

The Library Evolution Working Group was, as usual, working through its large backlog of proposed new library features. As much as I’d love to follow this group in as much detail as I follow EWG, I can’t be in two places at once, so I can’t give a complete listing of the proposals discussed and their outcomes, but I’ll mention a few highlights:

SG 7 (Reflection and Metaprogramming)

SG 7 met for an evening session and discussed three topics.

The first was an extension to the existing static reflection proposal (which is headed towards publication as the initial Reflection TS) to allow reflection of functions. Most of the discussion concerned a particular detail: whether you should be allowed to reflect over the names of a function’s parameters. It was decided that you should, but that in the case of a function with multiple declarations, the implementation is free to choose any one of them as the source of the reflected parameter names.

The second topic was what we want metaprogramming to look like in the longer term. There was a paper exploring the design space that identified three major paradigms: type-based metaprogramming (examples: Boost.MPL, the current static reflection proposal), heterogeneous value-based (example: Boost.Hana), and homogeneous value-based (this would be based on constexpr metaprogramming, and would require some language extensions). Another paper then argued that the third one, homogeneous value-based metaprogramming, is the best choice, both from a compile-time performance perspective (the other two involve a lot of template instantiations which are compile-time expensive), and because it makes metaprogramming look more like regular programming, making it more accessible. SG 7 agreed that this is the long-term direction we should aim for. Note that this implies that, while the Reflection TS will likely be published in its current form (with its type-based interface), prior to merger into the C++ IS it would likely be revised to have a homogeneous value-based interface.
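
To illustrate the contrast between the first and third paradigms on a toy computation (the heterogeneous value-based style of Boost.Hana is omitted for brevity):

```cpp
// Type-based template metaprogramming: the computation is encoded in instantiations.
template <unsigned N>
struct Factorial { static constexpr unsigned value = N * Factorial<N - 1>::value; };
template <>
struct Factorial<0> { static constexpr unsigned value = 1; };

// Homogeneous value-based metaprogramming: an ordinary-looking constexpr function.
constexpr unsigned factorial(unsigned n) { return n == 0 ? 1 : n * factorial(n - 1); }

static_assert(Factorial<5>::value == 120, "type-based");
static_assert(factorial(5) == 120, "value-based");
```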

The third topic was a proposal for a more advanced reflection/metaprogramming feature, metaclasses. Metaclasses combine reflection facilities with proposed code injection facilities to allow class definitions to undergo arbitrary user-defined compile-time transformations. A metaclass can be thought of as a “kind” or category of class; a class definition can be annotated (exact syntax TBD) to declare the class as belonging to that metaclass, and such a class definition will undergo the transformations specified in the metaclass definition. Examples of metaclasses might be “interfaces”, where the transformation includes making every method pure virtual, and “value types”, where the transformation includes generating memberwise comparison operators; widely used metaclasses could eventually become part of the standard library. Obviously, this is a very powerful feature, and the proposal is at a very early stage; many aspects, including the proposed code injection primitives (which are likely to be pursued as a separate proposal), need further development. Early feedback was generally positive, with some concerns raised about the feature allowing codebases to grow their own “dialects” of C++.
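
As a rough illustration of the “interfaces” example, and emphatically not the proposal’s syntax (which is still being designed), this is the kind of class such a metaclass would be expected to generate from a plain list of member functions:

```cpp
// Hand-written equivalent of what an "interface" metaclass might produce for area() and draw().
struct Shape {
    virtual ~Shape() noexcept = default;   // interfaces get a virtual destructor
    virtual double area() const = 0;       // every member function made pure virtual
    virtual void draw() const = 0;
};
```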

The Velocity of C++ Evolution

C++ standardization has accelerated since the release of C++11, with the adoption of a three-year standardization cycle, the use of Technical Specifications to gather early feedback on major new features, and an increase in overall participation and the breadth of domain areas and user communities represented in the committee.

All the same, sometimes it feels like the C++ language is still slow to evolve, and a big part of that is the significant constraint of needing to remain backwards-compatible, as much as possible, with older versions of the language. (Minor breakages have occurred, of course, like C++11 user-defined literals changing the meaning of "literal"MACRO. But by and large, the committee has held backwards-compatibility as one of its cornerstone principles.)
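
That particular breakage, for the curious (SUFFIX is an invented macro name):

```cpp
#include <iostream>

#define SUFFIX " world"

int main() {
    // C++03: "hello" and SUFFIX are separate tokens, the macro expands, and the two
    // string literals concatenate. C++11: "hello"SUFFIX lexes as a single
    // user-defined-string-literal token, so SUFFIX is never expanded and the line is
    // ill-formed (there is no literal operator named SUFFIX):
    //   std::cout << "hello"SUFFIX << '\n';
    std::cout << "hello" SUFFIX << '\n';   // inserting a space restores the old meaning
}
```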

A paper, discussed at an evening session this week, explores the question of whether, in today’s age of modern tools, the committee still needs to observe this constraint as strictly as it has in the past. The paper observes that it’s already the case that upgrading to a newer compiler version typically entails making some changes / fixes to a codebase. Moreover, in cases where a language change does break or change the meaning of code, compilers have gotten pretty good at warning users about it so they can fix their code accordingly (e.g. consider clang’s -Wc++11-compat warnings). The paper argues that, perhaps, the tooling landscape has matured to the point where we should feel free to make larger breaking changes, as long as they’re of the sort that compilers can detect and warn about statically, and rely on (or even require) compilers to warn users about affected code, allowing them to make safe upgrades (tooling could potentially help with the upgrades, too, in the form of automated refactorings). This would involve more work for compiler implementers, and more work for maintainers of code when upgrading compilers, but the reward of allowing the language to shed some of its cumbersome legacy features may be worth it.

The committee found this idea intriguing. No change in policy was made at this time, but further exploration of the topic was very much encouraged.

If the committee does end up going down this path, one particularly interesting implication would be about the future of the standard library. The Concepts-enabled algorithms in the Ranges TS are not fully backwards-compatible with the algorithms in the current standard library. As a result, when the topic of how to merge the Ranges TS into the C++ standard came up, the best idea people had was to start an “STLv2”, a new version of the standard library that makes a clean break from the current version, while being standardized alongside it. However, in a world where we are not bound to strict backwards-compatibility, that may not be necessary – we may just be able to merge the Ranges TS into the current standard library, and live with the resulting (relatively small) amount of breakage to existing code.

Conclusion

With C++17 effectively done, the committee had a productive meeting working on C++20 and advancing Technical Specifications like Modules and Coroutines.

The merger of Concepts into C++20 was a definite highlight of this meeting – this feature has been long in the making, having almost made C++11, and its final arrival is worth celebrating. Sending out Modules for its PDTS ballot was a close second, as it allows us to start collecting formal feedback on this very important C++ feature. And there are many other goodies in the pipeline: Ranges, Networking, Coroutines, contracts, reflection, graphics, and many more.

The next meeting of the Committee will be in Albuquerque, New Mexico, the week of November 6th, 2017. Stay tuned for my report!

Other Trip Reports

Others have written reports about this meeting as well. Some that I’ve come across include Herb Sutter’s and Guy Davidson’s. Michael Wong also has a report written just before the meeting, that covers concurrency-related topics in more depth than my reports. I encourage you to check them out!


Categorieën: Mozilla-nl planet

Andrew Halberstadt: Try Fuzzy: A Try Syntax Alternative

wo, 02/08/2017 - 15:50

It's no secret that I'm not a fan of try syntax; it's a topic I've blogged about on several occasions before. Today, I'm pleased to announce that there's a real alternative now landed on mozilla-central. It works on all platforms with mercurial and git. For those who just like to dive in:

```bash
$ mach mercurial-setup --update  # only if using hg
$ mach try fuzzy
```

This will prompt you to install fzf. After bootstrapping is finished, you'll enter an interface populated with a list of all possible taskcluster tasks. Start typing and the list will be filtered down using a fuzzy matching algorithm. I won't go into details on how to use this tool in this blog post, for that see:

```bash
$ mach try fuzzy --help
# or
$ man fzf
```

For those who prefer to look before they leap, I've recorded a demo:

[Video demo: /static/vid/blog/2017/mach-fuzzy.mp4]

Like the existing mach try command, this should work with mercurial via the push-to-try extension or git via git-cinnabar. If you encounter any problems or bad UX, please file a bug under Testing :: General.

Try Task Config

The following section is all about the implementation details, so if you're curious or want to write your own tools for selecting tasks on try, read on!

This new try selector is not based on try syntax. Instead it's using a brand new scheduling mechanism called try task config. Instead of encoding scheduling information in the commit message, mach try fuzzy encodes it in a JSON file at the root of the tree called try_task_config.json. Very simply (for now), the decision task knows to look for that file on try. If found, it will read the JSON object and schedule every task label it finds. There are also hooks to prevent this file from accidentally being landed on non-try branches.

What this means is that anything that can generate a list (or dict) of task labels can be a try selector. This new JSON format is much easier for tools to write, and for taskgraph to read.

Creating a Try Selector

There are currently two ways to schedule tasks on try (syntax and fuzzy). But I envision 4-5 different methods in the future. For example, we might implement a TestResolver based try selector which given a path can determine all affected jobs. Or there could be one that uses globbing/regex to filter down the task list which would be useful for saving "presets". Or there could be one that uses a curses UI like the hg trychooser extension.

To manage all this, each try selector is implemented as an @SubCommand of mach try. The regular syntax selector is implemented under mach try syntax now (though mach try without any subcommand will dispatch to syntax to maintain backwards compatibility). All this lives in a newly created tryselect module.

If you want to create a new try selector, you'll need two things:

  1. A list of task labels as input.
  2. The ability to write those labels to try_task_config.json and push it to try.

Luckily tryselect provides both of those things. The first can be obtained using the tasks.py module. It basically does the equivalent of running mach taskgraph target, but will also automatically cache the resulting task list so future invocations run much quicker.

The second can be achieved using the vcs.py module. This uses the same approach that the old syntax selector has been using all along. It will commit try_task_config.json temporarily and then remove all traces of the commit (and of try_task_config.json).

So to recap, creating a new try selector involves:

  1. Add an @SubCommand to the mach_commands.py, which dispatches to a file under the selectors directory.
  2. Generate a list of tasks using tasks.py.
  3. Somehow filter down that list (this part is up to you)
  4. Push the filtered list using vcs.py

You can inspect the fuzzy implementation to see how all this ties together.

Future Considerations

Right now, the try_task_config.json method only allows specifying a list of task labels. This is good enough to say what is running, but not how it should run. In the future, we could expand this to be a dict where task labels make up the keys. The values would be extra task metadata that the taskgraph module would know how to apply to the relevant tasks.

With this scheme, we could do all sorts of crazy things like set prefs/env/arguments directly from a try selector specialized to deal with those things. There are no current plans to implement any of this, but it would definitely be a cool ability to have!

Categorieën: Mozilla-nl planet
