
Mozilla Open Policy & Advocacy Blog: Data localization: bad for users, business, and security

Mozilla planet - fr, 22/06/2018 - 22:59

Mozilla is deeply concerned by news reports that India’s first data protection law may include data localization requirements. Recent leaks suggest that the Justice Srikrishna Committee, the group charged by the Government of India with developing the country’s first data protection law, is considering requiring companies subject to the law to store critical personal data within India’s borders. A data localization mandate would undermine user security, harm the growth and competitiveness of Indian industry, and potentially burden relations between India and other countries. We urge the Srikrishna Committee and the Government of India to exclude data localization requirements from the forthcoming legislative proposal.

Security Risks
Locating data within a given jurisdiction does not in itself convey any security benefits; rather, the law should require data controllers to strictly protect the data that they’re entrusted with. One only has to look to the recurring breaches of the Aadhaar demographic data to understand that storing data locally does not, by itself, keep data protected (see here, here and here). Until India has a data protection law and demonstrates robust enforcement of that law, it’s difficult to see how storing user data in India would be the most secure option.

In Puttaswamy, the Supreme Court of India unequivocally stated that privacy is a fundamental right, and put forth a proportionality standard that has profound implications for government surveillance in India. We respectfully recommend that if Indian authorities are concerned about law enforcement access to data, then a legal framework for surveillance with appropriate protections for users is a necessary first step. This would provide the lawful basis for the government to access data necessary for legal proceedings. A data localization mandate is an insufficient instrument for ensuring data access for the legitimate purposes of law enforcement.

Economic and Political Harms
A data localization mandate may also harm the Indian economy. India is home to many inspiring companies that are seeking to move beyond India’s generous borders. Requiring these companies to store data locally may thwart this expansion, and may introduce a tax on Indian industry by requiring them to maintain the legal and technical regimes of multiple jurisdictions.

Most Indian companies handle critical personal data, so even data localization for just this data subset could harm Indian industry. Such a mandate would force companies to use potentially cost-inefficient data storage and prevent them from using the most effective and efficient routing possible. Moreover, the Indian outsourcing industry is predicated on the idea of these firms being able to store and process data in India, and then transfer it to companies abroad. A data localization mandate could pose an existential risk to these companies.

At the same time, if India imposes data localization on foreign companies doing business in India, other countries may impose reciprocal data localization policies that force Indian companies to store user data within that country’s jurisdictional borders, leading to legal conflict and potential breakdown of trade.

Data Transfer, Not Data Localization
There are better alternatives to ensuring user data protection. Above all, obtaining an adequacy determination from the EU would both demonstrate commitment to a global high standard of data protection, and significantly benefit the Indian economy. Adequacy would allow Indian companies to more easily expand overseas, because they would already be compliant with the high standards of the GDPR. It would also open doors to foreign investment and expansion in the Indian market, as companies who are already GDPR-compliant could enter the Indian market with little to no additional compliance burden. Perhaps most significantly, this approach would make the joint EU-India market the largest in the world, thus creating opportunities for India to step into even greater economic global leadership.

If India does choose to enact data localization policies, we strongly urge it to also adopt provisions for transfer via Binding Corporate Rules (BCRs). This model has been successfully adopted by the EU, which allows for data transfer upon review and approval of a company’s data processing policies by the relevant Data Protection Authority (DPA). Through this process, user rights are protected, data is secured, and companies can still do business. However, adequacy offers substantial benefits over a BCR system. By giving all Indian companies the benefits of data transfer, rather than requiring each company to individually apply for approval from a DPA, Indian industry will likely be able to expand globally with fewer policy obstacles.

Necessity of a Strong Regulator
Whether considering user security or economic growth, data localization is a weak tool when compared to a strong data protection framework and regulator.

By creating strong incentives for companies to comply with data use, storage, and transfer regulations, a DPA that has enforcement power will get better data protection results than data localization, and won’t harm Indian users, industry, and innovation along the way. We remain hopeful that the Srikrishna Committee will craft a bill that centers on the user — this means strong privacy protections, strong obligations on public and private-sector data controllers, and a DPA that can enforce rules on behalf of all Indians.

The post Data localization: bad for users, business, and security appeared first on Open Policy & Advocacy.


Support.Mozilla.Org: State of Mozilla Support: 2018 Mid-year Update – Part 1

Mozilla planet - fr, 22/06/2018 - 20:20
Hello, present and future Mozillians!

As you may have heard, Mozilla held one of its biannual All Hands meetings, this time in San Francisco. The support.mozilla.org Admin team was there as well, along with several members of the support community.

The All Hands meetings are meant to be gatherings summarizing the work done and the challenges ahead. San Francisco was no different from that model. The four days of the All Hands were full of things to experience and participate in. Aside from all the plenary and “big stage” sessions – most of which you should be able to find at Air Mozilla soon – we also took part in many smaller (formal and informal) meetings, workshops, and chats.

By the way, if you watch Denelle’s presentation, you may hear something about Mozillians being awesome through helping users ;-).

This is the first in a series of posts summarizing what we talked about regarding support.mozilla.org, together with many (maaaaaany) pages of source content we have been working on and receiving from our research partners over the last few months.

We will begin with the summary of quite a heap of metrics, as delivered to us by the analytics and research consultancy from Copenhagen – Analyse & Tal (Analysis & Numbers). You can find all the (105!) pages here, but you can also read the summary below, which captures the most important information.

The A&T team used descriptive statistics (to tell a story using numbers) and network analysis (emphasizing interconnectedness and correlations), taking information from the 11 years of data available in Kitsune’s databases and 1 year of Google Analytics data.

Almost all perspectives of the analysis put a spotlight on the amount of work contributed and the dedication of numerous Mozillians over many years. It’s hard to overstate the importance of that for Mozilla’s mission and continuous presence and support for the millions of users of open source software who want an open web. We are all equally proud and humbled that we can share this voyage with you.

As you can imagine, analyzing a project as complex and stretched in time as Mozilla’s Support site is quite challenging and we could not have done it without cooperation with Open Innovation and our external partners.

Key Takeaways
  • In the 2010-2017 period, only 124 contributors were responsible for 63% of all contributions. Given that there are hundreds of thousands of registered accounts in the system, there is a lot of work to do for us to make contributions easier and more fun.
  • There are quite a few returning contributors who contribute steadily over several years.
  • There are several hundred contributors who are active within a short timeframe, and even more very occasional helpers. In both cases, we need to make sure long-term contributing is appealing to them.
  • While our community has not shown to be worryingly fragile, we have to make sure we understand better how and why contributions happen and what can be done to ensure a steady future for Mozilla’s community-powered Support.
  • The Q&A support forums on the site are the most popular place for contributions, with most of the core and most engaged contributors active there.
  • On the other hand, the Knowledge Base, even if it has fewer contributors, sees more long-term commitment from returning contributors.
  • Contributors through Twitter are a separate group, usually not engaged in other support channels and focusing on this external platform.
  • Firefox is the most active product across all channels, but Thunderbird sees a lot of action as well. Many regular contributors are active supporting both products.
  • Among other Firefox related products, Firefox for Android is the most established one.
  • The top 15 locales amount to 76 percent of the overall revisions in the Knowledge Base, with the vast majority of contributions coming from core contributors.
  • Based on network analysis, Russian, Spanish, Czech, and Japanese localization efforts are the most vulnerable to changes in sustainability.
What’s Next?

Most of the findings in the report support many anecdotal observations we have had, giving us a very powerful set of perspectives grounded in over 7 years’ worth of data. Based on the analysis, we are able to create future plans for our community that are more realistic and based on facts.

The A&T team provided us with a list of their recommendations:

  • Understanding the motivations for contributing and how highly dedicated contributors were motivated to start contributing should be a high priority for further strategic decisions.
  • Our metrics should be strategically expanded and used through interactive dashboards and real time measurements. The ongoing evolution of the support community could be better understood and observed thanks to dynamic definitions of more detailed contributor segments and localization, as well as community sustainability scores.
  • A better understanding of visitors and how they use the support pages (more detailed behaviour and opinions) would be helpful for understanding where to guide contributors to ensure both a better user experience and an enhanced level of satisfaction among contributors.

Taking our own interpretation of the data analysis and the A&T recommendations into account, over the next few weeks we will be outlining more plans for the second half of the year, focusing on areas like:

  • Contributor onboarding and motivation insights
  • A review of metrics and tools used to obtain them
  • Recruitment and learning experiments
  • Backup and contingency plans for emergency gaps in community coverage
  • Tailoring support options for new products

As always, thank you for your patience and ongoing support of Mozilla’s mission. Stay tuned for more post-All Hands mid-year summaries and updates coming your way soon – and discuss them in the Contributors or Discourse forum threads.


Mozilla VR Blog: This Week in Mixed Reality: Issue 10

Mozilla planet - fr, 22/06/2018 - 17:39
 Issue 10

Last week, the team was in San Francisco for an all-Mozilla company meeting.

This week the team is focusing on adding new features, making improvements and fixing bugs.

Browsers

We are all hands on deck building more components and adding new UI across Firefox Reality:

  • Improve keyboard visibility detection
  • Added special characters to the keyboard
  • Added some features and researched some issues in the VRB renderer, required to properly implement focus mode

Here is a preview we showed off of the skybox support and some of the new UX/UI:

Social

We are continuing to provide a better experience across Hubs by Mozilla:

  • Added better flow for iOS webviews
  • Added support for VM development and fast entry flow for developers
  • Began work on image proxying for sharing 2d images
  • Continuing development on 2d/3d object spawning, space editor, and pen tool

Join our public WebVR Slack #social channel to participate in the discussion!

Content ecosystem

Found a critical bug? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!

Stay tuned for new features and improvements across our three areas!


Mozilla Open Policy & Advocacy Blog: Parliament adopts dangerous copyright proposal – but the battle continues

Mozilla planet - fr, 22/06/2018 - 12:18

On 20 June the European Parliament’s legal affairs committee (JURI) approved its report on the copyright directive, sending the controversial and dangerous copyright reform into its final stages of lawmaking.

 

Here is a statement from Raegan MacDonald, Mozilla’s Head of EU Public Policy:

“This is a sad day for the Internet in Europe. Lawmakers in the European Parliament have just voted for a new law that would effectively impose a universal monitoring obligation on users posting content online. As bad as that is, the Parliament’s vote would also introduce a ‘link tax’ that will undermine access to knowledge and the sharing of information in Europe.

It is especially disappointing that just a few weeks after the entry into force of the GDPR – a law that made Europe a global regulatory standard bearer – Parliamentarians have approved a law that will fundamentally damage the Internet in Europe, with global ramifications. But it’s not over yet – the final text still needs to be signed off by the Parliament plenary on 4 July. We call on Parliamentarians, and all those who care for an internet that fosters creativity and competition in Europe, to overturn these regressive provisions in July.”

 

Article 11 – where press publishers can demand a license fee for snippets of text online – passed by a slim majority of 13 to 12. The provision mandating upload filters for copyrighted content, Article 13, was adopted 15 to 10.

Mozilla will continue to fight for copyright that suits the 21st century and fosters creativity and competition online. We encourage anyone who shares these concerns to reach out to members of the European Parliament – you can call them directly via changecopyright.org, or tweet and email them at saveyourinternet.eu.

The post Parliament adopts dangerous copyright proposal – but the battle continues appeared first on Open Policy & Advocacy.


Mozilla Reps Community: Rep of the Month – May 2018

Mozilla planet - fr, 22/06/2018 - 10:30

Please join us in congratulating Prathamesh Chavan, our Rep of the Month for May 2018!

Prathamesh is from Pune, India and works as a Technical Support Engineer at Red Hat. From his very early days in the Mozilla community, Prathamesh used his excellent people skills to spread the community to different colleges and to evangelise many of the upcoming projects, products and Mozilla initiatives. Prathamesh is also a very resourceful person. Due to this, he did a great job at organizing some great events in Pune and creating many new Mozilla Clubs across the city.


 

As a Mozilla Reps Council member, Prathamesh has done some great work and has shown great leadership skills. He is always proactive in sharing important updates with the bigger community as well as raising his hand at every new initiative.

Thanks Prathamesh, keep rocking the Open Web!

Please congratulate him by heading over to the Discourse topic.


The Firefox Frontier: Open source isn’t just for software: Opensourcery recipe

Mozilla planet - fr, 22/06/2018 - 01:43

Firefox is one of the world’s most successful open source software projects. This means we make the code that runs Firefox available for anyone to modify and use so long … Read more

The post Open source isn’t just for software: Opensourcery recipe appeared first on The Firefox Frontier.


K Lars Lohn: Things Gateway - the RESTful API and the Tide Light

Mozilla planet - to, 21/06/2018 - 23:58
In each of my previous postings about the Things Gateway from Mozilla, I've shown how to either attach an existing thing or create a new virtual thing.  Today, I'm going to talk about controlling things from outside the Things Gateway.

One of the most important aspects of Project Things is the concept of giving each IoT device a URL on the local area network. This includes the devices that natively cannot speak HTTP, like all those Z-Wave & Zigbee lights and plugs. The Things Gateway gives these devices a voice on your IP network. That's a powerful idea. It makes it possible to write software that can control individual devices in a home using Web standards and Web tools. Is the Things Gateway's rule system not quite sophisticated enough to accomplish what you want? You can use any language capable of communicating with a RESTful API to control devices in your home.

My project today is to control the color of a Philips HUE bulb to tell me at a glance the level and trend for the tide at the Pacific Coast west of my home.  When the tide is low, I want the light to be green.  When the tide is high, the light will be red.  During the transition from low to high, I want the light to slowly transition from green to yellow to orange to red.   For the transition from high tide to low tide, the light should go from red to magenta to blue to green.

So how do you tell a Philips HUE bulb to change its color?  It's done with an HTTP PUT command to the Things Gateway.  It's really pretty simple in any language.  Here's an asynchronous implementation in Python:

import aiohttp
import async_timeout

# A small wrapper (the function name and timeout default are illustrative)
# around the PUT request that sets a thing's color property on the gateway.
async def change_color(thing_id, things_gateway_auth_key, a_color,
                       seconds_for_timeout=10):
    async with aiohttp.ClientSession() as session:
        async with async_timeout.timeout(seconds_for_timeout):
            async with session.put(
                "http://gateway.local/things/{}/properties/color".format(thing_id),
                headers={
                    'Accept': 'application/json',
                    'Authorization': 'Bearer {}'.format(things_gateway_auth_key),
                    'Content-Type': 'application/json'
                },
                data='{{"color": "{}"}}'.format(a_color)
            ) as response:
                return await response.text()

Most of this code can be treated as boilerplate. There are only three data items that come from outside: thing_id, things_gateway_auth_key, a_color. The value of a_color is obvious: it's the color that you want to set in the Philips HUE bulb in the form of a hex string: '#FF0000' for red, '#FFFF00' for yellow, ... The other two, thing_id and things_gateway_auth_key, are not obvious and you have to do some mining in the Things Gateway to determine the appropriate values.
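
Putting it together, a quick way to exercise that coroutine from a Python 3.6 script might look like the sketch below. The thing id, the token value, and the change_color wrapper name from the snippet above are placeholders and assumptions for illustration, not values taken from your Things Gateway:

import asyncio

# Hypothetical values -- substitute your own thing id and authorization token.
loop = asyncio.get_event_loop()
result = loop.run_until_complete(
    change_color("zb-0017880103415d70", "YOUR_AUTH_TOKEN", "#00FF00")
)
print(result)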

The Things Gateway will generate an authorization key for you: go to Settings -> Authorizations -> Create New Local Authorization -> Allow.

You can even create authorizations that allow access only to specific things within your gateway.   Once you've pressed "Allow", the next screen gives you the Authorization Token as well as examples of its use in various languages:


Copy your Authorization Token to someplace that you can get reference to it again in the future.

The next task is to find the thing_id for the thing that you want to control. For me, this was the Philips HUE light that I named "Tide Light". The thing_id was the default name that the Things Gateway tried to give it when it was first paired. If you didn't take note of that, you can fetch it again by using the command line curl example from the Local Token Service page shown above. Unfortunately, that will return a rather dense block of unformatted json text. I piped the output through json_pp and then into an editor to make it easier to search for my device called "Tide Light". Once I found it, I looked for the associated color property and noted the href entry under it.

$ curl -H "Authorization: Bearer XDZkRTVK2fLw...IVEMZiZ9Z" \
-H "Accept: application/json" --insecure \
http://gateway.local/things | json_pp | vim -



{
   "properties" : {
      "color" : {
         "type" : "string",
         "href" : "/things/zb-0017880103415d70/properties/color"
      },
      "on" : {
         "type" : "boolean",
         "href" : "/things/zb-0017880103415d70/properties/on"
      }
   },
   "type" : "onOffColorLight",
   "name" : "Tide Light",
   "links" : [
      {
         "rel" : "properties",
         "href" : "/things/zb-0017880103415d70/properties"
      },
      {
         "href" : "/things/zb-0017880103415d70/actions",
         "rel" : "actions"
      },
      {
         "href" : "/things/zb-0017880103415d70/events",
         "rel" : "events"
      },
      {
         "rel" : "alternate",
         "href" : "/things/zb-0017880103415d70",
         "mediaType" : "text/html"
      },
      {
         "rel" : "alternate",
         "href" : "ws://gateway.local/things/zb-0017880103415d70"
      }
   ],
   "description" : "",
   "href" : "/things/zb-0017880103415d70",
   "actions" : {},
   "events" : {}
}
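
If you'd rather do that lookup programmatically than eyeball the JSON dump, a small sketch along these lines also works. It assumes the same gateway.local hostname and authorization token as above, uses the third-party requests library, and the find_thing_id helper name is my own, not part of the Things Gateway API:

import requests

def find_thing_id(things_gateway_auth_key, thing_name="Tide Light"):
    # Fetch the thing descriptions from the Things Gateway, just as the
    # curl example above does, then pull the thing id out of the color href.
    response = requests.get(
        "http://gateway.local/things",
        headers={
            'Accept': 'application/json',
            'Authorization': 'Bearer {}'.format(things_gateway_auth_key),
        }
    )
    response.raise_for_status()
    things = response.json()
    if isinstance(things, dict):
        # a single thing description, like the dump shown above
        things = [things]
    for a_thing in things:
        if a_thing.get("name") == thing_name:
            # the href looks like "/things/zb-0017880103415d70/properties/color"
            return a_thing["properties"]["color"]["href"].split("/")[2]
    return None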


Now that we can see how to get and change information on a device controlled by the Things Gateway, we can start having fun with it.  To run my code, the table below shows the prerequisites.


Requirements & Parts List:
  • Item: A Raspberry Pi running the Things Gateway with the associated hardware from Part 2 of this series.
    What's it for? This is the base platform that we'll be adding onto.
    Where I got it: From Part 2 of this series.
  • Item: DIGI XStick
    What's it for? This allows the Raspberry Pi to talk the ZigBee protocol - there are several models, make sure you get the XU-Z11 model.
    Where I got it: The only place that I could find this was Mouser Electronics.
  • Item: Philips Hue White & Color Ambiance bulb
    What's it for? This will be the Tide Light. Set up one with a HUE Bridge with instructions from Part 4 of this series or independently from Part 5 of this series.
    Where I got it: Home Depot.
  • Item: Weather Underground developer account
    What's it for? This is where the tide data comes from.
    Where I got it: The developer account is free and you can get one directly from Weather Underground.
  • Item: a computer with Python 3.6
    What's it for? My tide_light.py code was written with Python 3.6. The RPi that runs the Things Gateway has only 3.5. To run my code, you'll need to either install 3.6 on the RPi or run the tide light on another machine.
    Where I got it: My workstation has Python 3.6 by default.
Step 1: Install the software modules required to run tide_light.py:

$ sudo pip3 install configman
$ sudo pip3 install webthing
$ git clone https://github.com/twobraids/pywot.git
$ cd pywot
$ export PYTHONPATH=$PYTHONPATH:$PWD
$ cd demo
$ cp tide_light_sample.ini tide_light.ini

Step 2: Recall that the Tide Light is to reflect the real time tide level at some configurable location.  You need to select a location.  Weather Underground doesn't supply tide data for every location.  Unfortunately, I can't find a list of the locations for which they do supply information. You may have to do some trial and error.  I was lucky and found good information on my first try: Waldport, OR.

Edit the tide_light.ini file to reflect your selection of city & state, as well as your Weather Underground access key, the thing_id of your Philips HUE bulb, and the Things Gateway auth key:

# the name of the city (use _ for spaces)
city_name=INSERT CITY NAME HERE

# the two letter state code
state_code=INSERT STATE CODE HERE

# the id of the color bulb to control
thing_id=INSERT THING ID HERE

# the api key to access the Things Gateway
things_gateway_auth_key=INSERT THINGS GATEWAY AUTH KEY HERE

# the api key to access Weather Underground data
weather_underground_api_key=INSERT WEATHER UNDERGROUND KEY TOKEN HERE


Step 3: Run the tide light like this:

$ ./tide_light.py --admin.conf=tide_light.ini


You can always override the values in the ini file with command line switches:

$ ./tide_light.py --admin.conf=tide_light.ini --city=crescent_city --state_code=CA


So how does the Tide Light actually work? Check out the source code. It starts with downloading tide tables in the manner shown in the code quote near the top of this page. The program takes the time of the next high/low tide cycle and divides it into 120 time increments. These increments correspond to the 120 colors in the low-to-high and high-to-low color tables. As each time increment passes, the next color is selected from the currently active color table.
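
The general idea might look something like the simplified sketch below. This is not the actual tide_light.py code; the waypoint colors and helper names are my own, chosen to match the green-to-red and red-to-green transitions described earlier:

def interpolate_colors(waypoints, steps=120):
    # Expand a short list of (r, g, b) waypoints into `steps` evenly
    # blended colors by linear interpolation between adjacent waypoints.
    table = []
    segments = len(waypoints) - 1
    for i in range(steps):
        position = i * segments / (steps - 1)
        index = min(int(position), segments - 1)
        fraction = position - index
        start, end = waypoints[index], waypoints[index + 1]
        table.append(tuple(
            round(s + (e - s) * fraction) for s, e in zip(start, end)
        ))
    return table

# low tide -> high tide: green, yellow, orange, red
rising_colors = interpolate_colors(
    [(0, 255, 0), (255, 255, 0), (255, 165, 0), (255, 0, 0)])
# high tide -> low tide: red, magenta, blue, green
falling_colors = interpolate_colors(
    [(255, 0, 0), (255, 0, 255), (0, 0, 255), (0, 255, 0)])

def color_for_moment(seconds_since_last_tide, seconds_in_half_cycle, rising):
    # Pick the table entry matching how far we are through the current
    # low-to-high or high-to-low half of the tide cycle.
    table = rising_colors if rising else falling_colors
    increment = min(
        int(seconds_since_last_tide / seconds_in_half_cycle * len(table)),
        len(table) - 1)
    r, g, b = table[increment]
    return '#{:02X}{:02X}{:02X}'.format(r, g, b)

The resulting hex string is exactly the kind of value the color-setting PUT request near the top of this post expects for a_color.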

Epilogue: I learned a lot about tides this week.  Having spent much of my life in the Rocky Mountains, I've never really had to pay attention to tides.  While I live in Oregon now, I just don't go to the coast that much. I didn't realize that tides were so variable.  I drove out to the coast to take photos of high and low tides to accompany this blog post, but surprisingly found that on average days the tides of the Oregon coast aren't all that dramatic.  In fact, my high and low tide photos from Nye Beach in Newport, OR were nearly indistinguishable.  My timing was poor, the tides are more interesting at the new moon and full moon times.  This week is not the right week for drama.   The photos that I published here are, for low tide, from May 17, 2017 and, for high tide, January 18, 2018.  The high tide photo was from an extreme event where a high tide and an offshore storm conspired for record-breaking waves.
low tide 2017-05-17
high tide 2018-01-18

Mozilla Addons Blog: Add-ons at the San Francisco All Hands Meeting

Mozilla planet - to, 21/06/2018 - 22:01

Last week, more than 1,200 Mozillians from around the globe converged on San Francisco, California, for Mozilla’s biannual All Hands meeting to celebrate recent successes, learn more about products from around the company, and collaborate on projects currently in flight.

For the add-ons team, this meant discussing tooling improvements for extension developers, reviewing upcoming changes to addons.mozilla.org (AMO), sharing what’s in store for the WebExtensions API, and checking in on initiatives that help users discover extensions. Here are some highlights:

Developer Tools

During a recent survey, participating extension developers noted two stand-out tools for development: web-ext, a command line tool that can run, lint, package, and sign an extension; and about:debugging, a page where developers can temporarily install their extensions for manual testing. There are improvements coming to both of these tools in the coming months.

In the immediate future, we want to add a feature to web-ext that would let developers submit their extensions to AMO. Our ability to add this feature is currently blocked by how AMO handles extension metadata. Once that issue is resolved, you can expect to see web-ext support a submit command. We also discussed implementing a create command that would generate a standard extension template for developers to start from.

Developers can currently test their extensions manually by installing them through about:debugging. Unfortunately, these installations do not persist once the browser is closed or restarted. Making these installations persistent is on our radar, and now that we are back from the All Hands, we will be looking at developing a plan and finding resources for implementation.

Addons.mozilla.org (AMO)

During the next three months, the AMO engineering team will prioritize work around improving user rating and review flows, improving the code review tools for add-on reviewers, and converting dictionaries to WebExtensions.

Engineers will also tackle a project to ensure that users who download Firefox because they want to install a particular extension or theme from AMO are able to successfully complete the installation process. Currently, users who download Firefox from a listing on AMO are not returned to AMO when they start Firefox for the first time, making it hard for them to finish installing the extension they want. By closing this loop, we expect to see an increase in extension and/or theme installations.

WebExtensions APIs

Several new and enhanced APIs have landed in Firefox since January, and more are on their way. In the next six months, we anticipate landing WebExtensions APIs for clipboard support, bookmarks and session management (including bookmark tags and further expansions of the theming API).

Additionally, we’ll be working towards supporting visual overlays (like notification bars, floating panels, popups, and toolbars) by the end of the year.

Help Users Find Great Extensions Faster

This year, we are focusing on helping Firefox users find and discover great extensions quickly. We have made a few bets on how we can better meet user needs by recommending specific add-ons. In San Francisco, we checked in on the status of projects currently underway:

Recommending extensions to users on AMO

In May, we started testing recommendations on listing pages for extensions commonly co-installed by other users.

A screenshot of the recommender feature on AMO.

Results so far have shown that people are discovering and installing more relevant extensions from these recommendations than the control group, who only sees generally popular extensions. We will continue to make refinements and fully graduate it into AMO in the second half of the year.

(For our privacy-minded friends: you can learn more about how Firefox uses data to improve its products by reading the Firefox Privacy Notice.)

Adding extensions to the onboarding tour for new Firefox users.

We want to make users aware of the benefits of customizing their browser soon after installing Firefox. We’re currently testing a few prototypes of a new onboarding flow.

And more!

We have more projects to improve extension discovery and user satisfaction on our Trello.

Join Us

Are you interested in contributing to the add-ons ecosystem? Check out our wiki to see a list of current contribution opportunities.

 

The post Add-ons at the San Francisco All Hands Meeting appeared first on Mozilla Add-ons Blog.


Air Mozilla: Reps Weekly Meeting, 21 Jun 2018

Mozilla planet - to, 21/06/2018 - 18:00

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.


The Rust Programming Language Blog: Announcing Rust 1.27

Mozilla planet - to, 21/06/2018 - 02:00

The Rust team is happy to announce a new version of Rust, 1.27.0. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.27.0 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.27.0 on GitHub.

Additionally, we would like to draw attention to something: just before the release of 1.27.0, we found a bug in the ‘default match bindings’ feature introduced in 1.26.0 that can possibly introduce unsoundness. Since it was discovered very late in the release process, and has been present since 1.26.0, we decided to stick to our release train model. We expect to put out a 1.27.1 with that fix applied soon, and if there’s demand, possibly a 1.26.3 as well. More information about the specifics here will come in that release announcement.

What’s in 1.27.0 stable

This release has two big language features that people have been waiting for. But first, a small comment on documentation: All books in the Rust Bookshelf are now searchable! For example, here’s a search of “The Rust Programming Language” for ‘borrow’. This will hopefully make it much easier to find what you’re looking for. Additionally, there’s one new book: the rustc Book. This book explains how to use rustc directly, as well as some other useful information, like a list of all lints.

SIMD

Okay, now for the big news: the basics of SIMD are now available! SIMD stands for “single instruction, multiple data.” Consider a function like this:

pub fn foo(a: &[u8], b: &[u8], c: &mut [u8]) {
    for ((a, b), c) in a.iter().zip(b).zip(c) {
        *c = *a + *b;
    }
}

Here, we’re taking two slices, and adding the numbers together, placing the result in a third slice. The simplest possible way to do this would be to do exactly what the code does, and loop through each set of elements, add them together, and store it in the result. However, compilers can often do better. LLVM will often “autovectorize” code like this, which is a fancy term for “use SIMD.” Imagine that a and b were both 16 elements long. Each element is a u8, and so that means that each slice would be 128 bits of data. Using SIMD, we could put both a and b into 128 bit registers, add them together in a single instruction, and then copy the resulting 128 bits into c. That’d be much faster!

While stable Rust has always been able to take advantage of autovectorization, sometimes, the compiler just isn’t smart enough to realize that we can do something like this. Additionally, not every CPU has these features, and so LLVM may not use them so your program can be used on a wide variety of hardware. So, in Rust 1.27, the addition of the std::arch module allows us to use these kinds of instructions directly, which means we don’t need to rely on a smart compiler. Additionally, it includes some features that allow us to choose a particular implementation based on various criteria. For example:

#[cfg(all(any(target_arch = "x86", target_arch = "x86_64"),
          target_feature = "avx2"))]
fn foo() {
    #[cfg(target_arch = "x86")]
    use std::arch::x86::_mm256_add_epi64;
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::_mm256_add_epi64;

    unsafe {
        _mm256_add_epi64(...);
    }
}

Here, we use cfg flags to choose the correct version based on the machine we’re targeting; on x86 we use that version, and on x86_64 we use its version. We can also choose at runtime:

fn foo() {
    #[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
    {
        if is_x86_feature_detected!("avx2") {
            return unsafe { foo_avx2() };
        }
    }

    foo_fallback();
}

Here, we have two versions of the function: one which uses AVX2, a specific kind of SIMD feature that lets you do 256-bit operations. The is_x86_feature_detected! macro will generate code that detects if your CPU supports AVX2, and if so, calls the foo_avx2 function. If not, then we fall back to a non-AVX implementation, foo_fallback. This means that our code will run super fast on CPUs that support AVX2, but still work on ones that don’t, albeit slower.

If all of this seems a bit low-level and fiddly, well, it is! std::arch is specifically primitives for building these kinds of things. We hope to eventually stabilize a std::simd module with higher-level stuff in the future. But landing the basics now lets the ecosystem experiment with higher level libraries starting today. For example, check out the faster crate. Here’s a code snippet with no SIMD:

let lots_of_3s = (&[-123.456f32; 128][..]).iter()
    .map(|v| {
        9.0 * v.abs().sqrt().sqrt().recip().ceil().sqrt() - 4.0 - 2.0
    })
    .collect::<Vec<f32>>();

To use SIMD with this code via faster, you’d change it to this:

let lots_of_3s = (&[-123.456f32; 128][..]).simd_iter()
    .simd_map(f32s(0.0), |v| {
        f32s(9.0) * v.abs().sqrt().rsqrt().ceil().sqrt() - f32s(4.0) - f32s(2.0)
    })
    .scalar_collect();

It looks almost the same: simd_iter instead of iter, simd_map instead of map, f32s(2.0) instead of 2.0. But you get a SIMD-ified version generated for you.

Beyond that, you may never write any of this yourself, but as always, the libraries you depend on may. For example, the regex crate has already added support, and a new release will contain these SIMD speedups without you needing to do anything at all!

dyn Trait

Rust’s trait object syntax is one that we ultimately regret. If you’ll recall, given a trait Foo, this is a trait object:

Box<Foo>

However, if Foo were a struct, it’d just be a normal struct placed inside a Box<T>. When designing the language, we thought that the similarity here was a good thing, but experience has demonstrated that it is confusing. And it’s not just for the Box<Trait> case; impl SomeTrait for SomeOtherTrait is also technically valid syntax, but you almost always want to write impl<T> SomeTrait for T where T: SomeOtherTrait instead. Same with impl SomeTrait, which looks like it would add methods or possibly default implementations but in fact adds inherent methods to a trait object. Finally, with the recent addition of impl Trait syntax, it’s impl Trait vs Trait when explaining things, and so that feels like Trait is what you should use, given that it’s shorter, but in reality, that’s not always true.

As such, in Rust 1.27, we have stabilized a new syntax, dyn Trait. A trait object now looks like this:

// old     => new
Box<Foo>   => Box<dyn Foo>
&Foo       => &dyn Foo
&mut Foo   => &mut dyn Foo

And similarly for other pointer types, Arc<Foo> is now Arc<dyn Foo>, etc. Due to backwards compatibility, we cannot remove the old syntax, but we have included a lint, which is set to allow by default, called bare_trait_objects. If you want to lint against the older syntax, you can turn it on. We thought that it would throw far too many warnings to turn on by default at present.

Incidentally, we’re working on a tool called rustfix that can automatically upgrade your code to newer idioms. It uses these sorts of lints to do so. Expect to hear more about rustfix in a future announcement.

#[must_use] on functions

Finally, the #[must_use] attribute is getting an upgrade: it can now be used on functions.

Previously, it only applied to types, like Result<T, E>. But now, you can do this:

#[must_use]
fn double(x: i32) -> i32 {
    2 * x
}

fn main() {
    double(4); // warning: unused return value of `double` which must be used

    let _ = double(4); // (no warning)
}

We’ve also enhanced several bits of the standard library to make use of this; Clone::clone, Iterator::collect, and ToOwned::to_owned will all start warning if you don’t use their results, helping you notice expensive operations you may be throwing away by accident.

See the detailed release notes for more.

Library stabilizations

Several new APIs were stabilized this release:

See the detailed release notes for more.

Cargo features

Cargo has two small upgrades this release. First, it now takes a --target-dir flag if you’d like to change the target directory for a given invocation.

Additionally, a tweak to the way Cargo deals with targets has landed. Cargo will attempt to automatically discover tests, examples, and binaries within your project. However, sometimes explicit configuration is needed. But the initial implementation had a problem: let’s say that you have two examples, and Cargo is discovering them both. You want to tweak one of them, and so you add a [[example]] to your Cargo.toml to configure its settings. Cargo currently sees that you’ve set one explicitly, and therefore, doesn’t attempt to do any autodetection for the others. That’s quite surprising.

As such, we’ve added several ‘auto’ keys to Cargo.toml. We can’t fix this behavior without possibly breaking projects that may have inadvertently been relying on it, and so, if you’d like to configure some targets, but not others, you can set the autoexamples key to true in the [package] section.

See the detailed release notes for more.

Contributors to 1.27.0

Many people came together to create Rust 1.27. We couldn’t have done it without all of you. Thanks!


Air Mozilla: The Joy of Coding - Episode 142

Mozilla planet - wo, 20/06/2018 - 19:00

The Joy of Coding - Episode 142 mconley livehacks on real Firefox bugs while thinking aloud.


Air Mozilla: Weekly SUMO Community Meeting, 20 Jun 2018

Mozilla planet - wo, 20/06/2018 - 18:00

Weekly SUMO Community Meeting This is the SUMO weekly call


Botond Ballo: Trip Report: C++ Standards Meeting in Rapperswil, June 2018

Mozilla planet - wo, 20/06/2018 - 16:00
Summary / TL;DR

Project | What's in it? | Status
C++17 | See list | Published!
C++20 | See below | On track
Library Fundamentals TS v2 | source code information capture and various utilities | Published! Parts of it merged into C++17
Concepts TS | Constrained templates | Merged into C++20 with some modifications
Parallelism TS v2 | Task blocks, library vector types and algorithms, and more | Approved for publication!
Transactional Memory TS | Transaction support | Published! Not headed towards C++20
Concurrency TS v1 | future.then(), latches and barriers, atomic smart pointers | Published! Parts of it merged into C++20, more on the way
Executors | Abstraction for where/how code runs in a concurrent context | Final design being hashed out. Ship vehicle not decided yet.
Concurrency TS v2 | See below | Under development. Depends on Executors.
Networking TS | Sockets library based on Boost.ASIO | Published!
Ranges TS | Range-based algorithms and views | Published! Headed towards C++20
Coroutines TS | Resumable functions, based on Microsoft’s await design | Published! C++20 merge uncertain
Modules v1 | A component system to supersede the textual header file inclusion model | Published as a TS
Modules v2 | Improvements to Modules v1, including a better transition path | Under active development
Numerics TS | Various numerical facilities | Under active development
Graphics TS | 2D drawing API | No consensus to move forward
Reflection TS | Static code reflection mechanisms | Sent out for PDTS ballot
Contracts | Preconditions, postconditions, and assertions | Merged into C++20

A few links in this blog post may not resolve until the committee’s post-meeting mailing is published (expected within a few days of June 25, 2018). If you encounter such a link, please check back in a few days.

Introduction

A couple of weeks ago I attended a meeting of the ISO C++ Standards Committee (also known as WG21) in Rapperswil, Switzerland. This was the second committee meeting in 2018; you can find my reports on preceding meetings here (March 2018, Jacksonville) and here (November 2017, Albuquerque), and earlier ones linked from those. These reports, particularly the Jacksonville one, provide useful context for this post.

At this meeting, the committee was focused full-steam on C++20, including advancing several significant features — such as Ranges, Modules, Coroutines, and Executors — for possible inclusion in C++20, with a secondary focus on in-flight Technical Specifications such as the Parallelism TS v2, and the Reflection TS.

C++20

C++20 continues to be under active development. A number of new changes have been voted into its Working Draft at this meeting, which I list here. For a list of changes voted in at previous meetings, see my Jacksonville report.

Technical Specifications

In addition to the C++ International Standard (IS), the committee publishes Technical Specifications (TS) which can be thought of experimental “feature branches”, where provisional specifications for new language or library features are published and the C++ community is invited to try them out and provide feedback before final standardization.

At this meeting, the committee voted to publish the second version of the Parallelism TS, and to send out the Reflection TS for its PDTS (“Proposed Draft TS”) ballot. Several other TSes remain under development.

Parallelism TS v2

The Parallelism TS v2 was sent out for its PDTS ballot at the last meeting. As described in previous reports, this is a process where a draft specification is circulated to national standards bodies, who have an opportunity to provide feedback on it. The committee can then make revisions based on the feedback, prior to final publication.

The results of the PDTS ballot had arrived just in time for the beginning of this meeting, and the relevant subgroups (primarily the Concurrency Study Group) worked diligently during the meeting to go through the comments and address them. This led to the adoption of several changes into the TS working draft:

The working draft, as modified by these changes, was then approved for publication!

Reflection TS

The Reflection TS, based on the reflexpr static reflection proposal, picked up one new feature, static reflection of functions, and was subsequently sent out for its PDTS ballot! I’m quite excited to see efficient progress on this (in my opinion) very important feature.

Meanwhile, the committee has also been planning ahead for the next generation of reflection and metaprogramming facilities for C++, which will be based on value-based constexpr programming rather than template metaprogramming, allowing users to reap expressiveness and compile-time performance gains. In the list of proposals reviewed by the Evolution Working Group (EWG) below, you’ll see quite a few of them are extensions related to constexpr; that’s largely motivated by this direction.

Concurrency TS v2

The Concurrency TS v2 (no working draft yet), whose notable contents include revamped versions of async() and future::then(), among other things, continues to be blocked on Executors. Efforts at this meeting focused on moving Executors forward.

Library Fundamentals TS v3

The Library Fundamentals TS v3 is now “open for business” (has an initial working draft based on the portions of v2 that have not been merged into the IS yet), but no new proposals have been merged to it yet. I expect that to start happening in the coming meetings, as proposals targeting it progress through the Library groups.

Future Technical Specifications

There are (or were, in the case of the Graphics TS) some planned future Technical Specifications that don’t have an official project or working draft at this point:

Graphics

At the last meeting, the Graphics TS, set to contain 2D graphics primitives with an interface inspired by cairo, ran into some controversy. A number of people started to become convinced that, since this was something that professional graphics programmers / game developers were unlikely to use, the large amount of time that a detailed wording review would require was not a good use of committee time.

As a result of these concerns, an evening session was held at this meeting to decide the future of the proposal. A paper arguing we should stay course was presented, as was an alternative proposal for a much lighter-weight “diet” graphics library. After extensive discussion, however, neither the current proposal nor the alternative had consensus to move forward.

As a result – while nothing is ever set in stone and the committee can always change its mind – the Graphics TS is abandoned for the time being.

(That said, I’ve heard rumours that the folks working on the proposal and its reference implementation plan to continue working on it all the same, just not with standardization as the end goal. Rather, they might continue iterating on the library with the goal of distributing it as a third-party library/package of some sort (possibly tying into the committee’s exploration of improving C++’s package management ecosystem).)

Executors

SG 1 (the Concurrency Study Group) achieved design consensus on a unified executors proposal (see the proposal and accompanying design paper) at the last meeting.

At this meeting, another executors proposal was brought forward, and SG 1 has been trying to reconcile it with / absorb it into the unified proposal.

As executors are blocking a number of dependent items, including the Concurrency TS v2 and merging the Networking TS, SG 1 hopes to progress them forward as soon as possible. Some members remain hopeful that it can be merged into C++20 directly, but going with the backup plan of publishing it as a TS is also a possibility (which is why I’m listing it here).

Merging Technical Specifications into C++20

Turning now to Technical Specifications that have already been published, but not yet merged into the IS, the C++ community is eager to see some of these merge into C++20, thereby officially standardizing the features they contain.

Ranges TS

The Ranges TS, which modernizes and Conceptifies significant parts of the standard library (the parts related to algorithms and iterators), has been making really good progress towards merging into C++20.

The first part of the TS, containing foundational Concepts that a large spectrum of future library proposals may want to make use of, has just been merged into the C++20 working draft at this meeting. The second part, the range-based algorithms and utilities themselves, is well on its way: the Library Evolution Working Group has finished ironing out how the range-based facilities will integrate with the existing facilities in the standard library, and forwarded the revised merge proposal for wording review.

Coroutines TS

The Coroutines TS was proposed for merger into C++20 at the last meeting, but ran into pushback from adopters who tried it out and had several concerns with it (which were subsequently responded to, with additional follow-up regarding optimization possibilities).

Said adopters were invited to bring forward a proposal for an alternative / modified design that addressed their concerns, no later than at this meeting, and so they did; their proposal is called Core Coroutines.

Core Coroutines was reviewed by the Evolution Working Group (I summarize the technical discussion below), which encouraged further iteration on this design, but also felt that such iteration should not hold up the proposal to merge the Coroutines TS into C++20. (What’s the point in iterating on one design if another is being merged into the IS draft, you ask? I believe the thinking was that further exploration of the Core Coroutines design could inspire some modifications to the Coroutines TS that could be merged at a later meeting, still before C++20’s publication.)

As a result, the merge of the Coroutines TS came to a plenary vote at the end of the week. However, it did not garner consensus; a significant minority of the committee at large felt that the Core Coroutines design deserved more exploration before enshrining the TS design into the standard. (At least, I assume that was the rationale of those voting against. Regrettably, due to procedural changes, there is very little discussion before plenary votes these days to shed light on why people have the positions they do.)

The window for merging a TS into C++20 remains open for approximately one more meeting. I expect the proponents of the Coroutines TS will try the merge again at the next meeting, while the authors of Core Coroutines will refine their design further. Hopefully, the additional time and refinement will allow us to make a better-informed final decision.

Networking TS

The Networking TS is in a situation where the technical content of the TS itself is in a fairly good shape and ripe for merging into the IS, but its dependency on Executors makes a merger in the C++20 timeframe uncertain.

Ideas have been floated around of coming up with a subset of Executors that would be sufficient for the Networking TS to be based on, and that could get agreement in time for C++20. Multiple proposals on this front are expected at the next meeting.

Modules

Modules is one of the most-anticipated new features in C++. While the Modules TS was published fairly recently, and thus merging it into C++20 is a rather ambitious timeline (especially since there are design changes relative to the TS that we know we want to make), there is a fairly widespread desire to get it into C++20 nonetheless.

I described in my last report that there was a potential path forward to accomplishing this, which involved merging a subset of a revised Modules design into C++20, with the rest of the revised design to follow (likely in the form of a Modules TS v2, and a subsequent merge into C++23).

The challenge with this plan is that we haven’t fully worked out the revised design yet, never mind agreed on a subset of it that’s safe for merging into C++20. (By safe I mean forwards-compatible with the complete design, since we don’t want breaking changes to a feature we put into the IS.)

There was extensive discussion of Modules in the Evolution Working Group, which I summarize below. The procedural outcome was that there was no consensus to move forward with the “subset” plan, but we are moving forward with the revised design at full speed, and some remain hopeful that the entire revised design (or perhaps a larger subset) can still be merged into C++20.

What’s happening with Concepts?

The Concepts TS was merged into the C++20 working draft previously, but excluding certain controversial parts (notably, abbreviated function templates (AFTs)).

As AFTs remain quite popular, the committee has been trying to find an alternative design for them that could get consensus for C++20. Several proposals were heard by EWG at the last meeting, and some refined ones at this meeting. I summarize their discussion below, but in brief, while there is general support for two possible approaches, there still isn’t final agreement on one direction.

The Role of Technical Specifications

We are now about 6 years into the committee’s procedural experiment of using Technical Specifications as a vehicle for gathering feedback based on implementation and use experience prior to standardization of significant features. Opinions differ on how successful this experiment has been so far, with some lauding the TS process as leading to higher-quality, better-baked features, while others feel the process has in some cases just added unnecessary delays.

The committee has recently formed a Direction Group, a small group composed of five senior committee members with extensive experience, which advises the Working Group chairs and the Convenor on matters related to priority and direction. One of the topics the Direction Group has been tasked with giving feedback on is the TS process, and there was an evening session at this meeting to relay and discuss this advice.

The Direction Group’s main piece of advice was that while the TS process is still appropriate for sufficiently large features, it’s not to be embarked on lightly; in each case, a specific set of topics / questions on which the committee would like feedback should be articulated, and success criteria for a TS “graduating” and being merged into the IS should be clearly specified at the outset.

Evolution Working Group

I’ll now write in a bit more detail about the technical discussions that took place in the Evolution Working Group, the subgroup that I sat in for the duration of the week.

Unless otherwise indicated, proposals discussed here are targeting C++20. I’ve categorized them into the usual “accepted”, “further work encouraged”, and “rejected” categories:

Accepted proposals:

  • Standard library compatibility promises. EWG looked at this at the last meeting, and asked that it be revised to only list the types of changes the standard library reserves to make; a second list, of code patterns that should be avoided if you want a guarantee of future library updates not breaking your code, was to be removed as it follows from the first list. The revised version was approved and will be published as a Standing Document (pending a plenary vote).
  • A couple of minor tweaks to the contracts proposal:
    • In response to implementer feedback, the always checking level was removed, and the source location reported for precondition violations was made implementation-defined (previously, it had to be a source location in the function’s caller).
    • Virtual functions currently require that overrides repeat the base function’s pre- and postconditions. We can run into trouble in cases where the base function’s pre- or postcondition, interpreted in the context of the derived class, has a different meaning (e.g. because the derived class shadows a base member’s name, or due to covariant return types). Such cases were made undefined behaviour, with the understanding that this is a placeholder for a more principled solution to forthcome at a future meeting.
  • try/catch blocks in constexpr functions. Throwing an exception is still not allowed during constant evaluation, but the try/catch construct itself can be present as long as only the non-throwing codepaths are exercised at compile time.
  • More constexpr containers. EWG previously approved basic support for using dynamic allocation during constant evaluation, with the intention of allowing containers like std::vector to be used in a constexpr context (which is now happening). This is an extension to that, which allows storage that was dynamically allocated at compile time to survive to runtime, in the form of a static (or automatic) storage duration variable.
  • Allowing virtual destructors to be “trivial”. This lifts an unnecessary restriction that prevented some commonly used types like std::error_code from being used at compile time.
  • Immediate functions. These are a stronger form of constexpr functions, spelt constexpr!, which not only can run at compile time, but have to. This is motivated by several use cases, one of them being value-based reflection, where you need to be able to write functions that manipulate information that only exists at compile-time (like handles to compiler data structures used to implement reflection primitives).
  • std::is_constant_evaluated(). This allows you to check whether a constexpr function is being invoked at compile time or at runtime. Again there are numerous use cases for this, but a notable one is related to allowing std::string to be used in a constexpr context. Most implementations of std::string use a “small string optimization” (SSO) where sufficiently small strings are stored inline in the string object rather than in a dynamically allocated block. Unfortunately, SSO cannot be used in a constexpr context because it requires using reinterpret_cast (and in any case, the motivation for SSO is runtime performance), so we need a way to make the SSO conditional on the string being created at runtime.
  • Signed integers are two’s complement. This standardizes existing practice that has been the case for all modern C++ implementations for quite a while.
  • Nested inline namespaces. In C++17, you can shorten namespace foo { namespace bar { namespace baz { to namespace foo::bar::baz {, but there is no way to shorten namespace foo { inline namespace bar { namespace baz {. This proposal allows writing namespace foo::inline bar::baz. The single-name version, namespace inline foo { is also valid, and equivalent to inline namespace foo {. (A short code sketch of this shorthand, together with std::is_constant_evaluated(), appears just after this list.)
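
To give a flavour of how a couple of the smaller accepted features read, here is a minimal sketch combining std::is_constant_evaluated() with the new nested inline namespace shorthand. The spellings used are the ones that eventually shipped in C++20, so treat this as illustrative rather than as text taken from the proposals themselves.

    #include <type_traits>  // std::is_constant_evaluated

    int runtime_square(int x) { return x * x; }  // stand-in for a runtime-tuned code path

    // Nested inline namespaces: shorthand for namespace lib { inline namespace v2 { ... } }
    namespace lib::inline v2 {
        constexpr int square(int x) {
            if (std::is_constant_evaluated())
                return x * x;              // compile-time path: plain arithmetic only
            else
                return runtime_square(x);  // runtime path may call non-constexpr code
        }
    }

    static_assert(lib::square(4) == 16);   // v2 is inline, so the name is visible as lib::square

    int main() { return lib::square(3); }  // an ordinary call like this takes the runtime path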

There were also a few that, after being accepted by EWG, were reviewed by CWG and merged into the C++20 working draft the same week, and thus I already mentioned them in the C++20 section above.


Proposals for which further work is encouraged:

  • Generalizing alias declarations. The idea here is to generalize C++’s alias declarations (using a = b;) so that you can alias not only types, but also other entities like namespaces or functions. EWG was generally favourable to the idea, but felt that aliases for different kinds of entities should use different syntaxes. (Among other considerations, using the same syntax would mean having to reinstate the recently-removed requirement to use typename in front of a dependent type in an alias declaration.) The author will explore alternative syntaxes for non-type aliases and return with a revised proposal.
  • Allow initializing aggregates from a parenthesized list of values. This idea was discussed at the last meeting and EWG was in favour, but people got distracted by the quasi-related topic of aggregates with deleted constructors. There was a suggestion that perhaps the two problems could be addressed by the same proposal, but in fact the issue of deleted constructors inspired independent proposals, and this proposal returned more or less unchanged. EWG liked the idea and initially approved it, but during Core Working Group review it came to light that there are a number of subtle differences in behaviour between constructor initialization and aggregate initialization (e.g. evaluation order of arguments, lifetime extension, narrowing conversions) that need to be addressed. The suggested guidance was to have the behaviour with parentheses match the behaviour of constructor calls, by having the compiler (notionally) synthesize a constructor to call when this notation is used. The proposal will return with these details fleshed out.
  • Extensions to class template argument deduction. This paper proposed seven different extensions to this popular C++17 feature. EWG didn’t make individual decisions on them yet. Rather, the general guidance was to motivate the extensions a bit better, choose a subset of the more important ones to pursue for C++20, perhaps gather some implementation experience, and come back with a revised proposal.
  • Deducing this. The type of the implicit object parameter (the “this” parameter) of a member function can vary in the same ways as the types of other parameters: lvalue vs. rvalue, const vs. non-const. C++ provides ways to overload member functions to capture this variation (trailing const, ref-qualifiers), but sometimes it would be more convenient to just template over the type of the this parameter. This proposal aims to allow that, with a syntax like this:

    template <typename Self>
    R foo(this Self&& self, /* other parameters */);

    EWG agreed with the motivation, but expressed a preference for keeping information related to the implicit object parameter at the end of the function declaration (where the trailing const and ref-qualifiers are now), leading to a syntax more like this:

    template <typename Self>
    R foo(/* other parameters */) Self&& self

    (the exact syntax remains to be nailed down as the end of a function declaration is a syntactically busy area, and parsing issues have to be worked out).
    EWG also opined that in such a function, you should only be able to access the object via the declared object parameter (self in the above example), and not also using this (as that would lead to confusion in cases where e.g. this has the base type while self has a derived type).
  • constexpr function parameters. The most ambitious constexpr-related proposal brought forward at this meeting, this aimed to allow function parameters to be marked as constexpr, and accordingly act as constant expressions inside the function body (e.g. it would be valid to use the value of one as a non-type template parameter or array bound). It was quickly pointed out that, while the proposal is implementable, it doesn’t fit into the language’s current model of constant evaluation; rather, functions with constexpr parameters would have to be implemented as templates, with a different instantiation for every combination of parameter values. Since this amounts to being a syntactic shorthand for non-type template parameters, EWG suggested that the proposal be reformulated in those terms.
  • Binding returned/initialized objects to the lifetime of parameters. This proposal aims to improve C++’s lifetime safety (and perhaps take one step towards being more like Rust, though that’s a long road) by allowing programmers to mark function parameters with an annotation that tells the compiler that the lifetime of the function’s return value should be “bound” to the lifetime of the parameter (that is, the return value should not outlive the parameter).
    There are several options for the associated semantics if the compiler detects that the lifetime of a return value would, in fact, exceed the lifetime of a parameter:

    • issue a warning
    • issue an error
    • extend the lifetime of the returned object



    In the first case, the annotation could take the form of an attribute (e.g. [[lifetimebound]]). In the second or third case, it would have to be something else, like a context-sensitive keyword (since attributes aren’t supposed to have semantic effects). The proposal authors suggested initially going with the first option in the C++20 timeframe, while leaving the door open for the second or third option later on. (A sketch of what the attribute-based option might look like appears just after this list.)
    EWG agreed that mitigating lifetime hazards is an important area of focus, and something we’d like to deliver on in the C++20 timeframe. There was some concern about the proposed annotation being too noisy / viral. People asked whether the annotations could be deduced (not if the function is compiled separately, unless we rely on link-time processing), or if we could just lifetime-extend by default (not without causing undue memory pressure and risking resource exhaustion and deadlocks by not releasing expensive resources or locks in time). The authors will investigate the problem space further, including exploring ways to avoid the attribute being viral, and comparing their approach to Rust’s, and report back.

  • Nameless parameters and unutterable specializations. In some corner cases, the current language rules do not give you a way to express a partial or explicit specialization of a constrained template (because a specialization requires repeating the constraint with the specialized parameter values substituted in, which does not always result in valid syntax). This proposal invents some syntax to allow expressing such specializations. EWG felt the proposed syntax was scary, and suggested coming back with better motivating examples before pursuing the idea further.
  • How to catch an exception_ptr without even trying. This aims to allow getting at the exception inside an exception_ptr without having to throw it (which is expensive). As a side effect, it would also allow handling exception_ptrs in code compiled with -fno-exceptions. EWG felt the idea had merit, even though performance shouldn’t be the guiding principle (since the slowness of throw is technically a quality-of-implementation issue, although implementations seem to have agreed to not optimize it).
  • Allowing class template specializations in associated namespaces. This allows specializing e.g. std::hash for your own type, in your type’s namespace, instead of having to close that namespace, open namespace std, and then reopen your namespace. EWG liked the idea, but the issue of which names — names in your namespace, names in std, or both — would be visible without qualification inside the specialization, was contentious.
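
As a rough sketch of the attribute-based first option described above, here is the kind of dangling-reference hazard the annotation is meant to catch. The [[lifetimebound]] spelling is just the example name from the proposal discussion and is hypothetical here (Clang offers a similar vendor attribute, [[clang::lifetimebound]]); today's compilers simply ignore the unknown attribute, so nothing below is actually diagnosed without compiler support.

    #include <string>
    #include <string_view>

    // The attribute asks the compiler to tie the returned view's lifetime
    // to the lifetime of the parameter it refers into.
    std::string_view first_word(const std::string& s [[lifetimebound]]) {
        return std::string_view(s).substr(0, s.find(' '));
    }

    int main() {
        std::string greeting = "hello world";
        std::string_view ok = first_word(greeting);  // fine: greeting outlives the view

        // The temporary std::string dies at the end of this full expression, so
        // 'dangling' refers to freed storage -- exactly the case the annotation
        // would let the compiler warn about (option 1) or reject (option 2).
        std::string_view dangling = first_word(std::string("temporary text"));
        (void)dangling;  // not read: reading through it would be undefined behaviour

        return static_cast<int>(ok.size());
    }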

Rejected proposals:

  • Define basic_string_view(nullptr). This paper argued that since it’s common to represent empty strings as a const char* with value nullptr, the constructor of string_view which takes a const char* argument should allow a nullptr value and interpret it as an empty string. Another paper convincingly argued that conflating “a zero-sized string” with “not-a-string” does more harm than good, and this proposal was accordingly rejected.
  • Explicit concept expressions. This paper pointed out that if constrained-type-specifiers (the language machinery underlying abbreviated function templates) are added to C++ without some extra per-parameter syntax, certain constructs can become ambiguous (see the paper for an example). The ambiguity involves “concept expressions”, that is, the use of a concept (applied to some arguments) as a boolean expression, such as CopyConstructible<T>, outside of a requires-clause. The authors proposed removing the ambiguity by requiring the keyword requires to introduce a concept expression, as in requires CopyConstructible<T>. EWG felt this was too much syntactic clutter, given that concept expressions are expected to be used in places like static_assert and if constexpr, and given that the ambiguity is, at this point, hypothetical (pending what happens to AFTs) and there would be options to resolve it if necessary.

Concepts

EWG had another evening session on Concepts at this meeting, to try to resolve the matter of abbreviated function templates (AFTs).

Recall that the main issue here is that, given an AFT written using the Concepts TS syntax, like void sort(Sortable& s);, it’s not clear that this is a template (you need to know that Sortable is a concept, not a type).

The four different proposals in play at the last meeting have been whittled down to two:

  • An updated version of Herb’s in-place syntax proposal, with which the above AFT would be written void sort(Sortable{}& s); or void sort(Sortable{S}& s); (with S in the second form naming the concrete type deduced for this parameter). The proposal also aims to change the constrained-parameter syntax (with which the same function could be written template <Sortable S> void sort(S& s);) to require braces for type parameters, so that you’d instead write template <Sortable{S}> void sort(S& s);. (The motivation for this latter change is to make it so that ConceptName C consistently makes C a value, whether it be a function parameter or a non-type template parameter, while ConceptName{C} consistently makes C a type.)
  • Bjarne’s minimal solution to the concepts syntax problems, which adds a single leading template keyword to announce that an AFT is a template: template void sort(Sortable& s);. (This is visually ambiguous with one of the explicit specialization syntaxes, but the compiler can disambiguate based on name lookup, and programmers can use the other explicit specialization syntax to avoid visual confusion.) This proposal leaves the constrained-parameter syntax alone.
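
To put the two surviving proposals side by side, here is the same declaration written in the Concepts TS syntax and in each of the proposed alternatives. Only the first spelling had implementation experience at the time (via the Concepts TS); the other two are shown purely as proposed notations for comparison.

    // Concepts TS abbreviated function template: looks like a plain function,
    // but is a template because Sortable names a concept, not a type.
    void sort(Sortable& s);

    // Herb's in-place syntax: braces mark the constrained parameter, and can
    // optionally introduce a name for the deduced type.
    void sort(Sortable{}& s);
    void sort(Sortable{S}& s);

    // Bjarne's minimal syntax: a leading 'template' announces that the
    // declaration is a template; the parameters are otherwise unchanged.
    template void sort(Sortable& s);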

Both proposals allow a reader to tell at a glance that an AFT is a template and not a regular function. At the same time, each proposal has downsides as well. Bjarne’s approach annotates the whole function rather than individual parameters, so in a function with multiple parameters you still don’t know at a glance which parameters are concepts (and so e.g. in a case of a Foo&& parameter, you don’t know if it’s an rvalue reference or a forwarding reference). Herb’s proposal messes with the well-loved constrained-parameter syntax.

After an extensive discussion, it turned out that both proposals had enough support to pass, with each retaining a vocal minority of opponents. Neither proposal was progressed at this time, in the hope that some further analysis or convergence can lead to a stronger consensus at the next meeting, but it’s quite clear that folks want something to be done in this space for C++20, and so I’m fairly optimistic we’ll end up getting one of these solutions (or a compromise / variation).

In addition to the evening session on AFTs, EWG looked at a proposal to alter the way name lookup works inside constrained templates. The original motivation for this was to resolve the AFT impasse by making name lookup inside AFTs work more like name lookup inside non-template functions. However, it became apparent that (1) that alone will not resolve the AFT issue, since name lookup is just one of several differences between template and non-template code; but (2) the suggested modification to name lookup rules may be desirable (not just in AFTs but in all constrained templates) anyways. The main idea behind the new rules is that when performing name lookup for a function call that has a constrained type as an argument, only functions that appear in the concept definition should be found; the motivation is to avoid surprising extra results that might creep in through ADL. EWG was supportive of making a change along these lines for C++20, but some of the details still need to be worked out; among them, whether constraints should be propagated through auto variables and into nested templates for the purpose of applying this rule.

Coroutines

As mentioned above, EWG reviewed a modified Coroutines design called Core Coroutines, that was inspired by various concerns that some early adopters of the Coroutines TS had with its design.

Core Coroutines makes a number of changes to the Coroutines TS design:

  • The most significant change, in my opinion, is that it exposes the “coroutine frame” (the piece of memory that stores the compiler’s transformed representation of the coroutine function, where e.g. stack variables that persist across a suspension point are stored) as a first-class object, thereby allowing the user to control where this memory is stored (and, importantly, whether or not it is dynamically allocated).
  • Syntax changes:
    • To how you define a coroutine. Among other motivations, the changes emphasize that parameters to the coroutine act more like lambda captures than regular function parameters (e.g. for reference parameters, you need to be careful that the referred-to objects persist even after a suspension/resumption).
    • To how you call a coroutine. The new syntax is an operator (the initial proposal being [<-]), to reflect that coroutines can be used for a variety of purposes, not just asynchrony (which is what co_await suggests).
  • A more compact API for defining your own coroutine types, with fewer library customization points (basically, instead of specializing numerous library traits that are invoked by compiler-generated code, you overload operator [<-] for your type, with more of the logic going into the definition of that function).

EWG recognized the benefits of these modifications, although there were a variety of opinions as to how compelling they are. At the same time, there were also a few concerns with Core Coroutines:

  • While having the coroutine frame exposed as a first-class object means you are guaranteed no dynamic memory allocations unless you place it on the heap yourself, it still has a compiler-generated type (much like a lambda closure), so passing it across a translation unit boundary requires type erasure (and therefore a dynamic allocation). With the Coroutines TS, the type erasure was more under the compiler’s control, and it was argued that this allows eliding the allocation in more cases.
  • There were concerns about being able to take the sizeof of the coroutine object, as that requires the size being known by the compiler’s front-end, while with the Coroutines TS it’s sufficient for the size to be computed during the optimization phase.
  • While making the customization API smaller, this formulation relies on more new core-language features. In addition to introducing a new overloadable operator, the feature requires tail calls (which could also be useful for the language in general), and lazy function parameters, which have been proposed separately. (The latter is not a hard requirement, but the syntax would be more verbose without them.)

As mentioned, the procedural outcome of the discussion was to encourage further work on the Core Coroutines proposal, while not blocking the merger of the Coroutines TS into C++20 on such work.

While in the end there was no consensus to merge the Coroutines TS into C++20 at this meeting, there remains fairly strong demand for having coroutines in some form in C++20, and I am therefore hopeful that some sort of joint proposal that combines elements of Core Coroutines into the Coroutines TS will surface at the next meeting.

Modules

As of the last meeting, there were two alternative Modules designs before the committee: the recently-published Modules TS, and the alternative proposal from the Clang Modules implementers called Another Take On Modules (“Atom”).

Since the last meeting, the authors of the two proposals have been collaborating to produce a merged proposal that combines elements from both proposals.

The merged proposal accomplishes Atom’s goal of providing a better mechanism for existing codebases to transition to Modules via modularized legacy headers (called legacy header imports in the merged proposal) – basically, existing headers that are not modules, but are treated as-if they were modules by the compiler. It retains the Modules TS mechanism of global module fragments, with some important restrictions, such as only allowing #includes and other preprocessor directives in the global module fragment.

Other aspects of Atom that are part of the merged proposal include module partitions (a way of breaking up the interface of a module into multiple files), and some changes to export and template instantiation semantics.
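
For readers who have not been following the Modules discussions, here is a rough sketch of a module built from these pieces: a primary interface unit, one partition, a global module fragment, and a legacy header import. The spellings are the ones that eventually landed in C++20; the 2018 merged proposal differed in some details, so treat this as an approximation rather than as the proposal text.

    // widgets_shapes.cppm -- a module partition: part of 'widgets', in its own file
    export module widgets:shapes;
    export struct Shape { int sides; };

    // widgets.cppm -- the primary interface unit of module 'widgets'
    module;                    // global module fragment: only preprocessor
    #include <cassert>         // directives such as #include may appear here
    export module widgets;     // module declaration ends the global module fragment
    export import :shapes;     // re-export the 'shapes' partition as part of the interface
    import <vector>;           // header import (a "legacy header import" in the merged proposal)

    export struct Widget {     // exported: visible to code that does 'import widgets;'
        int id;
        std::vector<int> tags;
    };

    inline void check(const Widget& w) {  // not exported: internal to the module
        assert(w.id >= 0);
    }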

EWG reviewed the merged proposal favourably, with a strong consensus for putting these changes into a second iteration of the Modules TS. Design guidance was provided on a few aspects, including tweaks to export behaviour for namespaces, and making export be “inherited”, such that e.g. if the declaration of a structure is exported, then its definition is too by default. (A follow-up proposal is expected for a syntax to explicitly make a structure definition not exported without having to move it into another module partition.) A proposal to make the lexing rules for the names of legacy header units be different from the existing rules for #includes failed to gain consensus.

One notable remaining point of contention about the merged proposal is that module is a hard keyword in it, thereby breaking existing code that uses that word as an identifier. There remains widespread concern about this in multiple user communities, including the graphics community where the name “module” is used in existing published specifications (such as Vulkan). These concerns would be addressed if module were made a context-sensitive keyword instead. There was a proposal to do so at the last meeting, which failed to gain consensus (I suspect because the author focused on various disambiguation edge cases, which scared some EWG members). I expect a fresh proposal will prompt EWG to reconsider this choice at the next meeting.

As mentioned above, there was also a suggestion to take a subset of the merged proposal and put it directly into C++20. The subset included neither legacy header imports nor global module fragments (in any useful form), thereby not providing any meaningful transition mechanism for existing codebases, but it was hoped that it would still be well-received and useful for new codebases. However, there was no consensus to proceed with this subset, because it would have meant having a new set of semantics different from anything that’s implemented today, and that was deemed to be risky.

It’s important to underscore that not proceeding with the “subset” approach does not necessarily mean the committee has given up on having any form of Modules in C++20 (although the chances of that have probably decreased). There remains some hope that the development of the merged proposal might proceed sufficiently quickly that the entire proposal — or at least a larger subset that includes a transition mechanism like legacy header imports — can make it into C++20.

Finally, EWG briefly heard from the authors of a proposal for modular macros, who basically said they are withdrawing their proposal because they are satisfied with Atom’s facility for selectively exporting macros via #export directives, which is being treated as a future extension to the merged proposal.

Papers not discussed

With the continued focus on large proposals that might target C++20 like Modules and Coroutines, EWG has a growing backlog of smaller proposals that haven’t been discussed, in some cases stretching back to two meetings ago (see the committee mailings for a list). A notable item on the backlog is a proposal by Herb Sutter to bridge the two worlds of C++ users — those who use exceptions and those who do not — by extending the exception model in a way that (hopefully) makes it palatable to everyone.

Other Working Groups

Library Groups

Having sat in EWG all week, I can’t report on technical discussions of library proposals, but I’ll mention where some proposals are in the processing queue.

I’ve already listed the library proposals that passed wording review and were voted into the C++20 working draft above.

The following are among the proposals that have passed design review and are undergoing (or awaiting) wording review:

The following proposals are still undergoing design review, and are being treated with priority:

The following proposals are also undergoing design review:

As usual, there is a fairly long queue of library proposals that haven’t started design review yet. See the committee’s website for a full list of proposals.

(These lists are incomplete; see the post-meeting mailing when it’s published for complete lists.)

Study Groups

SG 1 (Concurrency)

I’ve already talked about some of the Concurrency Study Group’s work above, related to the Parallelism TS v2, and Executors.

The group has also reviewed some proposals targeting C++20. These are at various stages of the review pipeline:

Proposals before the Library Evolution Working Group include latches and barriers, C atomics in C++, and a joining thread.

Proposals before the Library Working Group include improvements to atomic_flag, efficient concurrent waiting, and fixing atomic initialization.

Proposals before the Core Working Group include revising the C++ memory model. A proposal to weaken release sequences has been put on hold.

SG 7 (Compile-Time Programming)

It was a relatively quiet week for SG 7, with the Reflection TS having undergone and passed wording review, and extensions to constexpr that will unlock the next generation of reflection facilities being handled in EWG. The only major proposal currently on SG 7’s plate is metaclasses, and that did not have an update at this meeting.

That said, SG 7 did meet briefly to discuss two other papers:

  • PFA: A Generic, Extendable and Efficient Solution for Polymorphic Programming. This aims to make value-based polymorphism easier, using an approach similar to type erasure; a parallel was drawn to the Dyno library. SG 7 observed that this could be accomplished with a pure library approach on top of existing reflection facilities and/or metaclasses (and if it can’t, that would signal holes in the reflection facilities that we’d want to fill).
  • Adding support for type-based metaprogramming to the standard library. This aims to standardize template metaprogramming facilities based on Boost.Mp11, a modernized version of Boost.MPL. SG 7 was reluctant to proceed with this, given that it has previously issued guidance for moving in the direction of constexpr value-based metaprogramming rather than template metaprogramming. At the same time, SG 7 recognized the desire for having metaprogramming facilities in the standard, and urged proponents on the constexpr approach to bring forward a library proposal built on that soon.

SG 12 (Undefined and Unspecified Behaviour)

SG 12 met to discuss several topics this week:

  • Reviewed a proposal to allow implicit creation of objects for low-level object manipulation (basically the way malloc() is used), which aims to standardize existing practice that the current standard wording makes undefined behaviour. (A minimal example of the pattern in question appears just after this list.)
  • Reviewed a proposed policy around preserving undefined behaviour, which argues that in some cases, defining behaviour that was previously undefined can be a breaking change in some sense. SG 12 felt that imposing a requirement to preserve undefined behaviour wouldn’t be realistic, but that proposal authors should be encouraged to identify cases where proposals “break” undefined behaviour so that the tradeoffs can be considered.
  • Held a joint meeting with WG 23 (Programming Language Vulnerabilities) to collaborate further on a document describing C++ vulnerabilities. This meeting’s discussion focused on buffer boundary conditions and type conversions between pointers.
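
For context, the malloc() pattern in question looks something like the minimal sketch below; under a strict reading of the current wording the member accesses are undefined behaviour, because no Point object was ever explicitly created in the allocated storage, even though every implementation handles this as intended.

    #include <cstdlib>

    struct Point { int x, y; };

    int main() {
        // Classic C-style usage: malloc() hands back raw storage and the code
        // just starts treating it as a Point.
        Point* p = static_cast<Point*>(std::malloc(sizeof(Point)));
        if (!p) return 1;
        p->x = 1;   // strictly speaking UB today: no Point object has been
        p->y = 2;   // created in this storage; the proposal blesses this pattern
        int sum = p->x + p->y;
        std::free(p);
        return sum;
    }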

SG 15 (Tooling)

The Tooling Study Group (SG 15) held its second meeting during an evening session this week.

The meeting was heavily focused on dependency / package management in C++, an area that has been getting an increased amount of attention of late in the C++ community.

SG 15 heard a presentation on package consumption vs. development, whose author showcased the Build2 build / package management system and its abilities. Much of the rest of the evening was spent discussing what requirements various segments of the user community have for such a system.

The relationship between SG 15 and the committee is somewhat unusual; actually standardizing a package management system is beyond the committee’s purview, so the SG serves more as a place for innovators in this area to come together and hash out what will hopefully become a de facto standard, rather than advancing any proposals to change the standards text itself.

It was observed that the heavy focus on package management has been crowding out other areas of focus for SG 15, such as tooling related to static analysis and refactoring; it was suggested that perhaps those topics should be split out into another Study Group. As someone whose primary interest in tooling lies in these latter areas, I would welcome such a move.

Next Meetings

The next full meeting of the Committee will be in San Diego, California, the week of November 8th, 2018.

However, in an effort to work through some of the committee’s accumulated backlog, as well as to try to make a push for getting some features into C++20, three smaller, more targeted meetings have been scheduled before then:

  • A meeting of the Library Working Group in Batavia, Illinois, the week of August 20th, 2018, to work through its backlog of wording review for library proposals.
  • A meeting of the Evolution Working Group in Seattle, Washington, from September 20-21, 2018, to iterate on the merged Modules proposal.
  • A meeting of the Concurrency Study Group (with Library Evolution Working Group attendance also encouraged) in Seattle, Washington, from September 22-23, 2018, to iterate on Executors.

(The last two meetings are timed and located so that CppCon attendees don’t have to make an extra trip for them.)

Conclusion

I think this was an exciting meeting, and am pretty happy with the progress made. Highlights included:

  • The entire Ranges TS being on track to be merged into C++20.
  • C++20 gaining standard facilities for contract programming.
  • Important progress on Modules, with a merged proposal that was very well-received.
  • A pivot towards package management, including as a way to make graphical programming in C++ more accessible.

Stay tuned for future reports from me!

Other Trip Reports

Some other trip reports about this meeting include Bryce Lelbach’s, Timur Doumler’s, and Guy Davidson’s. I encourage you to check them out as well!

Categorieën: Mozilla-nl planet

Gregory Szorc: Deterministic Firefox Builds

Mozilla planet - wo, 20/06/2018 - 13:10

As of Firefox 60, the build environment for official Firefox Linux builds switched from CentOS to Debian.

As part of the transition, we overhauled how the build environment for Firefox is constructed. We now populate the environment from deterministic package snapshots and are much more stringent about dependencies and operations being deterministic and reproducible. The end result is that the build environment for Firefox is deterministic enough to enable Firefox itself to be built deterministically.

Changing the underlying operating system environment used for builds was a risky change. Differences in the resulting build could result in new bugs or some users not being able to run the official builds. We figured a good way to mitigate that risk was to make the old and new builds as bit-identical as possible. After all, if the environments produce the same bits, then nothing has effectively changed and there should be no new risk for end-users.

Employing the diffoscope tool, we identified areas where Firefox builds weren't deterministic in the same environment and where there was variance across build environments. We iterated on differences and changed systems so variance would no longer occur. By the end of the process, we had bit-identical Firefox builds across environments.

So, as of Firefox 60, Firefox builds on Linux are deterministic in our official build environment!

That being said, the builds we ship to users are using PGO. And an end-to-end build involving PGO is intrinsically not deterministic because it relies on timing data that varies from one run to the next. And we don't yet have continuous automated end-to-end testing that determinism holds. But the underlying infrastructure to support deterministic and reproducible Firefox builds is there and is not going away. I think that's a milestone worth celebrating.

This milestone required the effort of many people, often working indirectly toward it. Debian's reproducible builds effort gave us an operating system that provided deterministic and reproducible guarantees. Switching Firefox CI to Taskcluster enabled us to switch to Debian relatively easily. Many were involved with non-determinism fixes in Firefox over the years. But Mike Hommey drove the transition of the build environment to Debian and he deserves recognition for his individual contribution. Thanks to all these efforts - and especially Mike Hommey's - we can now say Firefox builds deterministically!

The fx-reproducible-build bug tracks ongoing efforts to further improve the reproducibility story of Firefox. (~300 bugs in its dependency tree have already been resolved!)

Categorieën: Mozilla-nl planet

Niko Matsakis: Proposal for a staged RFC process

Mozilla planet - wo, 20/06/2018 - 06:00

I consider Rust’s RFC process one of our great accomplishments, but it’s no secret that it has a few flaws. At its best, the RFC offers an opportunity for collaborative design that is really exciting to be a part of. At its worst, it can devolve into bickering without any real motion towards consensus. If you’ve not done so already, I strongly recommend reading aturon’s excellent blog posts on this topic.

The RFC process has also evolved somewhat organically over time. What began as “just open a pull request on GitHub” has moved into a process with a number of formal and informal stages (described below). I think it’s a good time for us to take a step back and see if we can refine those stages into something that works better for everyone.

This blog post describes a proposal that arose over some discussions at the Mozilla All Hands. This proposal represents an alternate take on the RFC process, drawing on some ideas from the TC39 process, but adapting them to Rust’s needs. I’m pretty excited about it.

Important: This blog post is meant to advertise a proposal about the RFC process, not a final decision. I’d love to get feedback on this proposal and I expect further iteration on the details. In any case, until the Rust 2018 Edition work is complete, we don’t really have the bandwidth to make a change like this. (And, indeed, most of my personal attention remains on NLL at the moment.) If you’d like to discuss the ideas here, I opened an internals thread.

TL;DR

The TL;DR of the proposal is as follows:

  • Explicit RFC stages. Each proposal moves through a series of explicit stages.
  • Each RFC gets its own repository. These are automatically created by a bot. This permits us to use GitHub issues and pull requests to split up conversation. It also permits a RFC to have multiple documents (e.g., a FAQ).
  • The repository tracks the proposal from the early days until stabilization. Right now, discussions about a particular proposal are scattered across internals, RFC pull requests, and the Rust issue tracker. Under this new proposal, a single repository would serve as the home for the proposal. In the case of more complex proposals, such as impl Trait, the repository could even serve as the home for multiple layered RFCs.
  • Prioritization is now an explicit part of the process. The new process includes an explicit step to move from the “spitballing” stage (roughly “Pre-RFC” today) to the “designing” stage (roughly “RFC” today). This step requires both a team champion, who agrees to work on moving the proposal through implementation and towards stabilization, and general agreement from the team. The aim here is two-fold. First, the teams get a chance to provide early feedback and introduce key constraints (e.g., “this may interact with feature X”). Second, it provides room for a discussion about prioritization: there are often RFCs which are good ideas, but which are not a good idea right now, and the current process doesn’t give us a way to specify that.
  • There is more room for feedback on the final, implemented design. In the new process, once implementation is complete, there is another phase where we (a) write an explainer describing how the feature works and (b) issue a general call for evaluation. We’ve done this before – such as cramertj’s call for feedback on impl Trait, aturon’s call to benchmark incremental compilation, or alexcrichton’s push to stabilize some subset of procedural macros – but each of those was an informal effort, rather than an explicit part of the RFC process.

The current process

Before diving into the new process, I want to give my view of the current process by which an idea becomes a stable feature. This goes beyond just the RFC itself. In fact, there are a number of stages, though some of them are informal or sometimes skipped:

  • Pre-RFC (informal): Discussions take place – often on internals – about the shape of the problem to be solved and possible proposals.
  • RFC: A specific proposal is written and debated. It may be changed during this debate as a result of points that are raised.
    • Steady state: At some point, the discussion reaches a “steady state”. This implies a kind of consensus – not necessarily a consensus about what to do, but a consensus on the pros and cons of the feature and the various alternatives.
      • Note that reaching a steady state does not imply that no new comments are being posted. It just implies that the content of those comments is not new.
    • Move to merge: Once the steady state is reached, the relevant team(s) can move to merge the RFC. This begins with a bunch of checkboxes, where each team member indicates that they agree that the RFC should be merged; in some cases, blocking concerns are raised (and resolved) during this process.
    • FCP: Finally, once the team has assented to the merge, the RFC enters the Final Comment Period (FCP). This means that we wait for 10 days to give time for any final arguments to arise.
  • Implementation: At this point, a tracking issue on the Rust repo is created. This will be the new home for discussion about the feature. We can also start writing code, which lands under a feature gate.
    • Refinement: Sometimes, after implementation the feature, we find that the original design was inconsistent, in which case we might opt to alter the spec. Such alterations are discussed on the tracking issue – for significant changes, we will typically open a dedicated issue and do an FCP process, just like with the original RFC. A similar procedure happens for resolving unresolved questions.
  • Stabilization: The final step is to move to stabilize. This is always an FCP decision, though the precise protocol varies. What I consider Best Practice is to create a dedicated issue for the stabilization: this issue should describe what is being stabilized, with an emphasis on (a) what has changed since the RFC, (b) tests that show the behavior in practice, and (c) what remains to be stabilized. (An example of such an issue is #48453, which proposed to stabilize the ? in main feature.)

Proposal for a new process

The heart of the new proposal is that each proposal should go through a series of explicit stages, depicted graphically here (you can also view this directly on Google drawings, where the oh-so-important emojis work better):

You’ll notice that the stages are divided into two groups. The stages on the left represent phases where significant work is being done: they are given “active” names that end in “ing”, like spitballing, designing, etc. The bullet points below describe the work that is to be done. As will be described shortly, this work is done on a dedicated repository, by the community at large, in conjunction with at least one team champion.

The stages on the right represent decision points, where the relevant team(s) must decide whether to advance the RFC to the next stage. The bullet points below represent the questions that the team must answer. If the answer is Yes, then the RFC can proceed to the next stage – note that sometimes the RFC can proceed, but unresolved questions are added as well, to be addressed at a later stage.

Repository per RFC

Today, the “home” for an RFC changes over the course of the process. It may start in an internals thread, then move to the RFC repo, then to a tracking issue, etc. Under the new process, we would instead create a dedicated repository for each RFC. Once created, the repository would serve as the “main home” for the new proposal from start to finish.

The repositories will live in the rust-rfcs organization. There will be a convenient webpage for creating them; it will create a repo that has an appropriate template and which is owned by the appropriate Rust team, with the creator also having full permissions. These repositories would naturally be subject to Rust’s Code of Conduct and other guidelines.

Note that you do not have to seek approval from the team to create a RFC repository. Just like opening a PR, creating a repository is something that anyone can do. The expectation is that the team will be tracking new repositories that are created (as well as those seeing a lot of discussion) and that members of the team will get involved when the time is right.

The goal here is to create the repository early – even before the RFC text is drafted, and perhaps before there exists a specific proposal. This allows joint authorship of RFCs and iteration in the repository.

In addition to creating a “single home” for each proposal, having a dedicated repository allows for a number of new patterns to emerge:

  • One can create a FAQ.md that answers common questions and summarizes points that have already reached consensus.
  • One can create an explainer.md that documents the feature and explains how it works – in fact, creating such docs is mandatory during the “implementing” phase of the process.
  • We can put more than one RFC into a single repository. Often, there are complex features with inter-related (but distinct) aspects, and this allows those different parts to move through the stabilization process at a different pace.

The main RFC repository

The main RFC repository (named rust-rfcs/rfcs or something like that) would no longer contain content on its own, except possibly the final draft of each RFC text. Instead, it would primarily serve as an index into the other repositories, organized by stage (similar to the TC39 proposals repository).

The purpose of this repository is to make it easy to see “what’s coming” when it comes to Rust. I also hope it can serve as a kind of “jumping off point” for people contributing to Rust, whether that be through design input, implementation work, or other things.

Team champions and the mechanics of moving an RFC between stages

One crucial role in the new process is that of the team champion. The team champion is someone from the Rust team who is working to drive this RFC to completion. Procedurally speaking, the team champion has two main jobs. First, they will give periodic updates to the Rust team at large of the latest developments, which will hopefully identify conflicts or concerns early on.

The second job is that team champions decide when to try and move the RFC between stages. The idea is that it is time to move between stages when two conditions are met:

  • The discussion on the repository has reached a “steady state”, meaning that there do not seem to be new arguments or counterarguments emerging. This sometimes also implies a general consensus on the design, but not always: it does however imply general agreement on the contours of the design space and the trade-offs involved.
  • There are good answers to the questions listed for that stage.

The actual mechanics of moving an RFC between stages are as follows. First, although not strictly required, the team champion should open an issue on the RFC repository proposing that it is time to move between stages. This issue should contain a draft of the report that will be given to the team at large, which should include a summary of the key points (pro and con) around the design. Think of it like a summary comment today. This issue can go through an FCP period in the same way as today (though without the need for checkmarks) to give people a chance to review the summary.

At that point, the team champion will open a PR on the main repository (rust-rfcs/rfcs). This PR itself will not have a lot of content: it will mostly edit the index, moving the PR to a new stage, and – where appropriate – linking to a specific revision of the text in the RFC repository (this revision then serves as “the draft” that was accepted, though of course further edits can and will occur). It should also link to the issue where the champion proposed moving to the next stage, so that the team can review the comments found there.

The PRs that move an RFC between stages are primarily intended for the Rust team to discuss – they are not meant to be the source of significant discussion, which ought to be taking place on the repository. If one looks at the current RFC process, they might consist of roughly the set of comments that typically occur once FCP is proposed. The teams should ensure that a decision (yay or nay) is reached in a timely fashion.

Finding the best way for teams to govern themselves to ensure prompt feedback remains a work in progress. The TC39 process is all based around regular meetings, but we are hoping to achieve something more asynchronous, in part so that we can be more friendly to people from all time zones, and to ease language barriers. But there is still a need to ensure that progress is made. I expect that weekly meetings will continue to play a role here, if only to nag people.

Making implicit stages explicit

There are two new points in the process that I want to highlight. Both of these represents an attempt to take “implicit” decision points that we used to have and make them more explicit and observable.

The Proposal point and the change from Spitballing to Designing

The very first stage in the RFC is going from the Spitballing phase to the Designing phase – this is done by presenting a Proposal. One crucial point is that there doesn’t have to be a primary design in order to present a proposal. It is ok to say “here are two or three designs that all seem to have advantages, and further design is needed to find the best approach” (often, that approach will be some form of synthesis of those designs anyway).

The main questions to be answered at the proposal have to do with motivation and prioritization. There are a few questions to answer:

  • Is this a problem we want to solve?
    • And, specifically, is this a problem we want to solve now?
  • Do we think we have some realistic ideas for solving it?
    • Are there major things that we ought to dig into?
  • Are there cross-cutting concerns and interactions with other features?
    • It may be that two features which are individually quite good, but which – taken together – blow the language complexity budget. We should always try to judge how a new feature might affect the language (or libraries) as a whole.
    • We may want to extend the process in other ways to make identification of such “cross-cutting” or “global” concerns more first class.

The expectation is that all major proposals need to be connected to the roadmap. This should help to keep us focused on the work we are supposed to be doing. (I think it is possible for RFCs to advance that are not connected to the roadmap, but they need to be simple extensions that could effectively work at any time.)

There is another way that having an explicit Proposal step addresses problems around prioritization. Creating a Proposal requires a Team Champion, which implies that there is enough team bandwidth to see the project through to the end (presuming that people don’t become champions for more than a few projects at a time). If we find that there aren’t enough champions to go around (and there aren’t), then this is a sign we need to grow the teams (something we’ve been trying hard to do).

The Proposal point also offers a chance for other team members to point out constraints that may have been overlooked. These constraints don’t necessarily have to derail the proposal, they may just add new points to be addressed during the Designing phase.

The Candidate point and the Evaluating phase

Another new addition to the process here is the Evaluation phase. The idea here is that, once implementation is complete, we should do two things:

  • Write up an explainer that describes how the feature works in terms suitable for end users. This is a kind of “preliminary” documentation for the feature. It should explain how to enable the feature, what it’s good for, and give some examples of how to use it.
    • For libraries, the explainer may not be needed, as the API docs serve the same purpose.
    • We should in particular cover points where the design has changed significantly since the “Draft” phase.
  • Propose the RFC for Candidate status. If accepted, we will also issue a general call for evaluation. This serves as a kind of “pre-stabilization” notice. It means that people should go take the new feature for a spin, kick the tires, etc. This will hopefully uncover bugs, but also surprising failure modes, ergonomic hazards, or other pitfalls with the design. If any significant problems are found, we can correct them, update the explainer, and repeat until we are satisfied (or until we decide the idea isn’t going to work out).

As I noted earlier, we’ve done this before, but always informally.

Once the evaluation phase seems to have reached a conclusion, we would move to stabilize the feature. The explainer docs would then become the preliminary documentation and be added to a kind of addendum to the Rust book. The docs team would be expected to integrate these docs into the book in smoother form sometime after stabilization.

Conclusion

As I wrote before, this is only a preliminary proposal, and I fully expect us to make changes to it. Timing wise, I don’t think it makes sense to pursue this change immediately anyway: we’ve too much going on with the edition. But I’m pretty excited about revamping our RFC processes both by making stages explicit and adding explicit repositories.

I have hopes that we will find ways to use explicit repositories to drive discussions towards consensus faster. It seems that having the ability, for example, to maintain “auxiliary” documents, such as lists of constraints and rationale, can help to ensure that people’s concerns are both heard and met.

In general, I would also like to start trying to foster a culture of “joint ownership” of in-progress RFCs. Maintaining a good RFC repository is going to be a fair amount of work, which is a great opportunity for people at large to pitch in. This can then serve as a kind of “mentoring on ramp” getting people more involved in the lang team. Similarly, I think that having a list of RFCs that are in the “implementation” phase might be a way to help engage people who’d like to hack on the compiler.

Comments?

Please leave comments in the internals thread for this post.

Credit where credit is due

This proposal is heavily shaped by the TC39 process. This particular version was largely drafted in a big group discussion with wycats, aturon, ag_dubs, steveklabnik, nrc, jntrnr, erickt, and oli-obk, though earlier proposals also involved a few others.

Updates

(I made various simplifications shortly after publishing, aiming to keep the length of this blog post under control and remove what seemed to be somewhat duplicated content.)

Categorieën: Mozilla-nl planet

Air Mozilla: Guidance for H1 Merit and Bonus Award Cycle

Mozilla planet - wo, 20/06/2018 - 01:12

Guidance for H1 Merit and Bonus Award Cycle In part 2 of this two-part video, managers can use this Playbook to assess their employees' performance and make recommendations about bonus and merit.

Categorieën: Mozilla-nl planet

Air Mozilla: Manager Playbook_Performance Assessment

Mozilla planet - wo, 20/06/2018 - 01:01

Manager Playbook_Performance Assessment In part 1 of this two-part video, managers can use this Playbook to help assess their employees' performance and make bonus and merit recommendations.

Categorieën: Mozilla-nl planet

Bryce Van Dyk: Setting up Arcanist for Mozilla development on Windows

Mozilla planet - ti, 19/06/2018 - 22:51

Mozilla is rolling out Phabricator as part of our tooling. However, at the time of writing I was unable to find a straightforward setup to get the Phabricator tooling playing nice on Windows with MozillaBuild.

Right now there are a couple of separate threads around how to interact with Phabricator on Windows:

However, I have stuff waiting for me on Phabricator that I'd like to interact with now, so let's get a work around in place! I started with the Arcanist windows steps, but have adapted them to a MozillaBuild specific environment.

PHP

Arcanist requires PHP. Grab a build from here. The docs for Arcanist indicate the type of build doesn't really matter, but I opted for a thread safe one because that seems like a nice thing to have.

I installed PHP outside of my MozillaBuild directory, but you can put it anywhere. For the sake of example, my install is in C:\Patches\Php\php-7.2.6-Win32-VC15-x64.

We need to enable the curl extension: in the PHP install dir copy php.ini-development to php.ini and uncomment (by removing the ;) the extension=curl line.

Finally, enable PHP to find its extension by uncommenting the extension_dir = "ext" line. The Arcanist instructions suggest setting a fully qualified path, but I found a relative path worked fine.

Arcanist

Create somewhere to store Arcanist and libphutil. Note, these need to be located in the same directory for arc to work.

$ mkdir somewhere/
$ cd somewhere/
somewhere/ $ git clone https://github.com/phacility/libphutil.git
somewhere/ $ git clone https://github.com/phacility/arcanist.git

For me this is C:\Patches\phacility\.

Wire everything into MozillaBuild

Since I want arc to be usable in MozillaBuild until this workaround is no longer required, we're going to modify startup settings. We can do this by changing ...mozilla-build/msys/etc/profile.d and adding to the PATH already being constructed. In my case I've added the paths mentioned earlier, but with MSYS syntax: /c/Patches/Php/php-7.2.6-Win32-VC15-x64:/c/Patches/phacility/arcanist/bin:.

Now arc should run inside newly started MozillaBuild shells.

Credentials

We still need credentials in order to use arc with mozilla-central. For this to work we need a Phabricator account, see here for that. After that's done, in order to get your credentials run arc install-certificate, navigate to the page as instructed and copy your API key back to the command line.

Other problems

There was an issue with the evolve Mercurial extension that would cause Unknown Mercurial log field 'instability'! errors. This should now be fixed in Arcanist. See this bug for more info.

Finally, I had some issues with arc diff based on my Mercurial config. Updating my extensions and running a ./mach bootstrap seemed to help.

Ready to go

Hopefully everything is ready to go at this point. I found working through the Mozilla docs for how to use arc after setup helpful. If have any comments, please let me know either via email or on IRC.

Categorieën: Mozilla-nl planet

Emma Irwin: Call for Feedback! Draft of Goal-Metrics for Diversity & Inclusion in Open Source (CHAOSS)

Mozilla planet - ti, 19/06/2018 - 20:18

[Image: open source stars — opensource.com CC BY-SA 2.0]

 

In the last few months, Mozilla has invested in collaboration with other open source project leaders and academics who care about improving diversity & inclusion in Open Source through the CHAOSS D&I working group. Contributors so far include:

Alexander Serebrenik (Eindhoven University of Technology), Akshita Gupta (Outreachy), Amy Marrich (OpenStack), Anita Sarma (Oregon State University), Bhagashree Uday (Fedora), Daniel Izquierdo (Bitergia), Emma Irwin (Mozilla), Georg Link (University of Nebraska at Omaha), Gina Helfrich (NumFOCUS), Nicole Huesman (Intel) and Sean Goggins (University of Missouri).

Our goals are, first, to establish a set of peer-validated goal-metrics for understanding diversity & inclusion in FOSS; second, to identify technology and research methodologies for understanding the success of our interventions in ways that keep the ethics of privacy and consent at the center; and finally, to document this work in ways that communities can reproduce the reports for themselves.

For Mozilla this follows the recommendations coming out of our D&I research to create Metrics that Matter, and to work across Open Source with other projects trying to solve the same problems. I am very excited to share our first draft of goal-metrics for your feedback.

[Image: D&I Working Group — Initial set of Goal-Metrics]

Demographics

Communication

Contribution

Events

Governance

Leadership

Project Places

Recognition

Ethics

Please note that we know these are incomplete, and we know there are likely existing resources that can improve, or even disprove, some of these — and that is the point of this blog post! Please review and provide feedback via a GitHub issue or pull request, by reaching out to someone in the working group, or by joining our working group call (next one July 20th, 9am PST) — the video link can be found here.

You can find one or more of us at the following events as well:


Categorieën: Mozilla-nl planet

Mozilla VR Blog: Introducing A-Terrain - a cartography component for A-Frame

Mozilla planet - ti, 19/06/2018 - 20:00

Have you ever wanted to make a small web app to share your favorite places with your friends? For example, your favorite photographs from a hike, a view of your favorite peak, your favorite places downtown, or a suggested itinerary for friends who are visiting?

Right now it is difficult to incorporate third-party map data into your own projects. Creating 3D games or VR experiences with real-world maps requires access to proprietary software or closed data ecosystems. Doing it from scratch means pulling data from multiple sources, such as image servers and elevation servers, and it requires substantial math expertise. Often you will also want to stylize the rendering to suit your specific use case: you may want a Tron-like video game aesthetic for your project, yet the building geometry you're forced to work with doesn't allow you to change colors. While there are many map providers, such as Apple and Google Maps, and many viewers, most of these tools are specialized around showing high-fidelity maps that are as true to reality as possible. What's missing is a middle ground where you can take map data and easily put it into your own projects, creating your own mash-ups.

We see A-Terrain as a starting point or demo for how the web can be different. With this component you can build whatever 3D experience you want and use real world data.

We’ve arranged for Cesium ion (see http://cesium.com) to make the data set available for free for people to try out. Currently the dataset includes world elevation, satellite images and 3d buildings for San Francisco.

For example, here is a stylized view of San Francisco from ground level on the Embarcadero:

[Screenshot: a stylized rendering of San Francisco from the Embarcadero]

You can try this example yourself in your browser here (use the WASD or arrow keys to move around):

https://anselm.github.io/aterrain/examples/helloworld/tile.html

This component can also be used as a quick and dirty globe renderer (although if you're really interested in that specific use case then Cesium itself may be more suitable):

[Screenshot: A-Terrain rendering the globe]

I have added some rudimentary navigation controls using hash arguments on the URL. For example, here is a piece of Mt Whitney:

https://anselm.github.io/aterrain/examples/place/index.html#lat=36.57850&lon=-118.29226&elev=1000

[Screenshot: terrain around Mt Whitney]

The real strength of a tool like this is composability: the ability to mix different components together. For example, here are A-Terrain and Mozilla Hubs being used to collaboratively plan a hiking trip to the Grand Canyon:

[Screenshot: A-Terrain inside a Mozilla Hubs room over the Grand Canyon]

Here is the URL for the above. This will take you to a random room ID - share that room ID with your friends to join the same room:




As another example of lightweight composability, I placed a tiny duck on the earth's surface above Oregon. This takes just a few lines of scripting:

[Screenshot: a duck model placed on the globe above Oregon]

This example can be visited here:

https://anselm.github.io/aterrain/examples/helloworld/duck.html

To accomplish all this we leverage A-Frame, a browser-based framework that lets users build 3D environments easily. The A-Frame philosophy is to take complicated behaviors and wrap them up in HTML tags: if you can write ordinary HTML, you can build 3D environments.
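
To give a flavour of what that looks like, here is a rough sketch of a scene that combines A-Terrain with an ordinary A-Frame entity. The a-terrain attribute names and the component script path are illustrative guesses rather than the component's documented API, so treat the duck.html example linked above as the authoritative markup.

  <!-- Sketch only: the a-terrain attributes and script path below are
       illustrative assumptions, not the component's documented API. -->
  <html>
    <head>
      <script src="https://aframe.io/releases/0.8.2/aframe.min.js"></script>
      <script src="js/aterrain.js"></script> <!-- adjust to your checkout -->
    </head>
    <body>
      <a-scene>
        <!-- hypothetical: render terrain centred on a latitude/longitude -->
        <a-entity a-terrain="lat: 44.0; lon: -123.0"></a-entity>
        <!-- any other A-Frame entity composes with it, e.g. a duck model -->
        <a-entity gltf-model="url(assets/duck.gltf)"
                  position="0 1.5 -2" scale="0.1 0.1 0.1"></a-entity>
      </a-scene>
    </body>
  </html>

The point is less the specific attribute names than the shape of the markup: terrain, models, and other components are all just sibling tags inside a scene.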

A-Frame is part of a Mozilla initiative to foster the open web and to raise the bar on what people can create on the web. Using A-Frame, anybody can make 3D, virtual, or augmented reality experiences on the web. These experiences can be shared instantly with anybody else in the world, running in the browser on mobile phones, tablets, and high-end head-mounted displays such as the Oculus Rift and the HTC Vive. You don't need to buy a 3D authoring tool, you don't need to ask anybody's permission to publish your app, you don't publish through an app store, and you don't need a special viewer to see the experience: it just runs, like any ordinary web page.

I want to mention just a few of the folks who've helped bring this work to this point: Lars Bergstrom at Mozilla, Patrick Cozzi at Cesium, Shehzan especially (who was tireless in answering my dumb questions about coordinate re-projections), Blair MacIntyre (who had the initial idea), and Joshua Marinacci (who has been suggesting improvements, acting as a sounding board, and testing this work).

The source code for this project is here:

https://github.com/anselm/aterrain

We're all interested in seeing what kinds of experiences people build and what directions this goes in. I'm especially interested in AR use cases that combine this component with augmented reality frameworks such as Mozilla's recent initiatives here: https://www.roadtovr.com/mozilla-launches-ios-app-experiment-webar/ . Please keep us posted on your work!

Categorieën: Mozilla-nl planet
