Mozilla Nederland (The Dutch Mozilla community)
Planet Mozilla - http://planet.mozilla.org/

Air Mozilla: Web QA Weekly Meeting, 21 Apr 2016

Thu, 21/04/2016 - 18:00

Web QA Weekly Meeting: This is our weekly gathering of Mozilla’s Web QA team, filled with discussion on our current and future projects, ideas, demos, and fun facts.


Liz Henry: That Zarro Boogs feeling

Thu, 21/04/2016 - 06:28

This is my third Firefox release as release manager, and the fifth that I’ve followed closely from the beginning to the end of the release cycle. (31 and 36 as QA lead; 39, 43, and 46 as release manager.) This time I felt more than usually okay with things, even while there was a lot of change in our infrastructure and while we started triaging and following even more bugs than usual. No matter how on top of things I get, there is still chaos and things still come up at the last minute. Stuff breaks, and we never stop finding new issues!

I’m not going into all the details because that would take forever and would mostly be me complaining or blaming myself for things. Save it for the post-mortem meeting. This post is to record my feeling of accomplishment from today.

During the approximately 6 week beta cycle of Firefox development we release around 2 beta versions per week. I read through many bugs nominated as possibly important regressions, and many that need review and assessment to decide if the benefit of backporting warrants the risk of breaking something else.

During this 7 week beta cycle I have made some sort of decision about at least 480 bugs. That usually means that I’ve read many more bugs, since figuring out what’s going on in one may mean reading through its dependencies, duplicates, and see-alsos, or whatever someone randomly mentions in comment 45 of 96.

And today I got to a point I’ve never been at near the end of a beta cycle: Zarro Boogs found!

[Image: list of zero bugs]

This is what Bugzilla says when you do a query and it returns 0. I think everyone likes saying (and seeing) “Zarro Boogs”. Its silliness expresses the happy feeling you get when you have burned down a giant list of bugs.

This particular query is for bugs that anyone at all has nominated for the release management team to pay attention to.

Here is the list of requests for uplift (or backporting, same thing) to the mozilla-beta repo:

[Image: more zero pending requests]

Yes!! Also zarro boogs.

Since we build our release candidate in a week (or a few days) from the mozilla-release repo, I check up on requests to uplift there too:

[Image: list of zero pending requests]

PEAK ZARRO BOOGS.

For the bugs that are unresolved and that I’m still tracking into the 46 release next week, it’s down to 4: Two fairly high volume crashes that may not be actionable yet, one minor issue in a system addon that will be resolved in a planned out-of-band upgrade, and one web compatibility issue that should be resolved soon by an external site. Really not bad!

Our overall regression tracking feeds a release health dashboard shown on displays in many Mozilla offices. Blockers: 0. Known new regressions that we are still working on and haven’t explicitly decided to wontfix: 1. (But this will be fixed by the system addon update once 46 ships.) Carryover regressions: 41; about 15 of them are actually fixed but not marked up correctly yet. The rest are known regressions we shipped with already that still aren’t fixed. Some of those are missed uplift opportunities. We will do better in the next release!

In context, I approved 196 bugs for uplift during beta, and 329 bugs for aurora. And we fix several thousand issues in every release during the approx. 12-week development cycle. Which of those should we pay the most attention to, and which can be backported? Release managers act as a sort of Maxwell’s Demon to let in only particular patches …

Will this grim activity level for the past 7 weeks and my current smug feeling of being on top of regression burndown translate to noticeably better “quality”… for Firefox users? That is hard to tell, but I feel hopeful that it will over time. I like the feeling of being caught up, even temporarily.

[Image: liz in sunglasses with a drink in hand]

Here I am with drink in hand on a sunny afternoon, toasting all the hard working developers, QA testers, beta users, release engineers, PMs, managers and product folks who did most of the actual work to fix this stuff and get it firmly into place in this excellent, free, open source browser. Cheers!

Related posts: Kiva lending and people with disabilities; Bugzilla hijinks, Tuesday March 5

Ehsan Akhgari: Project SpiderNode

Wed, 20/04/2016 - 20:07

Some time around 4 weeks ago, a few of us got together to investigate what it would take to implement the Electron API on top of Gecko. Electron consists of two parts: a Node environment with a few additional Node modules, and a lightweight embedding API for opening windows that point to a local or remote web page in order to display UI. Project Positron tries to create an Electron-compatible runtime built on the Mozilla technology stack, that is, Gecko and SpiderMonkey.

While a few of my colleagues are busy working on Positron itself, I have been working on SpiderNode, which is intended to be used in Positron to implement the Node part of the Electron API.  SpiderNode has been changing rapidly since 3 weeks ago when I made the initial commit.

SpiderNode is loosely based on node-chakracore, which is a port of Node running on top of ChakraCore, the JavaScript engine used in Edge.  We have adopted the node-chakracore build system modifications to support building Node against a different backend.  We’re following the overall structure of the chakrashim module, which implements enough of the V8 API used by Node on top of ChakraCore.  Similarly, SpiderNode has a spidershim module which implements the V8 API on top of SpiderMonkey.

SpiderNode is still in its early days, and is not yet complete.  As such, we still can’t link the Node binary successfully since we’re missing quite a few V8 APIs, but we’re making rapid progress towards finishing the V8 APIs used in Node.  If you’re curious to look at the parts of the V8 API that have been implemented so far, check out the existing tests for spidershim.

I have tried to fix the issues that new contributors to SpiderNode may face.  As things stand right now, you should be able to clone the repository and build it on Linux and OS X (note that as I said earlier we still can’t link the node binary, so the build won’t finish successfully, see README.md for more details).  We have continuous integration set up so that we don’t regress the current state of the builds and tests.  I have also written some documentation that should help you get started!

Please see the current list of issues if you’re interested in contributing to SpiderNode. Note that SpiderNode is under active development, so if you’re considering contributing, it may be a good idea to get in touch with me to avoid working on something that is already being worked on!


Air Mozilla: The Joy of Coding - Episode 54

Wed, 20/04/2016 - 19:00

The Joy of Coding - Episode 54: mconley livehacks on real Firefox bugs while thinking aloud.


Mozilla Addons Blog: Add-ons Update – Week of 2016/04/20

Wed, 20/04/2016 - 18:55

I post these updates every 3 weeks to inform add-on developers about the status of the review queues, add-on compatibility, and other happenings in the add-ons world.

The Review Queues

In the past 3 weeks, 902 add-ons were reviewed:

  • 846 (94%) were reviewed in fewer than 5 days.
  • 27 (3%) were reviewed between 5 and 10 days.
  • 29 (3%) were reviewed after more than 10 days.

There are 73 listed add-ons awaiting review.

You can read about the recent improvements in the review queues here.

If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Add-on reviewers get invited to Mozilla events and earn cool gear with their work. Visit our wiki page for more information.

Compatibility Communications

Most of you should have received an email from us about the future compatibility of your add-ons. You can use the compatibility tool to enter your add-on ID and get some info on what we think is the best path forward for your add-on.

To ensure long-term compatibility, we suggest you start looking into WebExtensions, or use the Add-ons SDK and try to stick to the high-level APIs. There are many XUL add-ons that require APIs that aren’t available in either of these options, which is why we’re also asking you to fill out this survey, so we know which APIs we should look into adding to WebExtensions.

We’re holding regular office hours for Multiprocess Firefox compatibility, to help you work on your add-ons, so please drop in on Tuesdays and chat with us!

Firefox 47 Compatibility

The compatibility blog post for 47 is up. The bulk validation will be run soon. Make sure that the compatibility metadata for your add-on is up to date, so you don’t miss these checks.

As always, we recommend that you test your add-ons on Beta and Firefox Developer Edition to make sure that they continue to work correctly. End users can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Extension Signing

The wiki page on Extension Signing has information about the timeline, as well as responses to some frequently asked questions. The current plan is to remove the signing override preference in Firefox 47 (updated from 46).


Air Mozilla: SuMo Community Call 20th April 2016

Wed, 20/04/2016 - 18:00

SuMo Community Call 20th April 2016: This is the SuMo weekly call. We meet as a community every Wednesday, 17:00 - 17:30 UTC. The etherpad is here: https://public.etherpad-mozilla.org/p/sumo-2016-04-20


Wladimir Palant: Security considerations for password generators

Wed, 20/04/2016 - 14:42

When I started writing my very own password generation extension I didn’t know much about the security aspects. In theory, any hash function should do in order to derive the password because hash functions cannot be reversed, right? Then I started reading and discovered that one is supposed to use PBKDF2. And not just that, you had to use a large number of iterations. But why?

Primary threat scenario: Giving away your master password

That’s the major threat with password generators: some website manages to deduce your master password from the password you used there. And once they have the master password they know all your other passwords as well. But how can this happen if hash functions cannot be reversed? Problem is, one can still guess your master password. They will try “password” as master password first — nope, this produces a different password for their site. Then they will try “password1” and get a match. Ok, now they know that your master password is most likely “password1” (it could still be something else but that’s quite unlikely).

Of course, a number of conditions have to be met for this scenario. First, a website where you have an account has to be malicious — or simply leak its user database, which isn’t too unlikely. Second, they need to know the algorithm you used to generate your password. However, in my case everybody knows now that I’m using Easy Passwords, no need to guess. And even for you it’s generally better if you don’t assume that they won’t figure it out. And third, your master password has to be guessable within “finite” time. The problem is, once people start guessing passwords with GPUs, most passwords fall way too quickly.

So, how does one address this issue? First, the master password clearly needs to be a strong one. But choosing the right hashing algorithm is also important. PBKDF2 makes guessing hard because it is computationally expensive — depending on the number of iterations generating a single password might take a second. A legitimate user won’t notice this delay, somebody who wants to test millions of guesses however will run out of time pretty quickly.

There are more algorithms, e.g. bcrypt and scrypt, which are even better. However, none of them has found its way into Firefox so far. Since Easy Passwords is using the native (fast) PBKDF2 implementation in Firefox, it can use a very high number of iterations without creating noticeable delays for the users. That makes guessing master passwords impractical on current hardware as long as the master password isn’t completely trivial.

To be precise, Easy Passwords is using PBKDF2-HMAC-SHA1 with 262,144 iterations. I can already hear some people exclaiming: “SHA1??? Old and busted!” Luckily, the attacks against SHA1 and even MD5 are all about producing hash collisions, which are completely irrelevant for password generation. Still, I would have preferred using SHA256, but Firefox doesn’t support PBKDF2 with SHA256 yet. So it’s either SHA1 or a JavaScript-based implementation, which would require a significantly reduced iteration count and result in a less secure solution.

Finally, it’s a good measure to use a random salt when hashing passwords — different salts would result in different generated passwords. A truly random salt would usually be unknown to potential attackers and make guessing master passwords impossible. However, that salt would also make recreating passwords on a different device complicated, one would need to back up the salt from the original device and transfer it to the new one. So for Easy Passwords I chose a compromise: the salt isn’t really random, instead the user-defined password name is used as salt. While an attacker will normally be able to guess the password’s name, it still makes his job significantly more complicated.
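
To make the mechanics concrete, here is a minimal Python sketch of this kind of derivation: PBKDF2-HMAC-SHA1 with 262,144 iterations and the site plus the user-chosen password name acting as salt. This is only an illustration of the scheme described above; Easy Passwords itself runs on Firefox’s native PBKDF2 implementation, and the exact salt construction and output encoding shown here are assumptions, not the extension’s actual code.

import hashlib
import string

def derive_password(master_password, site, name, length=16):
    # PBKDF2-HMAC-SHA1 with a high iteration count makes every guess expensive.
    raw = hashlib.pbkdf2_hmac(
        "sha1",
        master_password.encode("utf-8"),
        (site + "\0" + name).encode("utf-8"),  # site + password name act as the salt
        262144,
    )
    # Map the derived bytes onto a character set the target website accepts.
    charset = string.ascii_letters + string.digits
    return "".join(charset[b % len(charset)] for b in raw[:length])

print(derive_password("correct horse battery staple", "example.com", "1"))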

What about other password generators?

In order to check my assumptions I looked into what the other password generators were doing. I found more than twenty password generator extensions for Firefox, and most of their authors apparently didn’t think much about hashing functions. You have to keep in mind that none of them gained significant traction, most likely due to usability issues. The results outlined in the table below should be correct, but I didn’t spend much time figuring out how these extensions work. For a few of them I noticed issues beyond their choice of a hashing algorithm, for others I might have missed these issues.

Extension | User count | Hashing algorithm | Security
--------- | ---------- | ----------------- | --------
Password Hasher | 2491 | SHA1 | Very weak
PwdHash | 2325 | HMAC+MD5 | Very weak [1]
Hash Password Generator | 291 | Custom (same as Magic Password Generator) | Very weak
Password Maker X | 276 | SHA256/SHA1/MD4/MD5/RIPEMD160, optionally with HMAC | Very weak
masterpassword for Firefox | 155 | scrypt, cost parameter 32768, user-defined salt | Medium [2]
uPassword | 115 | SHA1 | Very weak
vPass Password Generator | 88 | TEA, 10 iterations | Weak
Passwordgen For Firefox 1 | 77 | SHA256 | Very weak
Phashword | 57 | SHA-1 | Very weak
Passera | 52 | SHA-512 | Very weak
My Password | 51 | MD5 | Very weak
HashPass Firefox | 48 | MD5/SHA1/SHA256/SHA512 | Very weak
UniPass | 33 | SHA-256, 4,096 iterations | Weak
RndPhrase | 29 | CubeHash | Very weak
PasswordProtect | 28 | SHA1, 10,000 iterations | Weak
PswGen Toolbar v2.0 | 24 | SHA512 | Very weak
UniquePasswordBuilder Addon | 13 | scrypt, cost factor 1024 by default | Strong [3]
hash0 | 9 | PBKDF2+HMAC+SHA256, 100,000 iterations, random salt | Very strong [4]
MS Password Generator | 9 | SHA1 | Very weak
Vault | 9 | PBKDF2+HMAC+SHA1, 8 iterations, fixed salt | Weak
BPasswd2 | 8 | bcrypt, 64 iterations by default, user-defined salt | Weak [5]
Persistent "Magic" Password Generator | 8 | MurmurHash | Very weak
BPasswd | 7 | bcrypt, 64 iterations | Weak
SecPassGen | 2 | PBKDF2+HMAC+SHA1, 10,000 iterations by default | Weak [6]
Magic Password Generator | ? | Custom | Very weak

[1] The very weak hash function isn’t even the worst issue with PwdHash. It also requires you to enter the master password into a field on the web page. The half-hearted attempts to prevent the website from stealing that password are easily circumvented.

[2] Security rating for masterpassword downgraded because (assuming that I understand the approach correctly) scrypt isn’t being applied correctly. The initial scrypt hash calculation only depends on the username and master password. The resulting key is then combined with the site name via SHA-256 hashing. This means that a website only needs to break the SHA-256 hashing and deduce the intermediate key — as long as the username doesn’t change, this key can be used to generate passwords for other websites. This makes breaking scrypt unnecessary; the security rating is still “medium” however because the intermediate key shouldn’t be as guessable as the master password itself.

[3] Security rating for UniquePasswordBuilder downgraded because of the low default cost factor, which it mistakenly labels as “rounds.” Users can select cost factor 16384 manually, which is very recommendable.

[4] hash0 actually went as far as paying for a security audit. Most of the conclusions just reinforced what I already came up with by myself, others were new (e.g. the pointer to window.crypto.getRandomValues() which I didn’t know before).

[5] BPasswd2 allows changing the number of iterations, anything up to 2^100 goes (the Sun will die sooner than this calculation completes). However, the default is merely 2^6 iterations which is a weak protection, and the extension neither indicates that changing the default is required nor does it give useful hints towards choosing a better value.

[6] Security rating for SecPassGen downgraded because the master password is stored in Firefox preferences as clear text.

Additional threats: Shoulder surfing & Co.

Websites aren’t the only threat however; one classic is somebody looking over your shoulder and noting your password. Easy Passwords addresses this by never showing your passwords: it either fills them in automatically or copies them to the clipboard so that you can paste them into the password field yourself. In both scenarios the password never becomes visible.

And what if you leave your computer unattended? Easy Passwords remembers your master password once it has been entered; this is an important usability feature. The security concerns are addressed by “forgetting” the master password again after a given time, 10 minutes by default. And, of course, the master password is never saved to disk.

Usability vs. security: Validating master password

There is one more usability feature in Easy Passwords with the potential to compromise security. When you mistype your master password, Easy Passwords will notify you about it. That’s important because otherwise wrong passwords will get generated and you won’t know why. But how does one validate the master password without storing it?

My initial idea was storing a SHA hash of the master password. Then I realized that it opens the primary threat scenario again: somebody who can get their hands on this SHA hash (e.g. by walking past your computer when it is unattended) can use it to guess your master password. Only store a few characters of the SHA hash? Better but it will still allow an attacker who has both this SHA hash and a generated password to throw away a large number of guesses without having to spend time on calculating the expensive PBKDF2 hash. Wait, why treat this hash differently from other passwords at all?

And that’s the solution I went with. When the master password is set initially it is used to generate a new password with a random salt, using the usual PBKDF2 algorithm. Then this salt and the first two characters of the password are stored. The two characters are sufficient to recognize typos in most cases. They are not sufficient to guess the master password however. And they won’t even provide a shortcut when guessing based on a known generated password — checking the master password hash is just as expensive as checking the generated password itself.
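
A sketch of that idea in Python, again purely illustrative (the function names and encoding are mine, not the extension’s): the stored verifier is just a random salt plus the first two characters of an expensively derived value, so it can confirm a typo but cannot meaningfully speed up guessing.

import hashlib
import os

ITERATIONS = 262144  # same cost as generating a real password

def expensive_hash(master_password, salt):
    return hashlib.pbkdf2_hmac("sha1", master_password.encode("utf-8"), salt, ITERATIONS)

def make_verifier(master_password):
    # Store only a random salt and the first two characters of the derived value.
    salt = os.urandom(16)
    return salt, expensive_hash(master_password, salt).hex()[:2]

def check_master_password(master_password, verifier):
    # Recompute and compare; a mismatch almost certainly means a typo.
    salt, prefix = verifier
    return expensive_hash(master_password, salt).hex()[:2] == prefix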

Encrypting legacy passwords

One requirement for Easy Passwords was dealing with “legacy passwords,” meaning existing passwords that cannot be changed for some reason. Instead of being generated, these passwords have to be stored securely. Luckily, there is a very straightforward solution: the PBKDF2 algorithm can be used to generate an encryption key. The password is then encrypted with AES-256.
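
The post doesn’t spell out the AES mode or key handling, so the following Python sketch fills those in with common choices (AES-256-GCM via the third-party cryptography package, a random nonce, and the same PBKDF2 parameters as above); treat all of it as an assumption about how such a scheme could look, not as Easy Passwords’ implementation.

import hashlib
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def derive_key(master_password, salt):
    # Reuse the expensive PBKDF2 step to turn the master password into a 256-bit key.
    return hashlib.pbkdf2_hmac("sha1", master_password.encode("utf-8"), salt, 262144, dklen=32)

def encrypt_legacy_password(master_password, salt, legacy_password):
    nonce = os.urandom(12)
    ciphertext = AESGCM(derive_key(master_password, salt)).encrypt(nonce, legacy_password.encode("utf-8"), None)
    return nonce + ciphertext

def decrypt_legacy_password(master_password, salt, blob):
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(derive_key(master_password, salt)).decrypt(nonce, ciphertext, None).decode("utf-8")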

My understanding is that AES-encrypted data currently cannot be decrypted without knowing the encryption key. And the encryption key is derived using the same algorithm as Easy Passwords uses for generating passwords, so the security of stored passwords is identical to that of generated ones. The only drawback of such legacy passwords currently seems to be a more complicated backup approach; also, moving the password from one device to another is no longer trivial.

Phishing & Co.

Password generators will generally protect you nicely against phishing: a phishing website can look exactly like the original, but a password generator will still produce a different password for it. But what about malicious scripts injected into a legitimate site? These will still be able to steal your password. On the bright side, they will only compromise your password for a single website.

The question is, how do malicious scripts get to run there in the first place? One option is XSS vulnerabilities; not much can be done about those. But there are also plenty of websites showing password fields on pages that are transmitted unencrypted (plain HTTP, not HTTPS). These can then be manipulated by an attacker who is on the same network as you. The idea is that Easy Passwords could warn about such cases in the future. It should be possible to disable this warning for websites that absolutely don’t support HTTPS, but for others it will hopefully be helpful. Oh, and did I recommend using the Enforce Encryption extension already?

Finally, there is the worst-case scenario: your computer could be infected with a password sniffer. This is really bad because it could intercept your master password. Then again, it could also intercept all the individual passwords as you log into the respective websites, it will merely take a bit longer. I think that there is only one effective solution here: just don’t get infected.

Other threats?

There are probably more threats to consider that I didn’t think of. It might also be that I made a mistake in my conclusions somewhere. So feel free to post your own thoughts in the comments.


Ludovic Hirlimann: Financing Openstreetmap in Africa

Wed, 20/04/2016 - 12:01

The local OSM community in Benin is trying to buy a high-res image of the capital to better map it. They need around 2500€ and have reached 50%. There are only a few days left; 5, 10, or 20 euros would help.

Details http://www.ulule.com/imagerie-cotonou/ .


Chris H-C: Firefox’s Windows XP Users’ Upgrade Path

Tue, 19/04/2016 - 20:00

We’re still trying to figure out what to do with Firefox users on Windows XP.

One option I’ve heard is: Can we just send a Mozillian to each of these users’ houses with a fresh laptop and training in how to migrate apps and data?

( No, we can’t. For one, we can’t uniquely identify who and where these users are (this is by design). For two, even if we could, the Firefox Windows XP userbase is too geographically diverse (as I explained in earlier posts) for “meatspace” activities like these to be effective or efficient. For three, this could be kinda expensive… though, so is supporting extra Operating Systems in our products. )

We don’t have the advertising spend to reach all of these users in the real world, but we do have access to their computers in their houses… so maybe we can inform them that way?

Well, we know we can inform people through their browsers. We have plenty of data from our fundraising drives to that effect… but what do we say?

Can we tell them that their computer is unsafe? Would they believe us if we did?

Can we tell them that their Firefox will stop updating? Will they understand what we mean if we did?

Do these users have the basic level of technical literacy necessary to understand what we have to tell them? And if we somehow manage to get the message across about what is wrong and why,  what actions can we recommend they take to fix this?

This last part is the first thing I’m thinking about, as it’s the most engineer-like question: what is the optimal upgrade strategy for these users? Much more concrete to me than trying to figure out wording, appearance, and legality across dozens of languages and cultures.

Well, we could instruct them to upgrade to Linux. Except that it wouldn’t be an upgrade, it’d be a clean wipe and reinstall from scratch: all the applications would be gone and all of their settings would reset to default. All the data on their machines would be gone unless they could save it somewhere else, and if you imagine a user who is running Windows XP, you can easily imagine that they might not have access to a “somewhere else”. Also, given the average level of technical expertise, I don’t think we can make a Linux migration simple enough for most of these users to understand. These users have already bought into Windows, so switching them away is adding complexity no matter how simplistic we could make it for these users once the switch was over.

We could instruct them to upgrade to Windows 7. There is a clear upgrade path from XP to 7 and the system requirements of the two OSes are actually very similar. (Which is, in a sincere hat-tip to Microsoft, an amazing feat of engineering and commitment to users with lower-powered computers) Once there, if the user is eligible for the Windows 10 upgrade, they can take that upgrade if they desire (the system requirements for Windows 10 are only _slightly_ higher than Windows 7 (10 needs some CPU extensions that 7 doesn’t), which is another amazing feat). And from there, the users are in Microsoft’s upgrade path, and out of the clutches of the easiest of exploits, forever. There are a lot of benefits to using Windows 7 as an upgrade path.

There are a few problems with this:

  1. Finding copies of Windows 7: Microsoft stopped selling copies of Windows 7 years ago, and these days the most reliable way to find a copy is to buy a computer with it already installed. Mozilla likely isn’t above buying computers for everyone who wants them (if it has or can find the money to do so), but software is much easier to deliver than hardware, and is something we already know how to do.
  2. Paying for copies of Windows 7: Are we really going to encourage our users to spend money they may not have on upgrading a machine that still mostly-works? Or is Mozilla going to spend hard-earned dollarbucks purchasing licenses of out-of-date software for everyone who didn’t or couldn’t upgrade?
  3. Windows 7 has passed its mainstream support lifetime (extended support’s still good until 2020). Aren’t we just replacing one problem with another?
  4. Windows 7 System Requirements: Windows XP only needed a 233MHz processor, 64MB of RAM, and 1.5GB of HDD. Windows 7 needs 1GHz, 1GB, and 16GB.

All of these points are problematic, but that last point is at least one I can get some hard numbers for.

We don’t bother asking users how big their disk drives are, so I can’t detect how many users cannot meet Windows 7’s HDD requirements. However, we do measure users’ CPU speeds and RAM sizes (as these are important for sectioning performance-related metrics. If we want to see if a particular perf improvement is even better on lower-spec hardware, we need to be able to divvy users up by their computers’ specifications).

So, at first this seems like a breeze: the question is simply stated and is about two variables that we measure. “How many Windows XP Firefox users are Stuck because they have CPUs slower than 1GHZ or RAM smaller than 1GB?”

But if you thought that for more than a moment, you should probably go back and read my posts about how Data Science is hard. It turns out that getting the CPU speed on Windows involves asking the registry for data, which can fail. So we have a certain amount of uncertainty.
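
To illustrate how that uncertainty turns into a range rather than a single number, here is a toy Python sketch; the field names and records are invented, and the real Telemetry data is nowhere near this tidy.

# Classify hypothetical Windows XP profiles against the Windows 7 minimums
# (1 GHz CPU, 1 GB RAM). A missing CPU speed (failed registry read) means
# we cannot tell, which widens the estimate into a range.
records = [
    {"cpu_mhz": 800,  "ram_mb": 512},   # fails both requirements: Stuck
    {"cpu_mhz": None, "ram_mb": 2048},  # CPU speed unknown: could go either way
    {"cpu_mhz": 1600, "ram_mb": 1024},  # meets both requirements: not Stuck
]

def verdict(record):
    if record["ram_mb"] < 1024:
        return True                     # RAM alone already fails the requirement
    if record["cpu_mhz"] is None:
        return None                     # unknown
    return record["cpu_mhz"] < 1000

verdicts = [verdict(r) for r in records]
definitely_stuck = sum(v is True for v in verdicts)
unknown = sum(v is None for v in verdicts)

lower = definitely_stuck / len(records)
upper = (definitely_stuck + unknown) / len(records)
print(f"Stuck: between {lower:.0%} and {upper:.0%}")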

[Image: windowsXPStuck]

So, after crunching the data and making some simplifying assumptions (like how I don’t expect the amount of RAM or the speed of a user’s CPU to ever decrease over time) we have the following:

Between 40% and 53% of Firefox users running Windows XP are Stuck (which is to say, they can’t be upgraded past Windows XP because they fail at least one of the requirements).

That’s some millions of users who are Stuck no matter what we do about education, advocacy, and software.

Maybe we should revisit the “Mozillians with free laptops” idea, after all?

:chutten

 



Allen Wirfs-Brock: Slide Bite: Grassroots Innovation

Tue, 19/04/2016 - 19:14

[Image: grassroots]

How do we know when we are entering a new computing era? One signal is a reemergence of grassroots innovation. Early in a computing era most technical development resources are still focused on sustaining the mature applications and use cases from the waning era or on exploiting attractive transitional technologies.

The first explorers of the technologies of a new era are rebels and visionaries operating at the fringes. These explorers naturally form grassroots organizations for sharing and socializing their ideas and accomplishments. Such grassroots organizations serve as incubators for the technologies and leaders of the next era.

The Homebrew Computer Club was a grassroots group out of which emerged many leaders of the Personal Computing Era. Now, as the Ambient Computing Era progresses, we see grassroots organizations such as the Nodebots movement and numerous collaborative GitHub projects serving a similar role.


Air Mozilla: Connected Devices Weekly Program Review, 19 Apr 2016

Tue, 19/04/2016 - 19:00

Connected Devices Weekly Program Review: Weekly project updates from the Mozilla Connected Devices team.


Eric Shepherd: Smart people + open discussion + epiphany = a better Web for everyone

Tue, 19/04/2016 - 17:00

One great thing about watching the future of the Web being planned in the open is that you can see how smart people having open discussions, combined with the occasional sudden realization, epiphany, or unexpected spark of creative genius can make the Web a better place for everyone.

This is something I’m reminded of regularly when I read mailing list discussions about plans to implement new Web APIs or browser features. There are a number of different kinds of discussion that take place on these mailing lists, but the ones that have fascinated me the most lately have been the “Intent to…” threads.

There are three classes of “Intent to…” thread:

  • Intent to implement. This thread begins with an announcement that someone plans to begin work on implementing a new feature. This could be an entire API, or a single new function, or anything in between. It could be a change to how an existing technology behaves, for that matter.
  • Intent to ship. This thread starts with the announcement that a feature or technology which has been implemented, or is in the process of being implemented, will be shipped in a particular upcoming version of the browser.
  • Intent to unship. This thread starts by announcing that a previously shipped feature will be removed in a given release of the software. This usually means rolling back a change that had unexpected consequences.

In each of these cases, discussion and debate may arise. Sometimes the discussion is very short, with a few people agreeing that it’s a good (or bad) idea, and that’s that. Other times, the discussion becomes very lengthy and complicated, with proposals and counter-proposals and debates (and, yes, sometimes arguments) about whether it’s a good idea or how to go about doing it the best way possible.

You know… I just realized that this change could be why the following sites aren’t working on nightly builds… maybe we need to figure out a different way to do this.

This sounds great, but what if we add a parameter to this function so we can make it more useful to a wider variety of content by…

The conversation frequently starts innocuously enough, with general agreement or minor suggestions that might improve the implementation, and then, sometimes, out of nowhere someone points out a devastating and how-the-heck-did-we-miss-that flaw in the design that causes the conversation to shift into a debate about the best way to fix the design. Result: a better design that works for more people with fewer side effects.

These discussions are part of what makes the process of inventing the Web in the open great. Anyone who has an interest can offer a suggestion or insight that might totally change the shape of things to come. And by announcing upcoming changes in threads such as these, developers make it easier than ever to get involved in the design of the Web as a platform.

Mozilla is largely responsible for the design process of the Web being an open one. Before our global community became a force to be reckoned with, development crawled along inside the walls of one or two corporate offices. Now, dozens of companies and millions of people are active participants in the design of the Web and its APIs. It’s a legacy that every Mozillian—past, present, and future—can be very proud of.


Wladimir Palant: Introducing Easy Passwords: the new best way to juggle all those passwords

Tue, 19/04/2016 - 13:13

“The password system is broken” – I don’t know how often I’ve heard that phrase already. Yes, passwords suck. Nobody can be expected to remember passwords for dozens of websites. Websites enforcing arbitrary complexity rules (“between 5 and 7 characters, containing at least two upper-case letters and a dog’s name”) don’t make it any better. So far I’ve heard of three common strategies to deal with passwords: write them down, use the same one everywhere, or just hit “forgot password” every time you access the website. None of these are particularly secure or recommendable, and IMHO neither are the suggestions to derive passwords via more or less complicated manual algorithms.

As none of the password-killing solutions gained significant traction so far, password managers still seem to be the best choice for now. However, these often have the disadvantage of relying on a third-party service which you have to trust, or of storing your passwords on disk so that you have to trust their crypto. But there is also this ancient idea to derive individual passwords from a single master password via one-way hashing functions. This is great as the only sensitive piece of data is your master password, and this one you can hopefully just remember.

Now, all the existing password generators have significant usability issues. What if I want to have multiple passwords on a single website? What if different websites share the same login credentials (e.g. all the WordPress blogs)? What if you are required to change your password every few months? What if there is some password which I have to use as is rather than replace it by a generated one? How to deal with that crazy website that doesn’t accept special characters in passwords? Do I have to remember all the websites that I generated passwords for? I haven’t found any solution that would answer all these questions. And I’m not even getting started on security; that is a topic for a separate blog post (spoiler: only one out of twenty password generator extensions for Firefox got crypto right).

[Image: Easy Passwords login prompt]

So last summer I decided to roll my own: Easy Passwords. I’m working on it in my spare time, so it took a while until I considered it ready for general use, but now you can finally go and install it. You set your master password and then you can generate named passwords for any website. You can adjust password length and character set to match the requirements of the website. And if the generated password absolutely won’t do, you can still store your existing password — it will be encrypted securely, only to be decrypted with your master password.

On most websites your password can be filled in with a single click. And Easy Passwords supports website aliases: for some WordPress blog you can edit the site name into “wordpress.com” — done, you will get WordPress passwords there now. And it can show you all your passwords on a single page, you can even print them as a paper backup. This piece of paper has enough information to recreate all your passwords should your hard drive crash, but it will be useless to anybody who doesn’t know your master password.
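
The aliasing idea boils down to mapping one host name onto another before deriving anything. Here is a tiny illustrative Python sketch; the alias table and function are hypothetical, not the extension’s data model.

# Passwords are derived per site name, so an alias simply redirects one host to another.
ALIASES = {"myblog.wordpress.com": "wordpress.com"}  # hypothetical user-defined alias

def effective_site(host):
    return ALIASES.get(host, host)

# Both hosts now resolve to the same site name and therefore get the same passwords.
print(effective_site("myblog.wordpress.com"))  # -> wordpress.com
print(effective_site("wordpress.com"))         # -> wordpress.com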

It’s not perfect of course. For example, the aliasing functionality isn’t very intuitive and could be improved. I also have a few issues listed in the GitHub project, e.g. I’d like to warn about filling in passwords if the website doesn’t use HTTPS. Also, a secure master password is very important so it would be nice to implement some kind of security indicator when the master password is set. I wonder what other issues people come up with, we’ll see.


Karl Dubost: Looking at summary details in HTML5

Tue, 19/04/2016 - 05:01

On the dev-platform mailing-list, Ting-Yu Lin has sent an Intent to Ship: HTML5 <details> and <summary> tags. So what about it?

HTML 5.1 specification describes details as:

The details element represents a disclosure widget from which the user can obtain additional information or controls.

which is not that clear; luckily the specification has some examples. I put one on codepen (you need Firefox Nightly at this time, or Chrome/Opera, or the Safari dev edition to see it). At least the rendering seems to be pretty much the same.

But as usual the devil is in the details (pun not intended at first). In case the developer wants to hide the triangle, the possibilities are for now not interoperable. Think possible Web compatibility issues here. I created another codepen for testing the different scenarios.

In Blink/WebKit world:

summary::-webkit-details-marker { display: none; }

In Gecko world:

summary::-moz-list-bullet { list-style-type: none; }

or

summary { display: block; }

These work, though the summary {display: block;} is a call for catastrophes.

Then on the thread there was the proposal of

summary { list-style-type: none; }

which is indeed working for hiding the arrow, but doesn't do anything whatsoever in Blink and WebKit. So it's not really a reliable solution from a Web compatibility point of view.

Then usually I like to look at what people do on GitHub for their projects. So these are a collection of things on the usage of -webkit-details-marker:

details summary::-webkit-details-marker { display: none; }

/* to change the pointer on hover */
details summary { cursor: pointer; }

/* to style the arrow widget on opening and closing */
details[open] summary::-webkit-details-marker { color: #00F; background: #0FF; }

/* to replace the marker with an image */
details summary::-webkit-details-marker:after { content: icon('file.png'); }

/* using content this time for a unicode character */
summary::-webkit-details-marker { display: none; }
details summary::before { content: "►"; }
details[open] summary::before { content: "▼"; }

JavaScript

On the JavaScript side, it seems there is a popular shim used by a lot of people: details.js

More reading

Otsukare!


The Rust Programming Language Blog: Introducing MIR

Tue, 19/04/2016 - 02:00

We are in the final stages of a grand transformation on the Rust compiler internals. Over the past year or so, we have been steadily working on a plan to change our internal compiler pipeline, as shown here:

[Image: Compiler Flowchart]

That is, we are introducing a new intermediate representation (IR) of your program that we call MIR: MIR stands for mid-level IR, because the MIR comes between the existing HIR (“high-level IR”, roughly an abstract syntax tree) and LLVM (the “low-level” IR). Previously, the “translation” phase in the compiler would convert from full-blown Rust into machine-code-like LLVM in one rather large step. But now, it will do its work in two phases, with a vastly simplified version of Rust – MIR – standing in the middle.

If you’re not a compiler enthusiast, this all might seem arcane and unlikely to affect you directly. But in reality, MIR is the key to ticking off a number of our highest priorities for Rust:

  • Faster compilation time. We are working to make Rust’s compilation incremental, so that when you re-compile code, the compiler recomputes only what it has to. MIR has been designed from the start with this use-case in mind, so it’s much easier for us to save and reload, even if other parts of the program have changed in the meantime.

    MIR also provides a foundation for more efficient data structures and removal of redundant work in the compiler, both of which should speed up compilation across the board.

  • Faster execution time. You may have noticed that in the new compiler pipeline, optimization appears twice. That’s no accident: previously, the compiler relied solely on LLVM to perform optimizations, but with MIR, we can do some Rust-specific optimizations before ever hitting LLVM – or, for that matter, before monomorphizing code. Rust’s rich type system should provide fertile ground for going beyond LLVM’s optimizations.

    In addition, MIR will uncork some longstanding performance improvements to the code Rust generates, like “non-zeroing” drop.

  • More precise type checking. Today’s Rust compiler imposes some artificial restrictions on borrowing, restrictions which largely stem from the way the compiler currently represents programs. MIR will enable much more flexible borrowing, which will in turn improve Rust’s ergonomics and learning curve.

Beyond these banner user-facing improvements, MIR also has substantial engineering benefits for the compiler:

  • Eliminating redundancy. Currently, because we write all of our passes in terms of the full Rust language, there is quite a lot of duplication. For example, both the safety analyses and the backend which produces LLVM IR must agree about how to translate drops, or the precise order in which match expression arms will be tested and executed (which can get quite complex). With MIR, all of that logic is centralized in MIR construction, and the later passes can just rely on that.

  • Raising ambitions. In addition to being more DRY, working with MIR is just plain easier, because it contains a much more primitive set of operations than ordinary Rust. This simplification enables us to do a lot of things that were forbiddingly complex before. We’ll look at one such case in this post – non-zeroing drop – but as we’ll see at the end, there are already many others in the pipeline.

Needless to say, we’re excited, and the Rust community has stepped up in a big way to make MIR a reality. The compiler can bootstrap and run its test suite using MIR, and these tests have to pass on every new commit. Once we’re able to run Crater with MIR enabled and see no regressions across the entire crates.io ecosystem, we’ll turn it on by default (or, you’ll forgive a terrible (wonderful) pun, launch MIR into orbit).

This blog post begins with an overview of MIR’s design, demonstrating some of the ways that MIR is able to abstract away the full details of the Rust language. Next, we look at how MIR will help with implementing non-zeroing drops, a long-desired optimization. If after this post you find you are hungry for more, have a look at the RFC that introduced MIR, or jump right into the code. (Compiler buffs may be particularly interested in the alternatives section, which discusses certain design choices in detail, such as why MIR does not currently use SSA.)

Reducing Rust to a simple core

MIR reduces Rust down to a simple core, removing almost all of the Rust syntax that you use every day, such as for loops, match expressions, and even method calls. Instead, those constructs are translated to a small set of primitives. This does not mean that MIR is a subset of Rust. As we’ll see, many of these primitive operations are not available in real Rust. This is because those primitives could be misused to write unsafe or undesirable programs.

The simple core language that MIR supports is not something you would want to program in. In fact, it makes things almost painfully explicit. But it’s great if you want to write a type-checker or generate assembly code, as you now only have to handle the core operations that remain after MIR translation.

To see what I mean, let’s start by simplifying a fragment of Rust code. At first, we’ll just break the Rust down into “simpler Rust”, but eventually we’ll step away from Rust altogether and into MIR code.

Our Rust example starts out as this simple for loop, which iterates over all the elements in a vector and processes them one by one:

for elem in vec {
    process(elem);
}

Rust itself offers three kinds of loops: for loops, like this one; while and while let loops, that iterate until some condition is met; and finally the simple loop, which just iterates until you break out of it. Each of these kinds of loops encapsulates a particular pattern, so they are quite useful when writing code. But for MIR, we’d like to reduce all of these into one core concept.

A for loop in Rust works by converting a value into an iterator and then repeatedly calling next on that iterator. That means that we can rewrite the for loop we saw before into a while let loop that looks like this:

let mut iterator = vec.into_iter();
while let Some(elem) = iterator.next() {
    process(elem);
}

By applying this rewriting, we can remove all for loops, but that still leaves multiple kinds of loops. So next we can imagine rewriting all while let loops into a simple loop combined with a match:

let mut iterator = vec.into_iter();
loop {
    match iterator.next() {
        Some(elem) => process(elem),
        None => break,
    }
}

We’ve already eliminated two constructs (for loops and while loops), but we can go further still. Let’s turn from loops for a bit to look at the method calls that we see. In Rust, method calls like vec.into_iter() and iterator.next() are also a kind of syntactic sugar. These particular methods are defined in traits, which are basically pre-defined interfaces. For example, into_iter is a method in the IntoIterator trait. Types which can be converted into iterators implement that trait and define how the into_iter method works for them. Similarly, next is defined in the Iterator trait. When you write a method call like iterator.next(), the Rust compiler automatically figures out which trait the method belongs to based on the type of the iterator and the set of traits in scope. But if we prefer to be more explicit, we could instead invoke the methods in the trait directly, using function call syntax:

// Rather than `vec.into_iter()`, we are calling
// the function `IntoIterator::into_iter`. This is
// exactly equivalent, just more explicit.
let mut iterator = IntoIterator::into_iter(vec);
loop {
    // Similarly, `iterator.next()` can be rewritten
    // to make clear which trait the `next` method
    // comes from. We see here that the `.` notation
    // was also adding an implicit mutable reference,
    // which is now made explicit.
    match Iterator::next(&mut iterator) {
        Some(elem) => process(elem),
        None => break,
    }
}

At this point, we’ve managed to reduce the set of language features for our little fragment quite a bit: we now only use loop loops and we don’t use method calls. But we could reduce the set of concepts further if we moved away from loop and break and towards something more fundamental: goto. Using goto we could transform the previous code example into something like this:

let mut iterator = IntoIterator::into_iter(vec);

loop:
    match Iterator::next(&mut iterator) {
        Some(elem) => {
            process(elem);
            goto loop;
        }
        None => {
            goto break;
        }
    }

break:
    ...

We’ve gotten pretty far in breaking our example down into simpler constructs. We’re not quite done yet, but before we go further it’s worth stepping back a second to make a few observations:

Some MIR primitives are more powerful than the structured construct they replace. Introducing the goto keyword is a big simplification in one sense: it unifies and replaces a large number of control-flow keywords. goto completely replaces loop, break, continue, but it also allows us to simplify if and match as well (we’ll see more on match in particular in a bit). However, this simplification is only possible because goto is a more general construct than loop, and it’s something we would not want to introduce into the language proper, because we don’t want people to be able to write spaghetti-like code with complex control-flow that is hard to read and follow later. But it’s fine to have such a construct in MIR, because we know that it will only be used in particular ways, such as to express a loop or a break.

MIR construction is type-driven. We saw that all method calls like iterator.next() can be desugared into fully qualified function calls like Iterator::next(&mut iterator). However, doing this rewrite is only possible with full type information, since we must (for example) know the type of iterator to determine which trait the next method comes from. In general, constructing MIR is only possible after type-checking is done.

MIR makes all types explicit. Since we are constructing MIR after the main type-checking is done, MIR can include full type information. This is useful for analyses like the borrow checker, which require the types of local variables and so forth to operate, but also means we can run the type-checker periodically as a kind of sanity check to ensure that the MIR is well-formed.

Control-flow graphs

In the previous section, I presented a gradual “deconstruction” of a Rust program into something resembling MIR, but we stayed in textual form. Internally to the compiler, though, we never “parse” MIR or have it in textual form. Instead, we represent MIR as a set of data structures encoding a control-flow graph (CFG). If you’ve ever used a flow-chart, then the concept of a control-flow graph will be pretty familiar to you. It’s a representation of your program that exposes the underlying control flow in a very clear way.

A control-flow graph is structured as a set of basic blocks connected by edges. Each basic block contains a sequence of statements and ends in a terminator, which defines how the blocks are connected to one another. When using a control-flow graph, a loop simply appears as a cycle in the graph, and the break keyword translates into a path out of that cycle.
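
To make the shape of that data structure concrete, here is a small illustrative sketch in Python (rustc’s actual MIR types are in Rust and considerably richer; the names here are invented): a graph is just a list of basic blocks, each holding statements and one terminator that names its successor blocks.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Terminator:
    kind: str                                    # e.g. "goto", "switch", "return"
    targets: list = field(default_factory=list)  # indices of successor blocks

@dataclass
class BasicBlock:
    statements: list = field(default_factory=list)
    terminator: Optional[Terminator] = None

# A tiny loop: block 0 calls next() and either loops back to itself or exits to block 1.
cfg = [
    BasicBlock(statements=["tmp = Iterator::next(&mut iterator)"],
               terminator=Terminator("switch", targets=[0, 1])),
    BasicBlock(statements=[], terminator=Terminator("return")),
]

# A loop is simply a cycle in the graph; `break` is the edge that leaves it.
assert 0 in cfg[0].terminator.targets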

Here is the running example from the previous section, expressed as a control-flow graph:

[Image: Control-flow graph]

Building a control-flow graph is typically a first step for any kind of flow-sensitive analysis. It’s also a natural match for LLVM IR, which is also structured into control-flow graph form. The fact that MIR and LLVM correspond to one another fairly closely makes translation quite straight-forward. It also eliminates a vector for bugs: in today’s compiler, the control-flow graph used for analyses is not necessarily the same as the one which results from LLVM construction, which can lead to incorrect programs being accepted.

Simplifying match expressions

The example in the previous section showed how we can reduce all of Rust’s loops into, effectively, gotos in the MIR and how we can remove method calls in favor of explicit calls to trait functions. But it glossed over one detail: match expressions.

One of the big goals in MIR was to simplify match expressions into a very small core of operations. We do this by introducing two constructs that the main language does not include: switches and variant downcasts. Like goto, these are things that we would not want in the base language, because they can be misused to write bad code; but they are perfectly fine in MIR.

It’s probably easiest to explain match handling by example. Let’s consider the match expression we saw in the previous section:

match Iterator::next(&mut iterator) {
    Some(elem) => process(elem),
    None => break,
}

Here, the result of calling next is of type Option<T>, where T is the type of the elements. The match expression is thus doing two things: first, it is determining whether this Option was a value with the Some or None variant. Then, in the case of the Some variant, it is extracting the value elem out.

In normal Rust, these two operations are intentionally coupled, because we don’t want you to read the data from an Option unless it has the Some variant (to do otherwise would be effectively a C union, where reads are not checked for correctness).

In MIR, though, we separate the checking of the variant from the extracting of the data. I’m going to give the equivalent of MIR here first in a kind of pseudo-code, since there is no actual Rust syntax for these operations:

loop:
    // Put the value we are matching on into a temporary variable.
    let tmp = Iterator::next(&mut iterator);

    // Next, we "switch" on the value to determine which variant it has.
    switch tmp {
        Some => {
            // If this is a Some, we can extract the element out
            // by "downcasting"; this effectively asserts that
            // the value `tmp` is of the Some variant.
            let elem = (tmp as Some).0;

            // The user's original code:
            process(elem);

            goto loop;
        }
        None => {
            goto break;
        }
    }

break:
    ....

Of course, the actual MIR is based on a control-flow-graph, so it would look something like this:

[Image: Loop-break control-flow graph]

Explicit drops and panics

So now we’ve seen how loops, method calls, and matches are removed in the MIR and replaced with simpler equivalents. But there is still one key area that we can simplify. Interestingly, it’s something that happens almost invisibly in the code today: running destructors and cleanup in the case of a panic.

In the example control-flow-graph we saw before, we were assuming that all of the code would execute successfully. But in reality, we can’t know that. For example, any of the function calls that we see could panic, which would trigger the start of unwinding. As we unwind the stack, we would have to run destructors for any values we find. Figuring out precisely which local variables should be freed at each point of panic is actually somewhat complex, so we would like to make it explicit in the MIR: this way, MIR construction has to figure it out, but later passes can just rely on the MIR.

The way we do this is two-fold. First, we make drops explicit in the MIR. Drop is the term we use for running the destructor on a value. In MIR, whenever control-flow passes a point where a value should be dropped, we add in a special drop(...) operation. Second, we add explicit edges in the control-flow graph to represent potential panics, and the cleanup that we have to do.

Let’s look at the explicit drops first. If you recall, we started with an example that was just a for loop:

for elem in vec {
    process(elem);
}

We then transformed this for loop to explicitly invoke IntoIterator::into_iter(vec), yielding a value iterator, from which we extract the various elements. Well, this value iterator actually has a destructor, and it will need to be freed (in this case, its job is to free the memory that was used by the vector vec; this memory is no longer needed, since we’ve finished iterating over the vector). Using the drop operation, we can adjust our MIR control-flow-graph to show explicitly where the iterator value gets freed. Take a look at the new graph, and in particular what happens when a None variant is found:

[Image: Drop control-flow graph]

Here we see that, when the loop exits normally, we will drop the iterator once it has finished. But what about if a panic occurs? Any of the function calls we see here could panic, after all. To account for that, we introduce panic edges into the graph:

[Image: Panic control-flow graph]

Here we have introduced panic edges onto each of the function calls. By looking at these edges, you can see that if the call to next or process should panic, then we will drop the variable iterator; but if the call to into_iter panics, then the iterator hasn’t been initialized yet, so it should not be dropped.

One interesting wrinkle: we recently approved RFC 1513, which allows an application to specify that panics should be treated as calls to abort, rather than triggering unwinding. If the program is being compiled with “panic as abort” semantics, then this too would be reflected in the MIR, as the panic edges and handling would simply be absent from the graph.

Viewing MIR on play

At this point, we’ve reduced our example into something fairly close to what MIR actually looks like. If you’d like to see for yourself, you can view the MIR for our example on play.rust-lang.org. Just follow this link and then press the “MIR” button along the top. You’ll wind up seeing the MIR for several functions, so you have to search through to find the start of the example fn. (I won’t reproduce the output here, as it is fairly lengthy.) In the compiler itself, you can also enable graphviz output.

Drops and stack flags

By now I think you have a feeling for how MIR represents a simplified Rust. Let’s look at one example of where MIR will allow us to implement a long-awaited improvement to Rust: the shift to non-zeroing drop. This is a change to how we detect when destructors must execute, particularly when values are only sometimes moved. The change was proposed (and approved) in RFC 320, but it has yet to be implemented, primarily because doing so in the pre-MIR compiler was architecturally challenging.

To better understand what the feature is, consider this function send_if, which conditionally sends a vector to another thread:

fn send_if(data: Vec<Data>) {
    // If `some_condition` returns *true*, then ownership of `data`
    // moves into the `send_to_other_thread` function, and hence
    // we should not free it (the other thread will free it).
    if some_condition(&data) {
        send_to_other_thread(data);
    }

    post_send();

    // If `some_condition` returned *false*, the ownership of `data`
    // remains with `send_if`, which means that the `data` vector
    // should be freed here, when we return.
}

The key point, as indicated in the comments, is that we can’t know statically whether we ought to free data or not. It depends on whether we entered the if or not.

To handle this scenario today, the compiler uses zeroing. Or, more accurately, overwriting. What this means is that, if ownership of data is moved, we will overwrite the stack slot for data with a specific, distinctive bit pattern that is not a valid pointer (we used to use zeroes, so we usually call this zeroing, but we’ve since shifted to something different). Then, when it’s time to free data, we check whether it was overwritten. (As an aside, this is roughly the same thing that the equivalent C++ code would do.)
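To make that concrete, here is a small runnable model of the check-before-drop behaviour, using Option as a stand-in for the stack slot. This is only an illustration of the semantics; the compiler works directly on the raw stack slot, and the stub functions are hypothetical:

// Stubs standing in for the functions from the example above.
fn some_condition(_data: &Vec<String>) -> bool { true }
fn send_to_other_thread(_data: Vec<String>) {}
fn post_send() {}

fn send_if_model(data: Vec<String>) {
    // The Option plays the role of the stack slot: `None` corresponds to the
    // slot having been overwritten with the distinctive bit pattern.
    let mut slot = Some(data);
    if some_condition(slot.as_ref().unwrap()) {
        // Moving the value out "overwrites" the slot.
        send_to_other_thread(slot.take().unwrap());
    }
    post_send();
    // When `slot` goes out of scope, its destructor checks whether a value is
    // still present before freeing it -- the analogue of checking the slot
    // against the overwritten bit pattern.
}

fn main() {
    send_if_model(vec!["hello".to_string()]);
}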

But we’d like to do better than that. What we would like to do is to use boolean flags on the stack that tell us what needs to be freed. So that might look something like this:

fn send_if(data: Vec<Data>) {
    let mut data_is_owned = true;

    if some_condition(&data) {
        send_to_other_thread(data);
        data_is_owned = false;
    }

    post_send();

    // Free `data`, but only if we still own it:
    if data_is_owned {
        mem::drop(data);
    }
}

Of course, you couldn’t write code like this in Rust. You’re not allowed to access the variable data after the if, since it might have been moved. (This is yet another example of where we can do things in MIR that we would not want to allow in full Rust.)

Using boolean stack flags like this has a lot of advantages. For one, it’s more efficient: instead of overwriting the entire vector, we only have to set the one flag. But also, it’s easier to optimize: imagine that, through inlining or some other means, the compiler was able to determine that some_condition would always be true. In that case, standard constant propagation techniques would tell us that data_is_owned is always false, and hence we can just optimize away the entire call to mem::drop, resulting in tighter code. See RFC 320 for more details on that.
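For instance, if the compiler could prove that some_condition always returns true, constant propagation of data_is_owned would reduce the flag version to something like the following sketch (with hypothetical stubs; not actual compiler output):

struct Data;
fn send_to_other_thread(_data: Vec<Data>) {}
fn post_send() {}

// What send_if effectively becomes once some_condition is known to be true:
// the branch is gone, data_is_owned is constantly false at the end, and the
// conditional call to mem::drop disappears entirely.
fn send_if_specialized(data: Vec<Data>) {
    send_to_other_thread(data); // ownership always moves
    post_send();
}

fn main() {
    send_if_specialized(Vec::new());
}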

However, implementing this optimization properly on the current compiler architecture is quite difficult. With MIR, it becomes relatively straightforward. The MIR control-flow-graph tells us explicitly where values will be dropped and when. When MIR is first generated, we assume that dropping moved data has no effect – roughly like the current overwriting semantics. So this means that the MIR for send_if might look like this (for simplicity, I’ll ignore unwinding edges).

Non-zeroing drop example

We can then transform this graph by identifying each place where data is moved or dropped and checking whether any of those places can reach one another. In this case, the send_to_other_thread(data) block can reach drop(data). This indicates that we will need to introduce a flag, which can be done rather mechanically:

Non-zeroing drop with flags

Finally, we can apply standard compiler techniques to optimize this flag (but in this case, the flag is needed, and so the final result would be the same).

Just to drive home why MIR is useful, let’s consider a variation on the send_if function called send_if2. This variation checks some condition and, if it is met, sends the data to another thread for processing. Otherwise, it processes it locally:

fn send_if2(data: Vec<Data>) {
    if some_condition(&data) {
        send_to_other_thread(data);
        return;
    }

    process(&data);
}

This would generate MIR like:

Control-flow graph for send_if2

As before, we still generate the drops of data in all cases, at least to start. Since there are still moves that can later reach a drop, we could now introduce a stack flag variable, just as before:

send_if2 with flags

But in this case, if we apply constant propagation, we can see that at each point where we test data_is_owned, we know statically whether it is true or false, which would allow us to remove the stack flag and optimize the graph above, yielding this result:

Optimized send_if2
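In source terms, the optimized graph corresponds roughly to the sketch below: each path knows statically whether it still owns data, so no flag is needed, and the drop appears on exactly one path. The stubs are hypothetical, and the explicit drop call is shown only for clarity:

struct Data;
fn some_condition(_data: &Vec<Data>) -> bool { false }
fn send_to_other_thread(_data: Vec<Data>) {}
fn process(_data: &Vec<Data>) {}

fn send_if2_optimized(data: Vec<Data>) {
    if some_condition(&data) {
        send_to_other_thread(data); // `data` was moved away; nothing to drop on this path
        return;
    }
    process(&data);
    drop(data); // only this path still owns `data`, so only it frees it
}

fn main() {
    send_if2_optimized(Vec::new());
}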

Conclusion

I expect the use of MIR to be quite transformative in terms of what the compiler can accomplish. By reducing the language to a core set of primitives, MIR opens the door to a number of language improvements. We looked at drop flags in this post. Another example is improving Rust’s lifetime system to leverage the control-flow-graph for better precision. But I think there will be many applications that we haven’t foreseen. In fact, one such example has already arisen: Scott Olson has been making great strides developing a MIR interpreter miri, and the techniques it is exploring may well form the basis for a more powerful constant evaluator in the compiler itself.

The transition to MIR in the compiler is not yet complete, but it’s getting quite close. Special thanks go out to Simonas Kazlauskas (nagisa) and Eduard-Mihai Burtescu (eddyb), who have both had a particularly large impact on pushing MIR towards the finish line. Our initial goal is to switch our LLVM generation to operate exclusively from the MIR. Work is also proceeding on porting the borrow checker. After that, I expect we will port a number of other pieces on the compiler that are currently using the HIR. If you’d be interested in contributing, look for issues tagged with A-mir or ask around in the #rustc channel on IRC.

Categorieën: Mozilla-nl planet

Chris Cooper: RelEng & RelOps Weekly highlights - April 18, 2016

ma, 18/04/2016 - 20:28

SF2 Balrog character select portrait: “My update requests have your blood on them.”

This is release candidate week, traditionally one of the busiest times for releng. Your patience is appreciated.

Improve Release Pipeline:

Varun began work on improving Balrog’s backend to make multifile responses (such as GMP) easier to understand and configure. Historically it has been hard for releng to enlist much help from the community due to the access restrictions inherent in our systems. Kudos to Ben for finding suitable community projects in the Balrog space, and then more importantly, finding the time to mentor Varun and others through the work.

Improve CI Pipeline:

With build promotion well underway for the upcoming Firefox 46 release, releng is switching gears and jumping into the TaskCluster migration with both feet. Kim and Mihai will be working full-time on migration efforts, and many others within releng have smaller roles. There is still a lot of work to do just to migrate all existing Linux workloads into TaskCluster, and that will be our focus for the next 3 months.

Release:

We started doing the uplifts for the Firefox 46 release cycle late last week. Release candidate builds should be starting soon. As mentioned above, this is the first non-beta release of Firefox to use the new build promotion process.

Last week, we shipped Firefox and Fennec 45.0.2 and 46.0b10, Firefox 45.0.2esr and Thunderbird 45.0. For further details, check out the release notes.

See you next week!

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 18 Apr 2016

ma, 18/04/2016 - 20:00

Mozilla Weekly Project Meeting The Monday Project Meeting

Categorieën: Mozilla-nl planet

Nathan Froyd: rr talk post-mortem

ma, 18/04/2016 - 18:43

On Wednesday last week, I gave an invited talk on rr to a group of interested students and faculty at Rose-Hulman. The slides I used are available, though I doubt they make a lot of sense without the talk itself to go with them. Things I was pleased with:

  • I didn’t overrun my time limit, which was pretty satisfying.  I would have liked to have an hour (40 minutes talk/20 minutes for questions or overrun), but the slot was for a standard class period of 50 minutes.  I also wanted to leave some time for questions at the end, of which there were a few. Despite the talk being scheduled for the last class period of the day, it was well-attended.
  • The slides worked well.  My slides are inspired by Lawrence Lessig’s style of presenting, which I also used for my lightning talk in Orlando.  It forces you to think about what you’re putting on each slide and to make each slide count.  (I realize I didn’t use this for my Gecko onboarding presentation; I’m not sure if the Lessig method would work for things like that.  Maybe at the next onboarding…)
  • The level of sophistication was just about right, and I think the story approach to creating rr helped guide people through the presentation.  At least, it didn’t look as though many people were nodding off or completely confused, despite rr being a complex systems-heavy program.

Most of the above I credit to practicing the talk repeatedly.  I forget where I heard it, but a rule of thumb I use for presentations is 10 hours of prep time minimum (!) for every 1 hour of talk time.  The prep time always winds up helping: improving the material, refining the presentation, and boosting my confidence giving the presentation.  Despite all that practice, opportunities for improvement remain:

  • The talk could have used any amount of introduction on “here’s how debuggers work”.  This is kind of old hat to me, but I realized after the fact that to many students (perhaps even some faculty), blithely asserting that rr can start and stop threads at will, for instance, might seem mysterious.  A slide or two on the differences between how rr record works vs. how rr replay works and interacts with GDB would have been clarifying as well.
  • The above is an instance where a diagram or two might have been helpful.  I dislike putting diagrams in my talks because I dislike the thought of spending all that time to find a decent, simple app for drawing things, actually drawing them, and then exporting a non-awful version into a presentation.  It’s just a hurdle that I have to clear once, though, so I should just get over it.
  • Checkpointing and the actual mechanisms by which rr can run forwards or backwards in your program got short shrift and should have been explained in a little more detail.  (Diagrams again…)  Perhaps not surprisingly, the checkpointing material got added later during the talk prep and therefore didn’t get practiced as much.
  • The demo received very little practice (I’m sensing a theme here) and while it was able to show off a few of rr’s capabilities, it wasn’t very polished or impressive.  Part of that is due to rr mysteriously deciding to cease working on my virtual machine, but part of that was just my own laziness and assuming things would work out just fine at the actual talk.  Always practice!
Categorieën: Mozilla-nl planet

Allen Wirfs-Brock: Slide Bit: From Chaos

ma, 18/04/2016 - 18:25


At the beginning of a new computing era, it’s fairly easy to sketch a long-term vision of the era. All it takes is knowledge of current technical trajectories and a bit of imagination. But it’s impossible to predict any of the essential details of how it will actually play out.

Technical, business, and social innovation is rampant in the early years of a new era. Chaotic interactions drive the churn of innovation. The winners that will emerge from this churn are unpredictable. Serendipity is as much a factor as merit. But eventually, the stable pillars of the new era will emerge from the chaos. There are no guarantees of success, but for innovators, right now is your best opportunity to shape the ultimate form of the Ambient Computing Era.

Categorieën: Mozilla-nl planet
