
Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 2 weeks 3 days ago

Anthony Hughes: GPU Process in Beta 53

Tue, 11/04/2017 - 00:09

A few months ago I reported on an experiment I had run in Firefox Nightly 53 where I compared stability (i.e., crashes) between users with and without GPU Process. Since then we've fixed some bugs and are now days away from releasing GPU Process in Firefox 53. In anticipation of this moment I wanted to make sure we were ready, so I organized a repeat of the previous experiment, this time on our Beta users.

As anticipated, the results were different from what we saw on Nightly, but still favourable. I'd like to highlight some of those results in this post.

Anyone on Windows 7 with the Platform Update (or a later version of Windows), with a graphics card that has not previously been blacklisted, and with multi-process enabled is supported. As it stands this represents approximately 25% of users. Since GPU Process is enabled by default for these users, the experiment randomly selects half of them and turns off GPU Process by flipping a pref. There is a small amount of noise in the data (+/- ~0.5%) since flipping the pref doesn't actually turn off GPU Process until the first restart following the pref flip.

17% fewer driver related crashes

In the grand scheme of things, graphics driver crashes represented about 3.17% of all crashes reported. For those with GPU Process enabled the percentage was much lower (2.81%) compared to those with GPU Process disabled (3.54%). While these numbers are low, it's worth noting that not all users are created equal. Some users may experience more driver-related crashes than others based on a multitude of factors (how current the OS and drivers are, hardware, the mix of third-party software, etc.). It is conceivable, although not provable by this experiment, that the impact of this change would be more noticeable to users more prone to driver-related issues. It's also worth noting that we managed to make this change without introducing any new driver-related crashes in the UI process, which means Firefox should be much less prone to crashing entirely because of an interaction with the driver, although content may still be affected.

22% fewer Direct3D related crashes

Another category of crashes that sees improvements from GPU Process is D3D-related crashes. This category of crashes typically involves hardware-accelerated content on the web. In the past we'd see these occurring in the UI process, which resulted in Firefox crashing completely. Now, with GPU Process, we see about a one-fifth reduction in these crashes, and those that remain tend not to happen in the UI process anymore. The end-user impact is that you might have to reload a page, but Firefox dies less often.

11% fewer Direct3D accelerated video crashes

More stable hardware-accelerated video is another interesting benefit of GPU Process. We see about 11% fewer DXVA (DirectX Video Accelerator) related crashes in the test group with GPU Process enabled than in the test group with it disabled. The end result should be slightly fewer crashes that take down the browser when viewing hardware-accelerated video (think sites like YouTube).

Top crashes

Looking at the topcrash charts from Socorro shows expected movement in overall crash volumes. Browser topcrashes are down approximately 10% overall when GPU Process is enabled. Meanwhile, Content topcrashes are up approximately 8% and Plugin topcrashes are down 18%. Topcrashes in the GPU process only account for 0.13% of the overall topcrash volume. These numbers are in line with what we expected based on prior testing.

Telemetry

All of the previous metrics are based on anonymous data we receive from Socorro, i.e., crash reports submitted by users. This data is extremely useful for digging into very specific details about crashes, but it biases towards users who submit crash reports. Telemetry gives us less detailed information but is better at determining the broader impact of features since it represents a broader user population.

For the purposes of the experiment I compared the overall crash rate for each test group, where crash rate is defined as the number of crashes per 1,000 hours of aggregate browser usage across the entire test group population. The findings from the experiment are interesting in that we saw a slight increase in the Browser process crash rate (up 5.9% in the Enabled cohort), a smaller increase in the Content process crash rate (up 2.5% in the Enabled cohort), and a large decrease in the Plugin crash rate (down 20.6% in the Enabled cohort).

Overall, however, comparing the Enabled cohort to Firefox 52.0.2 shows results more in line with expectations: a 0.51% lower browser crash rate, an 18.1% higher content crash rate, and an 18% lower plugin crash rate. It's also worth noting that the crash rate for GPU Process crashes is relatively very low, at just 0.07 crashes per 1,000 usage hours. Put in other terms, that's one GPU Process crash every 14,285 hours compared to one Browser process crash every 353 hours.
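For reference, the conversion between those two framings is just arithmetic over the numbers quoted above:

gpu_rate = 0.07                         # GPU Process crashes per 1,000 usage hours
print(1000 / gpu_rate)                  # ~14,286 hours per GPU Process crash

browser_hours_per_crash = 353           # one Browser process crash every 353 hours
print(1000 / browser_hours_per_crash)   # ~2.83 Browser crashes per 1,000 hours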

Conclusions

In the end I think we have accomplished our goal: introducing a GPU process (a foundational piece of the Quantum project) without regressing overall product stability. We’ve reduced the overall volume of some categories of graphics-related crashes while making others less prone to taking down the entire browser. One of the primary fears was that in doing so we’d introduce new ways to take down the browser but I’ve not yet found evidence that this has happened.

Of course the numbers themselves don't tell the whole story. Now begins a deeper investigation of which crashes (i.e., signatures) have changed significantly so that we can improve GPU Process further, but I think we have an excellent foundation to build from.

And of course, we'll be watching what happens as we roll out to users in Release with Firefox 53 in the coming days.

Categories: Mozilla-nl planet

Daniel Pocock: If Alan Turing was born today, would he be a Muslim?

Mon, 10/04/2017 - 22:01

Alan Turing's name and his work are well known to anybody with a theoretical grounding in computer science. Turing developed his theories well before anybody invented file sharing, overclocking or mass surveillance. In fact, Turing was largely working in the absence of any computers at all: the transistor was only invented in 1947 and the microchip, the critical innovation that has made computing both affordable and portable, only came in 1960, six years after Turing's death. To this day, the Turing Test remains a well known challenge in the field of Artificial Intelligence. The most prestigious prize in computing, the A.M. Turing Award from the ACM, equivalent to the Nobel Prize in other fields of endeavour, is named in Turing's honour. (This year's award went to another British scientist, Sir Tim Berners-Lee, inventor of the World Wide Web.)


Potentially far more people know of Alan Turing for his groundbreaking work at Bletchley Park and the impact it had on cracking the Nazis' Enigma machines during World War 2, giving the Allies an advantage against Hitler.

While in his lifetime, Turing exposed the secret communications of the Nazis, in his death, he exposed something manifestly repugnant about his own society. Turing's challenges with his sexuality (or Britain's challenge with it) are just as well documented as his greatest scientific achievements. The 2014 movie The Imitation Game tells Turing's story, bringing together the themes from his professional and personal life.

Had Turing chosen to flee British persecution by going abroad, he would be a refugee in the same sense as any person who crosses the seas to reach Europe today to avoid persecution elsewhere.

Please prove me wrong

In March, I blogged about the problem of racism that plagues Britain today. While some may have felt the tone of the blog was quite strong, I was in no way pleased to find my position affirmed by the events that occurred in the two days after the blog appeared.

Two days and two more human beings (both immigrants and both refugees) subjected to abhorrent and unnecessary acts of abuse in Great Britain. Both cases appear to be fuelled directly by the evil that has been oozing out of number 10 Downing Street since they decided to have a referendum on "Brexit".

What stands out about these latest crimes is not that they occurred (this type of thing has been going on for months now) but certain contrasts between their circumstances and to a lesser extent, the fact they occurred immediately after Theresa May formalized Britain's departure from the EU. One of the victims was almost beaten to death by a street gang, while the other was abused by men wearing uniforms. One was only a child, while the other is a mature adult who has been in the UK almost three decades, completely assimilated into British life, working and paying taxes. Both were doing nothing out of the ordinary at the time the abuse occurred: one had engaged in a conversation at a bus stop, the other was on a routine visit to a Government office. There is no evidence that either of them had done anything to provoke or invite the abhorrent treatment meted out to them by the followers of Theresa May and Nigel Farage.

The first victim, on 30 March, was Stojan Jankovic, a refugee from Yugoslavia who has been in the UK for 26 years. He had a routine meeting at an immigration department office where he was ambushed, thrown in the back of a van and sent to rot in a prison cell by Theresa May's gestapo. On Friday, 31 March, it was Reker Ahmed, a 17 year old Kurdish-Iranian beaten to the brink of death by a crowd in south London.

One of the more remarkable facts to emerge about these two cases is that while Stojan Jankovic was basically locked up for no reason at all, the street thugs who the police apprehended for the assault on Ahmed were kept in a cell for less than 48 hours and released again on bail. While the harmless and innocent Jankovic was eventually released after a massive public outcry, he spent more time locked up than that gang of violent criminals who beat Reker Ahmed.

In other words, Theresa May and Nigel Farage's Britain has more concern for the liberty of violent criminals than somebody like Jankovic who has been working and paying taxes in the UK since before any of those street thugs were born.

A deeper insight into Turing's fate

With gay marriage having been legal in the UK for a number of years now, the rainbow flag flying at the Tate and Sir Elton John achieving a knighthood, it becomes difficult for people to relate to the world in which Turing and many other victims were collectively classified by their sexuality, systematically persecuted by the state and ultimately died far sooner than they should have. (Turing was only 41 when he died).

In fact, the cruel and brutal forces that ripped Turing apart (and countless other victims too) haven't dissipated at all, they have simply shifted their target. The slanderous comments insinuating that immigrants "steal" jobs or that Islam is about terrorism are eerily reminiscent of suggestions that gay men abduct young boys or work as Soviet spies. None of these lies has any basis in fact, but repeat them often enough in certain types of newspaper and these ideas spread like weeds.

In an ironic twist, Turing's groundbreaking work at Bletchley Park was founded on the contributions of Polish mathematicians; their own country having been the first casualty of Hitler, they were themselves both immigrants and refugees in Britain. Today, under the Theresa May/Nigel Farage leadership, Polish citizens have been subjected to regular vilification by the media and some have even been killed in the street.

It is said that a picture is worth a thousand words. When you compare these two pieces of propaganda: a 1963 article in the Sunday Mirror advising people "How to spot a possible homo" and a UK Government billboard encouraging people to be on the lookout for people who look different, could you imagine the same type of small-minded and power-hungry tyrants crafting them, singling out a minority so as to keep the public's attention in the wrong place?


Many people have noticed that these latest UK Government posters portray foreigners, Muslims and basically anybody who is not white using a range of characteristics found in anti-Semitic propaganda from the Third Reich:

Do the people who create such propaganda appear to have any concern whatsoever for the people they hurt? How would Alan Turing have felt when he encountered propaganda like that from the Sunday Mirror? Do posters like these encourage us to judge people by their gifts in science, the arts or sporting prowess or do they encourage us to lump them all together based on their physical appearance?

It is a basic expectation of scientific methodology that when you repeat the same experiment, you should get the same result. What type of experiment are Theresa May and Nigel Farage conducting and what type of result would you expect?

Playing ping-pong with children

If anybody has any doubt that this evil comes from the top, take a moment to contemplate the 3,000 children who were baited with the promise of resettlement from the Calais "jungle" camp into the UK under the Dubs amendment.

When French authorities closed the "jungle" in 2016, the children were lured out of the camp and left with nowhere to go as Theresa May and French authorities played ping-pong with them. Given that the UK parliament had already agreed they should be accepted, was there any reason for Theresa May to dig her heels in and make these children suffer? Or was she just trying to prove her credentials as somebody who can bastardize migrants just the way Nigel Farage would do it?

How do British politicians really view migrants?

Parliamentarian Keith Vaz, former chair of the Home Affairs Select Committee (responsible for security, crime, prostitution and similar things), was exposed with young men from eastern Europe, encouraging them to take drugs before ordering them: "Take your shirt off. I'm going to attack you." How many British MPs see foreigners this way? Next time you are groped at an airport security checkpoint, remember it was people like Keith Vaz and his committee who oversee those abuses, writing among other things that "The wider introduction of full-body scanners is a welcome development". No need to "take your shirt off" when these machines can look through it as easily as they can look through your children's underwear.

According to the World Health Organization, HIV/AIDS kills as many people as the September 11 attacks every single day. Keith Vaz apparently had no concern for the possibility he might spread this disease any further: the media reported he doesn't use any protection in his extra-marital relationships.

While Britain's new management continue to round up foreigners like Stojan Jankovic who have done nothing wrong, they chose not to prosecute Keith Vaz for his antics with drugs and prostitution.

Who is Britain's next Alan Turing?

Britain's next Alan Turing may not be a homosexual. He or she may have been a child turned away by Theresa May's spat with the French at Calais, a migrant bundled into a deportation van by the gestapo (who are just following orders) or perhaps somebody of Muslim appearance who is set upon by thugs in the street who have been energized by Nigel Farage. If you still have any uncertainty about what Brexit really means, this is it. A country that denies itself the opportunity to be great by subjecting itself to be ruled under the "divide and conquer" mantra of the colonial era.

Throughout the centuries, Britain has produced some of the most brilliant scientists of their time. Newton, Darwin and Hawking are just some of those who are even more prominent than Turing, household names around the world. One can only wonder what the history books will have to say about Theresa May and Nigel Farage however.

Next time you see a British policeman accosting a Muslim, whether it is at an airport, in a shopping centre, keeping Manchester United souvenirs or simply taking a photograph, spare a thought for Alan Turing and the era when homosexuals were their target of choice.

Categories: Mozilla-nl planet

Mozilla VR Blog: glTF Workflow for A-Saturday-Night

Mon, 10/04/2017 - 20:58
glTF Workflow for A-Saturday-Night

In A-Saturday-Night, we used the glTF format for all of the 3D content. glTF (gl Transmission Format) is a new 3D file format positioning itself as "the JPEG of 3D" for the Web. glTF has features such as JSON descriptions of entire scenes alongside binary-encoded data (e.g., vertex positions, UVs, normals) that requires no intermediate processing when uploading to the GPU.

glTF exporters and converters are fairly stable, but there are still some loose ends and things that work better than others (by the way, Khronos just hired somebody to improve the Blender exporter). In this post, I will explain which workflow was the most satisfactory for me while producing the assets for A-Saturday-Night, and I'll share some tips and tricks along the way. That's not to say this is the one way to work with glTF; it's just the way we're using it today.

We use Blender for creating the assets and COLLADA as an intermediate format, then convert them to glTF using collada2gltf. You can grab the collada2gltf binary pre-release builds at https://github.com/KhronosGroup/glTF/releases. Note that glTF v2.0 is here! Khronos is urging everyone to migrate to v2.0 quickly as there is no backwards compatibility with v1.0. A v2.0 branch in collada2gltf for updating to glTF 2.0 is almost complete.


Once I have the COLLADA file, I can convert it to glTF with collada2gltf:

collada2gltf.exe -f <assetname>.dae -o <assetname> -k

This will generate two files: assetname.gltf and assetname.bin. Copy both of them to your assets folder.

The -k command-line flag defines the glTF output file to use standard materials (e.g., constant, lambert, phong) instead of translating them to GLSL shaders. This is important right now since three.js has trouble loading glTF shaders (for example, issue #8869, issue #10549, and issue #1110). It also does not make sense to use fragment and vertex shaders for standard Lambert or Phong materials.

Then in our A-Frame scene, we can import the glTF file:

<a-scene>
  <a-assets>
    <a-asset-item id="head" src="head.gltf"></a-asset-item>
  </a-assets>
  <a-entity gltf-model="#head"></a-entity>
</a-scene>

I couldn't find a way to export Constant materials from Blender (the "Shadeless" checkbox in Blender's "Material" tab). For now, the only way I know is to edit the .gltf file by hand.

Replace the material's "technique". This example replaces "PHONG" with "CONSTANT", but we could overwrite Lambert materials as well. Replace this:

"KHR_materials_common": { "doubleSided": false, "jointCount": 0, "technique": "PHONG", "transparent": false, "values": { "ambient": [ 0, 0, 0, 1 ], "diffuse": "texture_asset", "emission": [ 0, 0, 0, 1 ], "shininess": 50, "specular": [ 0, 0, 0, 1 ] } }

with this:

"KHR_materials_common": { "technique": "CONSTANT", "values": { "emission": "texture_asset" } }

If our constant material does not have any texture, we can define a color as an [r, g, b, a] value instead (for example, "emission": [1, 0, 0, 1]).

Blender Tips

Here are some steps we can take in Blender before exporting to COLLADA that help ensure everything comes out okay:

  • Keep models and their textures in the same folder (we can separate different assets or kinds of assets in different folders)
  • Use relative paths in our textures: //texture.jpg instead of path/to/myproject/texture.jpg
  • In textures, specify the image nodes with the same name as the image file (without the extension)


To make sure our normals are exported okay and hard edges are preserved, in the "Object Data" tab, click on the "Add Custom Split Normals Data" button. Also make sure that the "Store Edge Crease" option is unchecked (as it is by default).

Before: (screenshot)

After: (screenshot)

  • In case something fails, we can try exporting to OBJ and importing it back to Blender:
      • Export the asset to OBJ
      • Create a new clean scene in Blender and import the OBJ
      • Export it to COLLADA
  • Below are my COLLADA exporter options, for simple assets (i.e., no animation, rigging, or hierarchies):

(screenshot: COLLADA exporter options)

Batch Convert with batchgltf

If we have a lot of models to convert, perhaps in different folders, calling collada2gltf for each one is inconvenient. So I made a tiny tool for batch converting .dae files to .gltf using collada2gltf.

(screenshot: the batchgltf tool)

The batch converter will scan all input folders for .dae files, convert them to .gltf, and save the result in the specified output folders. We can either have all the .gltf files saved in the same folder or in separate folders.
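Conceptually the conversion loop is straightforward. Here is a rough sketch of the idea in Python (not the actual batchgltf source, just an illustration reusing the -f/-o/-k flags shown earlier):

import os
import subprocess

def batch_convert(input_dirs, output_dir, collada2gltf="collada2gltf"):
    # Walk every input folder, run collada2gltf on each .dae file, and
    # write the .gltf/.bin output into the chosen output folder.
    for folder in input_dirs:
        for root, _dirs, files in os.walk(folder):
            for name in files:
                if not name.lower().endswith(".dae"):
                    continue
                src = os.path.join(root, name)
                dst = os.path.join(output_dir, os.path.splitext(name)[0])
                subprocess.check_call([collada2gltf, "-f", src, "-o", dst, "-k"])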

You can download the batchgltf converter from https://github.com/feiss/batchgltf. Take a look at the README for requirements and instructions. batchgltf.py can also work from the command line, so we could include it in a typical Webpack/Gulp workflow.

I will try to keep it updated with the latest collada2gltf version and add support for additional 3D formats. If you have any problems or would like to collaborate on this little tool, feel free to post an issue or send a pull request! I cannot guarantee it is free of bugs; use with care and keep a backup of your files ;)

By the way, you can find all the A-Saturday-Night assets on GitHub, in both glTF and OBJ formats.

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 10 Apr 2017

Mon, 10/04/2017 - 20:00

Mozilla Weekly Project Meeting: the Monday project meeting.

Categories: Mozilla-nl planet

Gervase Markham: Buzzword Bingo

Mon, 10/04/2017 - 14:39

This is a genuine question from a European Union public consultation:

Do you see the need for the definition of a reference architecture recommending a standardised high-level framework identifying interoperability interfaces and specific technical standards for facilitating seamless exchanges across data platforms?

Words fail me.

Categories: Mozilla-nl planet

Cameron Kaiser: TenFourFoxBox 1.0.1 available

Mon, 10/04/2017 - 05:23
As Steven Tyler howls he's Done With Mirrors from my G5's CD, TenFourFoxBox 1.0.1 is available for testing from this totally s3kr1t download location. This is a minor custodial release that bumps the "stealth" user agent to Firefox 52 on macOS Sierra and changes the official map app to OpenStreetMap which has rather better performance. Assuming no problems, it will go live later this week, probably Wednesday Pacific time.
Categories: Mozilla-nl planet

The Servo Blog: This Week In Servo 97

Mon, 10/04/2017 - 02:30

In the last week, we landed 119 PRs in the Servo organization’s repositories.

Planning and Status

Our overall roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions
  • emilio improved the performance of parsing numeric CSS values.
  • emilio replaced many ad-hoc checks in the CSS selector matching with more structured and consistent logic.
  • rlhunt added support for tiling gradients in WebRender.
  • mrobinson improved the logic for deciding when to clip content.
  • stshine made text correctly inherit overflow properties from its parent element.
  • nox shared more code between the websocket handshake and regular HTTP connection.
  • gw improved the performance of more simple border rendering cases.
  • bholley replaced a hashmap with a vector when storing pseudo-element styles.
  • pyfisch implemented CSS serialization for transform functions.
  • emilio added codegen for C++ destructors in rust-bindgen.
  • jdm enabled a bunch of no-longer-intermittently-failing WebGL tests.
  • ferjm blocked scripts from being loaded when the wrong MIME type is present.
  • emilio improved the performance of selector matching by avoiding unnecessary hashing.
  • cbrewster reduced the amount of cloning required for some session history code.
  • jdm added support for running web platform tests that require HTTPS.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Categories: Mozilla-nl planet

Mike Hoye: Planet: Secure For Now

Sat, 08/04/2017 - 04:25

Elevation

This is a followup to a followup – hopefully the last one for a while – about Planet. First of all, I apologize to the community for taking this long to resolve it. It turned out to have a lot more moving parts than were visible at first, and I didn’t know enough about the problem’s context to be able to solve it quickly. I owe a number of people an apology for that, first among them Ehsan who originally brought it to my attention.

The root cause of the problem was that HTTPlib2 in Python 2.x doesn’t – and apparently will never – support Server Name Indication, an important part of Transport Layer Security on shared hosts. This is probably not a big deal for anyone who doesn’t need to make legacy web-facing Python utilities interact securely with modernity, but… well. It me, as the kids say. Here we are.

For some context, our particular SSL problems manifested themselves with error messages like “Error urllib2 Python. SSL: TLSV1_ALERT_INTERNAL_ERROR ssl.c:590” behind the scenes and “internal error” in Planet proper, and I think it’s fair to feel like those messages are less than helpful. I also – no slight on my colleagues in this – don’t have a lot of say in the infrastructure Planet is running on, and it’s equally fair to say I’m not much of a programmer. Python feature-backporting is kind of a rodeo too, and I had a hard time mapping from “I’m using this version of Python on this OS” to “therefore, I have these tools available to me.” Ultimately this combination of OS constraints, library opacity and learning how (and if, where and when) SSL works (or doesn’t, and why) while working in the dated idioms of a language I only half-know didn’t add up to the smoothest experience.

I had a few options open to me, or at least I thought I did. Refactoring for Python 3.x was a non-starter, but I spent far more time than I should have trying to rewrite Planet to work directly with Requests. That turned out to be harder than I’d expected, largely because Planet code has a lot of expectations all over it about HTTPlib2 and how it behaves. I mistakenly thought re-engineering that behavior would be straightforward, and I definitely wasn’t expecting the surprising number of rusty edge cases I’d run into when my assumptions hit the real live web.

Partway through this exercise, in a curious set of coincidences, Mike Connor and I were talking about an old line – misquoted by John F. Kennedy as “Don’t ever take a fence down until you know the reason why it was put up” – by G. K. Chesterton, that went:

In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”

Infrastructure

One nice thing about ancient software is that it builds up these fences; they look like cruft, like junk you should tear out and throw away, until you really, really understand that your code, and you, are being tested. That conversation reminded me of this blog post from Joel Spolsky, about The Worst Thing you can do with software, which smelled suspiciously like what I was right in the middle of doing.

There’s a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:

It’s harder to read code than to write it.

This is why code reuse is so hard. This is why everybody on your team has a different function they like to use for splitting strings into arrays of strings. They write their own function because it’s easier and more fun than figuring out how the old function works.

As a corollary of this axiom, you can ask almost any programmer today about the code they are working on. “It’s a big hairy mess,” they will tell you. “I’d like nothing better than to throw it out and start over.”

Why is it a mess?

“Well,” they say, “look at this function. It is two pages long! None of this stuff belongs in there! I don’t know what half of these API calls are for.”

[…] I know, it’s just a simple function to display a window, but it has grown little hairs and stuff on it and nobody knows why. Well, I’ll tell you why: those are bug fixes. One of them fixes that bug that Nancy had when she tried to install the thing on a computer that didn’t have Internet Explorer. Another one fixes that bug that occurs in low memory conditions. Another one fixes that bug that occurred when the file is on a floppy disk and the user yanks out the disk in the middle. That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

Each of these bugs took weeks of real-world usage before they were found. The programmer might have spent a couple of days reproducing the bug in the lab and fixing it. If it’s like a lot of bugs, the fix might be one line of code, or it might even be a couple of characters, but a lot of work and time went into those two characters.

When you throw away code and start from scratch, you are throwing away all that knowledge. All those collected bug fixes. Years of programming work.

The first one of these fences I hit was when I discovered that HTTPlib2.Response objects are (somehow) case-insensitive dictionaries, because HTTP headers, per spec, are case-insensitive (though normal Python dictionaries are very much not, even though examining Response objects with basic tools like "print" makes them look just like a perfectly standard Python dict(), nothing to see here, move along. Which definitely has this kind of a vibe to it.) Another was hitting what might be a bug in Requests, where usually it gives you "200" as the HTTP "Everything's Fine" response, which Python will happily and silently turn into the integer HTTPlib2 is expecting, but sometimes gives you "200 OK", which: kaboom.

On the bright side, I did get to spend a few minutes reminiscing fondly to myself about working with Dave Humphrey way back in the day; in hindsight he warned me about this kind of thing when we were working through a similar problem. “It’s the Web. You get whatever you get, whenever you get it, and you’ll like it.”

I was mulling over all of this earlier this week when I decided to take the best (and also worst, and also last) available option: I threw out everything I’d done up to that point and just started lying to the program until it did what I wanted.

This gist is the meat of that effort; the rest of it (swap out the HTTPlib2 calls for Requests and update your error handling) is straightforward, and running in production now. It boils down to taking a Requests object, giving it an imaginary friend, and then standing it on that imaginary friend’s shoulders, throwing a trenchcoat over it and telling it to act like a grownup. The content both calls returns is identical but the supplementary data – headers, response codes, etc – isn’t, so using this technique as a shim potentially makes Requests a drop-in replacement for HTTPlib2. On the off chance that you’re facing the same problems Planet was facing, I hope it’s useful to you.
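The gist linked above has the real code; purely to illustrate the trenchcoat idea, a shim along these lines (names simplified and hypothetical where they differ from the gist) wraps a Requests response so callers expecting HTTPlib2's (response, content) pair keep working:

import requests

class TrenchcoatResponse(dict):
    # Hypothetical stand-in for HTTPlib2's Response: a dict of lower-cased
    # headers that also exposes a numeric .status attribute.
    def __init__(self, resp):
        super(TrenchcoatResponse, self).__init__(
            (k.lower(), v) for k, v in resp.headers.items())
        # Always hand back an integer, avoiding the "200 OK" surprise.
        self.status = int(resp.status_code)

def request(url, method="GET", headers=None):
    # Fetch with Requests (which supports SNI), but return the
    # (response, content) pair that HTTPlib2 callers expect.
    resp = requests.request(method, url, headers=headers or {})
    return TrenchcoatResponse(resp), resp.content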

Again, I apologize for the delay in sorting this out, and thank you for your patience.

Categories: Mozilla-nl planet

Aaron Klotz: Asynchronous Plugin Initialization: Requiem

Fri, 07/04/2017 - 23:00

My colleague bsmedberg is going to be removing asynchronous plugin initialization in bug 1352575. Sadly the feature never became solid enough to remain enabled on release, so we cut our losses and cancelled the project early in 2016. Now that code is just a bunch of dead weight. With the deprecation of non-Flash NPAPI plugins in Firefox 52, our developers are now working on simplifying the remaining NPAPI code as much as possible.

Obviously the removal of that code does not prevent me from discussing some of the more interesting facets of that work.

Today I am going to talk about how async plugin init worked when web content attempted to access a property on a plugin’s scriptable object, when that plugin had not yet completed its asynchronous initialization.

As described on MDN, the DOM queries a plugin for scriptability by calling NPP_GetValue with the NPPVpluginScriptableNPObject constant. With async plugin init, we did not return the true NPAPI scriptable object back to the DOM. Instead we returned a surrogate object. This meant that we did not need to synchronously wait for the plugin to initialize before returning a result back to the DOM.

If the DOM subsequently called into that surrogate object, the surrogate would be forced to synchronize with the plugin. There was a limit on how much fakery the async surrogate could do once the DOM needed a definitive answer – after all, the NPAPI itself is entirely synchronous. While you may question whether the asynchronous surrogate actually bought us any responsiveness, performance profiles and measurements that I took at the time did indeed demonstrate that the asynchronous surrogate did buy us enough additional concurrency to make it worthwhile. A good number of plugin instantiations were able to complete in time before the DOM had made a single invocation on the surrogate.

Once the surrogate object had synchronized with the plugin, it would then mostly act as a pass-through to the plugin’s real NPAPI scriptable object, with one notable exception: property accesses.

The reason for this is not necessarily obvious, so allow me to elaborate:

The DOM usually sets up a scriptable object as follows:

this.__proto__.__proto__.__proto__
  • Where this is the WebIDL object (ie, content’s <embed> element);
  • Whose prototype is the NPAPI scriptable object;
  • Whose prototype is the shared WebIDL prototype;
  • Whose prototype is Object.prototype.

NPAPI is reentrant (some might say insanely reentrant). It is possible (and indeed common) for a plugin to set properties on the WebIDL object from within the plugin’s NPP_New.

Suppose that the DOM tries to access a property on the plugin’s WebIDL object that is normally set by the plugin’s NPP_New. In the asynchronous case, the plugin’s initialization might still be in progress, so that property might not yet exist.

In the case where the property does not yet exist on the WebIDL object, JavaScript fails to retrieve an “own” property. It then moves on to the first prototype and attempts to resolve the property on that. As outlined above, this prototype would actually be the async surrogate. The async surrogate would then be in a situation where it must absolutely produce a definitive result, so this would trigger synchronization with the plugin. At this point the plugin would be guaranteed to have finished initializing.

Now we have a problem: JS was already examining the NPAPI scriptable object when it blocked to synchronize with the plugin. Meanwhile, the plugin went ahead and set properties (including the one that we’re interested in) on the WebIDL object. By the time that JS execution resumes, it would already be looking too far up the prototype chain to see those new properties!

The surrogate needed to be aware of this when it synchronized with the plugin during a property access. If the plugin had already completed its initialization (thus rendering synchronization unnecessary), the surrogate would simply pass the property access on to the real NPAPI scriptable object. On the other hand, if a synchronization was performed, the surrogate would first retry the WebIDL object by querying for the WebIDL object’s “own” properties, and return the own property if it now existed. If no own property existed on the WebIDL object, then the surrogate would revert to its “pass through all the things” behaviour.
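In rough pseudocode (a hypothetical Python-flavoured sketch, not the actual Gecko C++), the surrogate's property access looked something like this:

def surrogate_get_property(surrogate, name):
    if not surrogate.plugin_initialized():
        # Blocks until NPP_New completes; the plugin may set properties on the
        # WebIDL object while JS is already past it in the prototype chain.
        surrogate.synchronize_with_plugin()
        webidl = surrogate.webidl_object()
        if webidl.has_own_property(name):
            # The property now exists "below" us in the chain; return it so
            # the initial access doesn't fail.
            return webidl.get_own_property(name)
    # Initialization already complete, or no own property appeared:
    # pass the access through to the real NPAPI scriptable object.
    return surrogate.real_scriptable_object().get_property(name)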

If I hadn’t made the asynchronous surrogate scriptable object do that, we would have ended up with a strange situation where the DOM’s initial property access on an embed could fail non-deterministically during page load.

That’s enough chatter for today. I enjoy blogging about my crazy hacks that make the impossible, umm… possible, so maybe I’ll write some more of these in the future.

Categories: Mozilla-nl planet

Andrew Overholt: Quantum work

Fri, 07/04/2017 - 21:43

Quantum Curling

Last week we had a work week at Mozilla’s Toronto office for a bunch of different projects including Quantum DOM, Quantum Flow (performance), etc. It was great to have people from a variety of teams participate in discussions and solidify (and change!) plans for upcoming Firefox releases. There were lots of sessions going on in parallel and I wasn’t able to attend them all but some of the results were written up by the inimitable Ehsan in his fourth Quantum Flow newsletter.

Near the end of the week, Ehsan gave an impromptu walkthrough of the Gecko profiler. I’m planning on taking some of the tips he gave and that were discussed and put them onto the documentation for the profiler. If you’re interested in helping, please let me know!

The photo above is of us going curling at the High Park Curling Club. It was a lot of fun and I was happy that only one other person had ever curled before so it was a unique experience for almost everyone!

Categories: Mozilla-nl planet

Will Kahn-Greene: Everett v0.9 released and why you should use Everett

Fri, 07/04/2017 - 16:00
What is it?

Everett is a Python configuration library.

Configuration with Everett:

  • is composable and flexible
  • makes it easier to provide helpful error messages for users trying to configure your software
  • supports auto-documentation of configuration with a Sphinx autocomponent directive
  • supports easy testing with configuration override
  • can pull configuration from a variety of specified sources (environment, ini files, dict, write-your-own)
  • supports parsing values (bool, int, lists of things, classes, write-your-own)
  • supports key namespaces
  • supports component architectures
  • works with whatever you're writing--command line tools, web sites, system daemons, etc

Everett is inspired by python-decouple and configman.
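As a rough illustration of what that looks like in code (a minimal sketch based on my reading of the docs; the exact 0.9 names and signatures may differ):

from everett.manager import ConfigManager, ConfigOSEnv

# Pull configuration from the process environment.
config = ConfigManager([ConfigOSEnv()])

# Reads STATSD_PORT from the environment, parses it with int(), and raises
# a helpful configuration error if the value isn't parseable.
statsd_port = config('statsd_port', parser=int, default='8125')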

v0.9 released!

This release focused on overhauling the Sphinx extension. It now:

  • has an Everett domain
  • supports roles
  • indexes Everett components and options
  • looks a lot better

This was the last big thing I wanted to do before doing a 1.0 release. I consider Everett 0.9 to be a solid beta. Next release will be a 1.0.

Why you should take a look at Everett

At Mozilla, I'm using Everett 0.9 for Antenna which is running in our -stage environment and will go to -prod very soon. Antenna is the edge of the crash ingestion pipeline for Mozilla Firefox.

When writing Antenna, I started out with python-decouple, but I didn't like the way python-decouple dealt with configuration errors (it's pretty hands-off) and I really wanted to automatically generate documentation from my configuration code. Why write the same stuff twice, especially when it's a critical part of setting Antenna up and the part everyone will trip over first?

Here's the configuration documentation for Antenna:

http://antenna.readthedocs.io/en/latest/configuration.html#application

Here's the index which makes it easy to find things by component or by option (in this case, environment variables):

http://antenna.readthedocs.io/en/latest/genindex.html

When you configure Antenna incorrectly, it spits out an error message like this:

1 <traceback omitted, but it'd be here>
2 everett.InvalidValueError: ValueError: invalid literal for int() with base 10: 'foo'
3 namespace=None key=statsd_port requires a value parseable by int
4 Port for the statsd server
5 For configuration help, see https://antenna.readthedocs.io/en/latest/configuration.html

So what's here?:

  • Block 1 is the traceback so you can trace the code if you need to.
  • Line 2 is the exception type and message
  • Line 3 tells you the namespace, key, and parser used
  • Line 4 is the documentation for that specific configuration option
  • Line 5 is the "see also" documentation for the component with that configuration option

Is it beautiful? No. [1] But it gives you enough information to know what the problem is and where to go for more information.

Further, in Python 3, Everett will always raise a subclass of ConfigurationError so if you don't like the output, you can tailor it to your project's needs. [2]

First-class docs. First-class configuration error help. First-class testing. This is why I created Everett.

If this sounds useful to you, take it for a spin. It's almost a drop-in replacement for python-decouple [3] and os.environ.get('CONFIGVAR', 'default_value') style of configuration.

Enjoy!

[1] I would love some help here--making that information easier to parse would be great for a 1.0 release.

[2] Python 2 doesn't support exception chaining and I didn't want to stomp on the original exception thrown, so in Python 2, Everett doesn't wrap exceptions.

[3] python-decouple is a great project and does a good job at what it was built to do. I don't mean to demean it in any way. I have additional requirements that python-decouple doesn't do well and that's where I'm coming from.

Where to go for more

For more specifics on this release, see here: http://everett.readthedocs.io/en/latest/history.html#april-7th-2017

Documentation and quickstart here: https://everett.readthedocs.org/en/v0.9/

Source code and issue tracker here: https://github.com/willkg/everett

Categories: Mozilla-nl planet

Karl Dubost: [worklog] Edition 062. AOL, Etsy and redbubble

Fri, 07/04/2017 - 11:05
webcompat life
  • Working on a replacement for our minutes script.
  • Booking flights to June meeting in San Francisco and thinking about my strategy with regards to data. Each time I book flights I can't help thinking about the absurdity of airlines and booking strategy.
  • My server for this site and a couple of others has moved. For the last 15 years, my 1U Dell was hosted at W3C MIT. Thank you. I moved to Gandi in the meantime, but I'm also exploring something else.
webcompat issues

webcompat.com dev

Otsukare!

Categories: Mozilla-nl planet

Patrick Cloke: “Adding Two-Factor Authentication to Django (with django-allauth)” Lightning Talk

Fri, 07/04/2017 - 03:14

A bit late on this article, but better late than never! Back on October 27th, 2016 I gave a talk at Django Boston entitled “Adding Two-Factor Authentication to Django (with django-allauth)”. It was a ~20 minute talk on integrating the django-allauth-2fa package into a Django project. The package (which I …

Categories: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting Apr. 06, 2017

Thu, 06/04/2017 - 18:00

Reps Weekly Meeting Apr. 06, 2017: a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categories: Mozilla-nl planet


Patrick McManus: On The Merits of QUIC for HTTP

Thu, 06/04/2017 - 15:19

I am often asked why the Internet HTTP community is working on an IETF based QUIC when HTTP/2 (RFC 7540) is less than 2 years old. There are good answers! This work is essentially part two of a planned evolution to improve speed, reliability, security, responsiveness, and architecture. These needs were understood when HTTP/2 was standardized but the community decided to defer larger architectural changes to the next version. That next version is arriving now in the form of QUIC.

Flashback to HTTP/2 Development
HTTP/2 development consciously constrained its own work in two ways. One was to preserve HTTP/1 semantics, and the other was to limit the work to what could be done within a traditional TLS/TCP stack.

The choice to use TLS/TCP in HTTP/2 was not a foregone conclusion. During the pre-HTTP/2 bakeoff phase there was quite a bit of side-channel chatter about whether anyone would propose a different approach, such as the SCTP/UDP/[D]TLS stack RTC DataChannels was contemporaneously considering for standardization.

In the end folks felt that experience and running code were top tier properties for the next revision of HTTP. That argued for TCP/TLS. Using the traditional stack lowered the risk of boiling the ocean, improved the latency of getting something deployed, and captured the meaningful low hanging fruit of multiplexing and priority that HTTP/2 was focused on.

Issues that could not be addressed with TCP/TLS 1.2 were deemed out of scope for that round of development. The mechanisms of ALPN and Alternative Services were created to facilitate successor protocols.

I think our operational experience has proven HTTP/2's choices to be a good decision. It is a big win for a lot of cases, it is now the majority HTTPS protocol, and I don't think we could have done too much better within the constraints applied.

The successor protocol turns out to be QUIC. It is a good sign for the vibrancy of HTTP/2 that the QUIC charter explicitly seeks to map HTTP/2 into its new ecosystem. QUIC is taking on exactly the items that were foreseen when scoping HTTP/2.

This post aims to highlight the benefits of QUIC as it applies to the HTTP ecosystem. I hope it is useful even for those that already understand the protocol mechanics. It, however, does not attempt to fully explain QUIC. The IETF editor's drafts or Jana's recent tutorial might be good references for that.

Fixing the TCP In-Order Penalty
The chief performance frustration with HTTP/2 happens during higher than normal packet loss. The in-order property of TCP spans multiplexed HTTP messages. A single packet loss in one message prevents subsequent unrelated messages from being delivered until the loss is repaired. This is because TCP delays received data in order to provide in-order delivery of the whole stream.

For a simple example imagine images A and B each delivered in two parts in this order: A1, A2, B1, and B2. If only A1 were to suffer a packet loss under TCP that would also delay A2, B1, and B2. While image A is unavoidably damaged by this loss, image B is also impacted even though all of its data was successfully transferred the first time.

This is something that was understood during the development of RFC 7540 and was correctly identified as a tradeoff favoring connections with lower loss rates. Our community has seen some good data on how this has played out "in the wild" recently from both Akamai and Fastly. For most HTTP/2 connections this strategy has been beneficial, but there is a tail population that actually regresses in performance compared to HTTP/1 under high levels of loss because of this.

QUIC fixes this problem by multistreaming onto one connection in a way very familiar from RFC 7540, but with an added twist. It also gives each stream its own ordering context, analogous to a TCP sequence number. These streams can be delivered independently to the application because in QUIC, in-order delivery only applies to each stream instead of to the whole connection.
I believe fixing this issue is the highest impact feature of QUIC.
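As a toy illustration (plain Python, nothing QUIC-specific), consider the A1/A2/B1/B2 example above when A1 is lost and retransmitted last:

arrivals = {"A2", "B1", "B2"}                      # A1 still missing
expected = {"A": ["A1", "A2"], "B": ["B1", "B2"]}

def deliverable_tcp(received):
    # One shared ordering context: stop at the first gap in A1, A2, B1, B2.
    out = []
    for part in ["A1", "A2", "B1", "B2"]:
        if part not in received:
            break
        out.append(part)
    return out

def deliverable_quic(received):
    # Per-stream ordering: each stream only waits on its own gaps.
    out = []
    for stream, parts in expected.items():
        for part in parts:
            if part not in received:
                break
            out.append(part)
    return out

print(deliverable_tcp(arrivals))   # [] -- everything stalls behind A1
print(deliverable_quic(arrivals))  # ['B1', 'B2'] -- stream B is unaffected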
Starting Faster
HTTP/2 starts much faster than HTTP/1 due to its ability to send multiple requests in the first round trip and its superior connection management results in fewer connection establishments. However, new connections using the TCP/TLS stack still incur 2 or 3 round trips of delay before any HTTP/2 data can be sent.
In order to address this QUIC eschews layers in favor of a component architecture that allows sending encrypted application data immediately. QUIC still uses TLS for security establishment and has a transport connection concept, but these components are not forced into layers that require their own round trips to be initialized. Instead the transport session, the HTTP requests, and the TLS context are all combined into the first packet flight when doing session resumption (i.e. you are returning to a server you have seen before). A key part of this is integration with TLS 1.3  and in particular the 0-RTT (aka Early Data) handshake feature.

The HTTP/2 world, given enough time, will be able to capture some of the same benefits using both TLS 1.3 and TCP Fast Open. Some of that work is made impractical by dependencies on Operating System configurations and the occasional interference from middleboxes unhappy with TCP extensions.

However, even at full deployment of TLS 1.3 and TCP Fast Open, that approach will lag QUIC performance because QUIC can utilize the full flight of data in the first round trip while Fast Open limits the amount of data that can be carried to the roughly 1460 bytes available in a single TCP SYN packet. That packet also needs to include the TLS Client Hello and HTTP SETTINGS information along with any HTTP requests. That single packet runs out of room quickly if you need to encode more than one request or any message body. Any excess needs to wait a round trip.

Harmonizing with TLS
When used with HTTP/1 and HTTP/2, TLS generally operates as a simple pipe. During encryption cleartext streams of bytes go in one side and a stream of encrypted bytes come out the other and are then fed to TCP. The reverse happens when decrypting. Unfortunately, the TLS layer operates internally on multi-byte records instead of a byte stream and the mismatch creates a significant performance problem.

The records can be up to 64KB and a wide variety of sizes are used in practice. In order to enforce data integrity, one of the fundamental security properties of TLS, the entire record must be received before it can be decoded. When the record spans multiple packets a problem similar to the "TCP in-order penalty" discussed earlier appears.

A loss to any packet in the record delays decoding and delivery of the other correctly delivered packets while the loss is repaired. In this case the problem is actually a bit worse as any loss impacts the whole record not just the portion of the stream following the loss. Further, because application delivery of the first byte of the record is always dependent on the receipt of the last byte of the record simple serialization delays or common TCP congestion-control stalls add latency to application delivery even with 0% packet loss.

The 'obvious fix' of placing an independent record in each packet turns out to work much better with QUIC than TCP. This is because TCP's API is a simple byte stream. Applications, including TLS, have no sense of where packets begin or end and have no reliable control over it. Furthermore, TCP proxies or even HTTP proxies commonly rearrange TCP packets while leaving the byte stream intact (a proof of the practical value of end to end integrity protection!).

Even the reductio ad absurdum solution of 1-byte records does not work because the record overhead creates multibyte sequences that will still span packet boundaries. Such a naive approach would also drown in its own overhead.

QUIC shines here by using its component architecture rather than the traditional layers. The QUIC transport layer receives plaintext from the application and consults its own transport information regarding packet numbers, PMTU, and the TLS keying information. It combines all of this to form the encrypted packets that can be decrypted atomically with the equivalent of one record per UDP packet. Intermediaries are unable to mess with the framing because even the transport layer is integrity protected in QUIC during this same process - a significant security bonus! Any loss events will only impact delivery of the lost packet.

No More TCP RST Data Loss
As many HTTP developers will tell you, TCP RST is one of the most painful parts of the existing ecosystem. Its pain comes in many forms, but the data loss is the worst.

The circumstances for an operating system generating a RST and how they respond to them can vary by implementation. One common scenario is a server close()ing a connection that has received another request that the HTTP server has not yet read and is unaware of. This is a routine case for HTTP/1 and HTTP/2 applications. Most kernels will react to the closing of a socket with unconsumed data by sending a RST to the peer.

That RST will be processed out of order when received. In practice this means that if the original client does a recv() to consume ordinary data that was sent by the server before the server invoked close(), the client will incur an unrecoverable failure if the RST has also already arrived, and that data can never be read. This is true even if the kernel has sent a TCP ack for it! The problem gets worse when combined with larger TLS record sizes, as often the last bit of data is what is needed to decode the whole record, and substantial data loss of up to 64KB occurs.

The QUIC RST equivalent is not part of the orderly shutdown of application streams and it is not expected to ever force the loss of already acknowledged data.
Better Responsiveness through Buffer Management
The primary goal of HTTP/2 was the introduction of multiplexing into a single connection and it was understood that you cannot have meaningful multiplexing without also introducing a priority scheme. HTTP/1 illustrates the problem well - it multiplexes the path through unprioritized TCP parallelism which routinely gives poor results. The final RFC contained both multiplexing and priority mechanisms which for the most part work well.

However, successful prioritization requires you to buffer before serializing the byte stream into TLS and TCP, because once sent to TCP those messages cannot be reordered if higher priority data presents itself. Unfortunately, high latency TCP requires a significant amount of buffering at the socket layer in order to run as fast as possible. These two competing interests make it difficult to judge how much buffering an HTTP/2 sender should use. While there are some Operating System specific oracles that give some clues, TCP itself does not provide any useful guidance to the application for reasonably sizing its socket buffers.

This combination has made it challenging for applications to determine the appropriate level of socket buffering and in turn they sometimes have overbuffered in order to make TCP run at line rate. This results in poor responsiveness to the priority schedule and the inability for a server to recognize individual streams being canceled (which happens more than you may think) because they have already been buffered.

The blending of transport and application components creates the opportunity for QUIC implementations to do a better job on priority. They do this by buffering application data with its priority information outside of the transmission layer. This allows the late binding of the packet transmission to the data that is highest priority at that moment.

Relatedly, whenever a retransmission is required QUIC retransmits the original data in one or more new packets (with new packet identifiers) instead of retransmitting a copy of the lost packet as TCP does. This creates an opportunity to reprioritize, or even drop canceled streams, during retransmission. This compares favorably to TCP which is sentenced to retransmitting the oldest (and perhaps now irrelevant) data first due to its single sequence number and in-order properties.
UDP means Universal DePloyment
QUIC is not inherently either a user space or kernel space protocol - it is quite possible to deploy it in either configuration. However, UDP based applications are often deployed in userspace configurations and do not require special configurations or permissions to run there. It is fair to expect a number of user space based QUIC implementations.

Time will tell exactly what that looks like, but I anticipate it will be a combination of self-updating evergreen applications, such as web servers and browsers, and a small set of well-maintained libraries playing a role akin to the one openssl plays in distributions.

Decoupling functionality traditionally performed by TCP from the operating system creates an opportunity for deploying software faster, updating it more regularly, and iterating on its algorithms in a tight loop. The long replacement and maintenance schedules of operating systems, sometimes measured in decades, create barriers to deploying new approaches to networking.

This new freedom applies both to QUIC itself and to some of its embedded techniques that have equivalents in the TCP universe which have traditionally been difficult to deploy. Thanks to user space distribution, packet pacing, fast open, and loss detection improvements like RACK will see greater deployment than ever before.

Userspace will mean faster evolution in networking and greater distribution of the resulting best practices.
  ---

The IETF QUIC effort is informed by Google's efforts on its own preceding protocol. While it is not the same effort, it does owe a debt to a number of Google folk. I'm not privy to all of the internal machinations at G but, at the inevitable risk of omitting an important contribution, it is worth calling out Jim Roskind, Jana Iyengar, Ian Swett, Adam Langley, and Ryan Hamilton both for their work and their willingness to evangelize and discuss it with me. Thanks! We're making a better Internet together.

This post was originally drafted as a message to the IETF HTTP Working Group by Patrick McManus.
Categorieën: Mozilla-nl planet

The Mozilla Blog: It’s Time for Open Citations

do, 06/04/2017 - 14:26
Mozilla is signing on to the Initiative for Open Citations (I4OC). We believe open data is integral to a healthy Internet — and to a healthy society.

Today, Mozilla is announcing support for the Initiative for Open Citations (I4OC), an effort to make citation data from scholarly publications open and freely accessible.

We’re proud to stand alongside the Wikimedia Foundation, the Public Library of Science and a network of other like-minded institutions, publishers and researchers who believe knowledge should be free from restrictions. We want to create a global, public web of citation data — one that empowers teaching, learning, innovation and progress.

Currently, much of the citation data in scholarly publications is not easily accessible. From geology and chemistry journals to papers on psychology, the citations within are often subject to restrictive and confusing licenses which limit discovery and dissemination of published research. Further, citation data is often not machine readable — meaning we can’t use computer programs to parse the data.

Mozilla understands that in some cases, scholarly publications themselves must be protected or closed in order to respect proprietary ecosystems and business models. But citations are snippets of knowledge that allow everyone to engage with, evaluate and build upon ideas. When citations are inaccessible, the flow of knowledge stalls. Innovation is chilled. The results are damaging.

At Mozilla, we believe openness is a core component of a healthy Internet and a healthy society. Whether open citations or open source code, openness allows people to learn from each other, share ideas and foster collaboration.

I4OC details

I4OC seeks to uphold this openness in the realm of scholarly research. Specifically, I4OC calls for citation data that is structured, separable and open. That means:

  • Structured ensures citation data is presented in a universal, machine-readable format. This empowers computer programs to unpack and draw connections between different research areas.
  • Separable ensures citation data can be accessed and analyzed without the need to comb through entire journal articles or books. This allows people to navigate research easily, the same way people navigate the web.
  • Open ensures citation data is free to access and free to reuse. This allows anyone — students, teachers, entrepreneurs, autodidacts — to benefit.

I4OC is asking scholarly publishers to deposit their citations in Crossref and enable reference distribution. Crossref is a nonprofit organization developing infrastructure and services for scholarly publishers.
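
To illustrate what structured, separable and open citation data enables in practice, openly deposited references can already be fetched from Crossref's public REST API. A rough sketch follows; the endpoint shape and field names reflect the public API but should be treated as assumptions, and it relies on the reqwest and serde_json crates:

    // Cargo.toml (assumption):
    //   reqwest = { version = "0.11", features = ["blocking", "json"] }
    //   serde_json = "1"
    use serde_json::Value;

    // Fetch the openly deposited reference list for one DOI from Crossref's
    // public REST API. The "message"/"reference" field names follow the public
    // API; publishers that have not opened their references return no entries.
    fn open_references(doi: &str) -> Result<Vec<Value>, Box<dyn std::error::Error>> {
        let url = format!("https://api.crossref.org/works/{}", doi);
        let body: Value = reqwest::blocking::get(&url)?.json()?;
        let refs = body["message"]["reference"]
            .as_array()
            .cloned()
            .unwrap_or_default();
        Ok(refs)
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Pass any DOI whose publisher deposits open references.
        let doi = std::env::args().nth(1).expect("usage: open_refs <doi>");
        for r in open_references(&doi)? {
            println!("{}", r["unstructured"].as_str().unwrap_or("(structured reference)"));
        }
        Ok(())
    }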

We’re already seeing progress. Since the start of I4OC in 2016, the share of Crossref’s 35 million articles with citation data whose references are openly available has jumped from 1% to over 40%.

That’s heartening, but there’s still a long way to go. We want that number to reach 100%. We want openness to become the norm, not the exception. That’s why Mozilla is supporting I4OC. And that’s why Mozilla is committed to teaching and defending openness online. Want to learn more? Read Mozilla’s Internet Health Report to see how openness makes the Internet exceptional. And sign our petition to open up access to federally-funded research in the U.S.

Photo: Sofia University library // Anastas Tarpanov // CC BY-SA 2.0

The post It’s Time for Open Citations appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Niko Matsakis: Rayon 0.7 released

do, 06/04/2017 - 06:00

We just released Rayon 0.7. This is a pretty exciting release, because it marks the official first step towards Rayon 1.0. In addition, it marks the first release where Rayon’s parallel iterators reach “feature parity” with the standard sequential iterators! To mark the moment, I thought I’d post the release notes here on the blog:

This release marks the first step towards Rayon 1.0. For best performance, it is important that all Rayon users update to at least Rayon 0.7. This is because, as of Rayon 0.7, we have taken steps to ensure that, no matter how many versions of rayon are actively in use, there will only be a single global scheduler. This is achieved via the rayon-core crate, which is being released at version 1.0, and which encapsulates the core scheduling APIs like join(). (Note: the rayon-core crate is, to some degree, an implementation detail, and not intended to be imported directly; its entire API surface is mirrored through the rayon crate.)

We have also done a lot of work reorganizing the API for Rayon 0.7 in preparation for 1.0. The names of iterator types have been changed and reorganized (but few users are expected to be naming those types explicitly anyhow). In addition, a number of parallel iterator methods have been adjusted to match those in the standard iterator traits more closely. See the “Breaking Changes” section below for details.

Finally, Rayon 0.7 includes a number of new features and new parallel iterator methods. As of this release, Rayon’s parallel iterators have officially reached parity with sequential iterators – that is, every sequential iterator method that makes any sense in parallel is supported in some capacity.

New features and methods
  • The internal Producer trait now features fold_with, which enables better performance for some parallel iterators.
  • Strings now support par_split() and par_split_whitespace().
  • The Configuration API is expanded and simplified:
    • num_threads(0) no longer triggers an error
    • you can now supply a closure to name the Rayon threads that get created by using Configuration::thread_name.
    • you can now inject code when Rayon threads start up and finish
    • you can now set a custom panic handler to handle panics in various odd situations
  • Threadpools are now able to more gracefully put threads to sleep when not needed.
  • Parallel iterators now support find_first(), find_last(), position_first(), and position_last() (see the sketch after this list).
  • Parallel iterators now support rev(), which primarily affects subsequent calls to enumerate().
  • The scope() API is now considered stable (and part of rayon-core).
  • There is now a useful rayon::split function for creating custom Rayon parallel iterators.
  • Parallel iterators now allow you to customize the min/max number of items to be processed in a given thread. This mechanism replaces the older weight mechanism, which is deprecated.
  • sum() and friends now use the standard Sum traits
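
A small sketch exercising a few of these additions (parallel whitespace splitting, find_first(), and the Sum-based sum() that now needs a type annotation), assuming rayon 0.7 with the prelude's extension traits in scope:

    extern crate rayon; // rayon = "0.7"
    use rayon::prelude::*;

    fn main() {
        let text = "the quick brown fox jumps over the lazy dog";

        // par_split_whitespace(): a parallel iterator over the words.
        let longest = text.par_split_whitespace().map(|w| w.len()).max();
        println!("longest word: {:?}", longest);

        let v: Vec<i64> = (0..1_000_000).collect();

        // find_first(): like find(), but deterministically returns the match
        // that comes earliest in the original order.
        let first_big = v.par_iter().find_first(|&&x| x > 999_000);
        println!("first element over 999000: {:?}", first_big);

        // sum() now goes through the standard Sum trait, so the result type
        // must be annotated (here via the turbofish).
        let total = v.par_iter().map(|&x| x * 2).sum::<i64>();
        println!("total: {}", total);
    }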
Breaking changes

In the move towards 1.0, there have been a number of minor breaking changes:

  • Configuration setters like Configuration::set_num_threads() lost the set_ prefix, and hence become something like Configuration::num_threads().
  • Configuration getters are removed
  • Iterator types have been shuffled around and exposed more consistently:
    • combinator types live in rayon::iter, e.g. rayon::iter::Filter
    • iterators over various types live in a module named after their type, e.g. rayon::slice::Windows
  • When doing a sum() or product(), type annotations are needed for the result since it is now possible to have the resulting sum be of a type other than the value you are iterating over (this mirrors sequential iterators).
Experimental features

Experimental features require the use of the unstable feature. Their APIs may change or disappear entirely in future releases (even minor releases) and hence they should be avoided for production code.

  • We now have (unstable) support for futures integration. You can use Scope::spawn_future or rayon::spawn_future_async().
  • There is now a rayon::spawn_async() function for using the Rayon threadpool to run tasks that do not have references to the stack.
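
A minimal sketch of spawn_async() as described above, assuming rayon 0.7 built with the unstable feature; the exact signature is an assumption:

    // Cargo.toml (assumption): rayon = { version = "0.7", features = ["unstable"] }
    extern crate rayon;

    fn main() {
        // spawn_async() hands a 'static closure to the global Rayon pool and
        // returns immediately; the closure may not borrow from the caller's stack.
        let data = vec![1, 2, 3];
        rayon::spawn_async(move || {
            let sum: i32 = data.iter().sum();
            println!("background sum: {}", sum);
        });

        // Give the detached task a moment to run before the process exits;
        // a real program would coordinate through channels instead.
        std::thread::sleep(std::time::Duration::from_millis(50));
    }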
Contributors

Thanks to the following people for their contributions to this release:

  • @Aaronepower
  • @ChristopherDavenport
  • @bluss
  • @cuviper
  • @froydnj
  • @gaurikholkar
  • @hniksic
  • @leodasvacas
  • @leshow
  • @martinhath
  • @mbrubeck
  • @nikomatsakis
  • @pegomes
  • @schuster
  • @torkleyy
Categorieën: Mozilla-nl planet

William Lachance: Easier reproduction of intermittent test failures in automation

wo, 05/04/2017 - 22:14

As part of the Stockwell project, I’ve been hacking on ways to make it easier for developers to diagnose failures of our tests in automation. It’s often very difficult to locally reproduce an intermittent failure we see in Treeherder, since the environment is so different, so figuring out workflows that support debugging of problems remotely is essential.

One option that rolled out last year was the so-called one-click loaner, which enabled developers to sign out a virtual machine instance identical to the ones used to run unit tests (at least if the tests are running on Taskcluster, which is increasingly often the case), then execute their particular test case with whatever extra debugging options they would find useful. This is a big step forward, but it’s still quite a bit of hassle, since it requires a bunch of manual work on the part of the developer to interact with the instance.

What if we could just re-run the particular test an arbitrary number of times with whatever options we wanted, simply by clicking on a few buttons on Treeherder? I’ve been exploring this for the first few months of 2017 and I’ve come up with a prototype which I think is ready for people to start playing with.

The user interface to this is pretty straightforward. Just find a job you want to retrigger in Treeherder:

Then select the ’…’ option in the panel below and press “Custom Action…”:

You should get a small piece of JSON to edit, which corresponds to the configuration for the retriggered job:

The main field to edit is “path”. You should set this to the name of the test you want to try retriggering. For example dom/animation/test/css-transitions/test_animation-ready.html. You can also set custom Firefox preferences and environment variables, to turn on different types of debugging.

Unfortunately as usual with a new feature at Mozilla, there are a bunch of limitations and caveats:

  • This depends on functionality that’s only in Taskcluster, so buildbot jobs are exempt.
  • No support for Android yet. In combination with the above limitation, this implies that this functionality only works on Linux (at least until other platforms are moved to Taskcluster, which probably isn’t that far off).
  • Browser chrome tests fail in mysterious ways if run repeatedly (bug 1347654).
  • Only reftest and mochitest are currently supported. XPCShell support is blocked by the lack of support in its harness for running a job repeatedly (bug 1347696). Web Platform Tests need the requisite support in mozharness for just setting up the tests without running them — the same issue that prevents us from debugging such tests with a one-click loaner (bug 1348833).

Aside from fixing the above limitations, the following features would also be really nifty to have:

  • Ability to trigger a custom job as part of a try push (i.e. not needing to retrigger off an existing job)
  • Run these jobs under rr, and provide a way to login and interactively debug when the problem is actually reproduced.

I am actually in the process of moving to another team @ Mozilla (more on that in another post), so I probably won’t have a ton of time to work on the above — but I’d be happy to help anyone who’s interested in developing this idea further.

A special shout out to the Taskcluster team for helping me with the development of this feature: in particular the action task implementation from Jonas Finnemann Jensen that made it possible to implement this in the first place.

Categorieën: Mozilla-nl planet

Air Mozilla: The Joy of Coding - Episode 96

wo, 05/04/2017 - 19:00

The Joy of Coding - Episode 96: mconley livehacks on real Firefox bugs while thinking aloud.

Categorieën: Mozilla-nl planet
