Mozilla Nederland

Yunier José Sosa Vázquez: New version of Firefox arrives with improvements to video playback and much more

Mozilla planet - wo, 19/10/2016 - 00:24

Last Tuesday, September 19, Mozilla released a new version of its browser, and we are only now sharing its new features and download links with you. We apologize to everyone for any inconvenience the delay may have caused.

What's new

The password manager has been updated to allow HTTPS pages to use saved HTTP credentials. This is one more way to support Let’s Encrypt and help users transition to a more secure web.
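The change can be pictured as a fallback in the credential lookup; the sketch below is illustrative only (store layout and function names are invented, not Firefox's implementation):

```python
# Hypothetical credential store keyed by (scheme, host). The new rule:
# an HTTPS page may reuse a credential that was saved for the same host
# over plain HTTP, easing the migration to HTTPS.
saved_logins = {
    ("http", "example.com"): ("alice", "s3cret"),
}

def find_login(scheme, host):
    """Return the saved credential for this origin, letting HTTPS
    pages fall back to credentials stored for the HTTP origin."""
    login = saved_logins.get((scheme, host))
    if login is None and scheme == "https":
        # New behaviour: HTTPS may use the credential saved for HTTP.
        login = saved_logins.get(("http", host))
    return login
```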

Reader Mode has gained several features that improve reading and listening: controls to adjust the text width and the line spacing, and a narration feature in which the browser reads the page content aloud. These are, without a doubt, features that will improve the experience for users with visual impairments.

Reader Mode now includes additional controls and read-aloud narration

The HTML5 audio and video player now supports playing files at different speeds (0.5x, Normal, 1.25x, 1.5x, 2x) and looping them indefinitely. Video playback performance has also been improved for users whose systems support SSSE3 instructions but lack hardware acceleration.

Firefox Hello, the video-call and chat communication system, has been removed due to low usage. Mozilla will nevertheless continue to develop and improve WebRTC.

End of support for OS X 10.6, 10.7, and 10.8, and for Windows systems with processors that lack SSE2 support.

For developers
  • Added a Cause column to the Network Monitor, showing what triggered each network request.
  • Added the Web Speech synthesis API.
For Android
  • Added an offline page viewing mode, so you can view some pages even when you have no Internet access.
  • Added a tour of fundamental features such as Reader Mode and Sync to the first-run page.
  • Added the Spanish (Chile) (es-CL) and Norwegian Nynorsk (nn-NO) localizations.
  • The look and behavior of tabs has been updated:
    • Old tabs are now hidden when the restore tabs option is set to “Always restore”.
    • The scroll position and zoom level of open tabs are now remembered.
    • Media controls have been updated to avoid sound playing from multiple tabs at the same time.
    • Visual improvements when displaying favicons.
Other changes
  • Improvements to the about:memory page for reporting the memory used by fonts.
  • Re-enabled Graphite2 font shaping by default.
  • Improved performance on Windows and OS X systems without hardware acceleration.
  • Several security fixes.

If you would rather see the full list of changes, you can head over to the release notes (in English).

You can get this version from our Downloads section, in Spanish and English, for Android, Linux, Mac, and Windows. If you liked it, please share this news with your friends on social media. Feel free to leave us a comment.

Categories: Mozilla-nl planet

Gervase Markham: Security Updates Not Needed

Mozilla planet - ti, 18/10/2016 - 23:55

As Brian Krebs is discovering, a large number of internet-connected devices with bad security can really ruin your day. Therefore, a lot of energy is being spent thinking about how to solve the security problems of the Internet of Things. Most of it is focussed on how we can make sure that these devices get regular security updates, and how to align the incentives to achieve that. And it’s difficult, because cheap IoT devices are cheap, and manufacturers make more money building the next thing than fixing the previous one.

Perhaps, instead of trying to make water flow uphill, we should be taking a different approach. How can we design these devices such that they don’t need any security updates for their lifetime?

One option would be to make them perfect first time. Yeah, right.

Another option would be the one from my blog post, An IoT Vision. In that post, I outlined a world where IoT devices’ access to the Internet is always mediated through a hub. This has several advantages, including the ability to inspect all traffic and the ability to write open source drivers to control the hardware. But one additional outworking of this design decision is that the devices are not Internet-addressable, and cannot send packets directly to the Internet on their own account. If that’s so, it’s much harder to compromise them and much harder to do anything evil with them if you do. At least, evil things affecting the rest of the net. And if that’s not sufficient, the hub itself can be patched to forbid patterns of access necessary for attacks.
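That design can be reduced to a simple forwarding policy: the hub consults a per-device allowlist and drops everything else. The sketch below only illustrates the idea (device names and hosts are invented, not from the original post):

```python
# Per-device allowlist: the hub forwards a packet only if this device is
# permitted to reach that destination. Everything else is dropped, so a
# compromised device cannot send arbitrary traffic to the Internet.
POLICY = {
    "thermostat-01": {"weather.example.com"},
    "camera-02": {"storage.example.com"},
}

def hub_forwards(device_id, dest_host):
    """Return True if the hub should forward this device's packet."""
    return dest_host in POLICY.get(device_id, set())
```

Patching the hub to "forbid patterns of access necessary for attacks" then amounts to changing POLICY in one place, rather than updating every device.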

Can we fix IoT security not by making devices secure, but by hiding them from attacks?

Categories: Mozilla-nl planet

Gervase Markham: WoSign and StartCom

Mozilla planet - ti, 18/10/2016 - 23:37

One of my roles at Mozilla is that I’m part of the Root Program team, which manages the list of trusted Certificate Authorities (CAs) in Firefox and Thunderbird. And, because we run our program in an open and transparent manner, other entities often adopt our trusted list.

In that connection, I’ve recently been the lead investigator into the activities of a Certificate Authority (CA) called WoSign, and a connected CA called StartCom, who have been acting in ways contrary to those expected of a trusted CA. The whole experience has been really interesting, but I’ve not seen a good moment to blog about it. Now that a decision has been taken on how to move forward, it seems like a good time.

The story started in late August, when Google notified Mozilla about some issues with how WoSign was conducting its operations, including various forms of what seemed to be certificate misissuance. We wrote up the three most serious of those for public discussion. WoSign issued a response to that document.

Further issues were pointed out in discussion, and via the private investigations of various people. That led to a longer, curated issues list and much more public discussion. WoSign, in turn, produced a more comprehensive response document, and later a “final statement”.

One or two of the issues on the list turned out to be not their fault, a few more were minor, but several were major – and their attempts to explain them often only led to more issues, or to a clearer understanding of quite how wrong things had gone. On at least one particular issue, the question of whether they were deliberately back-dating certificates using an obsolete cryptographic algorithm (called “SHA-1”) to get around browser blocks on it, we were pretty sure that WoSign was lying.

Around that time, we privately discovered a couple of certificates which had been mis-issued by the CA StartCom but with WoSign fingerprints all over the “style”. Up to this point, the focus had been on WoSign, and StartCom was only involved because WoSign had bought them and didn’t disclose it as they should have done. I started putting together the narrative. The result of those further investigations was a 13-page report which conclusively proved that WoSign had been intentionally back-dating certificates to avoid browser-based restrictions on SHA-1 cert issuance.

If you can write an enthralling page-turner about f**king certificate authorities doing scuzzy nerd sh*t, damn, I couldn't pull that off.

— SwiftOnSecurity (@SwiftOnSecurity) September 28, 2016

The report proposed a course of action including a year’s dis-trust for both CAs. At that point, Qihoo 360 (the Chinese megacorporation which is the parent of WoSign and StartCom) requested a meeting with Mozilla, which was held in Mozilla’s London office, and attended by two representatives of Qihoo, and one each from StartCom and WoSign. At that meeting, WoSign’s CEO admitted to intentionally back-dating SHA-1 certificates, as our investigation had discovered. The representatives of Qihoo 360 wanted to know whether it would be possible to disentangle StartCom from WoSign and then treat it separately. Mozilla representatives gave advice on the route which might most likely achieve this, but said that any plan would be subject to public discussion.

WoSign then produced another updated report which included their admissions, and which outlined a plan to split StartCom out from under WoSign and change the management, which was then repeated by StartCom in their remediation plan. However, based on the public discussion, the Mozilla CA Certificates module owner Kathleen Wilson decided that it was appropriate to mostly treat StartCom and WoSign together, although StartCom has an opportunity for quicker restitution than WoSign.

And that’s where we are now :-) StartCom and WoSign will no longer be trusted in Mozilla’s root store for certs issued after 21st October (although it may take some time to implement that decision).

Categories: Mozilla-nl planet

Mozilla warns about end of support for SHA-1 certificates -

News collected via Google - ti, 18/10/2016 - 23:13

Mozilla warns about end of support for SHA-1 certificates
Starting in early 2017, Mozilla will stop supporting SSL certificates that use the SHA-1 algorithm, which may cause users to see an error message when visiting some websites. Research published last year shows ...

and more »
Categories: Mozilla-nl planet

Christian Heilmann: Decoded Chats – first edition live on the Decoded Blog

Mozilla planet - ti, 18/10/2016 - 19:02

Over the last few weeks I was busy recording interviews with different exciting people of the web. Now I am happy to announce that the first edition of Decoded Chats is live on the new Decoded Blog.

Decoded Chats - Chris interviewing Rob Conery

In this first edition, I’m interviewing Rob Conery about his “Imposter Handbook”. We cover the issues of teaching development, how to deal with a constantly changing work environment and how to tackle diversity and integration.

We’ve got eight more interviews ready and more lined up. Amongst the people I talked to are Sarah Drasner, Monica Dinculescu, Ada-Rose Edwards, Una Kravets and Chris Wilson. The format of Decoded Chats is pretty open: interviews ranging from 15 minutes to 50 minutes about current topics on the web, trends and ideas with the people who came up with them.

Some are recorded in a studio (when I am in Seattle), others are Skype calls and yet others are off-the-cuff recordings at conferences.

Do you know anyone you’d like me to interview? Drop me a line on Twitter @codepo8 and I’ll see what I can do :)

Categories: Mozilla-nl planet

Aki Sasaki: scriptworker 0.8.1 and 0.7.1

Mozilla planet - ti, 18/10/2016 - 18:47

Tl;dr: I just shipped scriptworker 0.8.1 (changelog) (github) (pypi) and scriptworker 0.7.1 (changelog) (github) (pypi)
These are patch releases, and are currently the only versions of scriptworker that work.

scriptworker 0.8.1

The json, embedded in the Azure XML, now contains a new property, hintId. Ideally this wouldn't have broken anything, but I was using that json dict as kwargs, rather than explicitly passing taskId and runId. This means that older versions of scriptworker no longer successfully poll for tasks.
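This failure mode is plain Python: splatting a decoded JSON dict as keyword arguments breaks the moment the producer adds a new field. A minimal sketch (the handler function is invented for illustration; only taskId, runId and hintId come from the post):

```python
import json

def claim_task(taskId, runId):
    """Stand-in for a handler that only knows about taskId and runId."""
    return (taskId, runId)

# Yesterday's message worked fine when splatted as kwargs...
old_msg = json.loads('{"taskId": "abc", "runId": 0}')
claim_task(**old_msg)  # OK

# ...but once the producer adds hintId, the same call raises TypeError,
# which is why older scriptworkers stopped polling successfully.
new_msg = json.loads('{"taskId": "abc", "runId": 0, "hintId": "xyz"}')
try:
    claim_task(**new_msg)
except TypeError:
    broke = True

# The robust approach (the shape of the 0.8.1 fix): pass the fields
# explicitly instead of splatting the whole dict.
result = claim_task(new_msg["taskId"], new_msg["runId"])
```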

This is now fixed in scriptworker 0.8.1.

scriptworker 0.7.1

Scriptworker 0.8.0 made some non-backwards-compatible changes to its config format, and there may be more such changes in the near future. To simplify things for other people working on scriptworker, I suggested they stay on 0.7.0 for the time being if they wanted to avoid the churn.

To allow for this, I created a 0.7.x branch and released 0.7.1 off of it. Currently, 0.8.1 and 0.7.1 are the only two versions of scriptworker that will successfully poll Azure for tasks.

Categories: Mozilla-nl planet

Mike Ratcliffe: Running ESLint in Atom for Mozilla Development

Mozilla planet - ti, 18/10/2016 - 17:54

Due to some recent changes in the way that we use ESLint to check our coding style, linting Mozilla source code in Atom has been broken for a month or two.

I have recently spent some time working on Atom's linter-eslint plugin making it possible to bring all of that linting goodness back to life!

From the root of the project type:

./mach eslint --setup

Install the linter-eslint package v.8.00 or above. Then go to the package settings and enable the following options:

Eslint Settings

Once done, you should see errors and warnings as shown in the screenshot below:

Eslint in the Atom Editor

Categories: Mozilla-nl planet

Air Mozilla: MozFest 2016 Brown Bag

Mozilla planet - ti, 18/10/2016 - 17:00

MozFest 2016 Brown Bag - October 18th, 2016 - 16:00 London

Categories: Mozilla-nl planet

Mozilla Security Blog: Phasing Out SHA-1 on the Public Web

Mozilla planet - ti, 18/10/2016 - 16:40

An algorithm we’ve depended on for most of the life of the Internet — SHA-1 — is aging, due to both mathematical and technological advances. Digital signatures incorporating the SHA-1 algorithm may soon be forgeable by sufficiently-motivated and resourceful entities.

Via our and others’ work in the CA/Browser Forum, following our deprecation plan announced last year and per recommendations by NIST, issuance of SHA-1 certificates mostly halted for the web last January, with new certificates moving to more secure algorithms. Since May 2016, the use of SHA-1 on the web fell from 3.5% to 0.8% as measured by Firefox Telemetry.

In early 2017, Firefox will show an overridable “Untrusted Connection” error whenever a SHA-1 certificate is encountered that chains up to a root certificate included in Mozilla’s CA Certificate Program. SHA-1 certificates that chain up to a manually-imported root certificate, as specified by the user, will continue to be supported by default; this will continue allowing certain enterprise root use cases, though we strongly encourage everyone to migrate away from SHA-1 as quickly as possible.

This policy has been included as an option in Firefox 51, and we plan to gradually ramp up its usage.  Firefox 51 is currently in Developer Edition, and is currently scheduled for release in January 2017. We intend to enable this deprecation of SHA-1 SSL certificates for a subset of Beta users during the beta phase for 51 (beginning November 7) to evaluate the impact of the policy on real-world usage. As we gain confidence, we’ll increase the number of participating Beta users. Once Firefox 51 is released in January, we plan to proceed the same way, starting with a subset of users and eventually disabling support for SHA-1 certificates from publicly-trusted certificate authorities in early 2017.
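Enabling a policy for a growing subset of users is commonly done with deterministic bucketing: hash a stable client identifier into a bucket and enable the feature below a cutoff. The sketch below illustrates the general technique only; it is not Mozilla's actual rollout mechanism:

```python
import hashlib

def in_rollout(client_id: str, percent: float) -> bool:
    """Deterministically place a client in the first `percent` of buckets.

    Hashing a stable ID gives a uniform-ish value in [0, 1]; the same
    client always lands in the same bucket, so the sample is stable as
    the cutoff is ramped up from a small percentage to 100%.
    """
    digest = hashlib.sha256(client_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < percent / 100.0
```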

Questions about SHA-1 based certificates should be directed to the forum.

Categories: Mozilla-nl planet

Mozilla: half of internet pages sent via HTTPS - Telecompaper

News collected via Google - ti, 18/10/2016 - 14:33

Mozilla: half of internet pages sent via HTTPS
Globally, about half of all requested internet pages are sent from sender to receiver over the secure HTTPS protocol. Mozilla states this based on data collected via its own Firefox browser. The figures come from ...

Categories: Mozilla-nl planet

Christian Heilmann: crossfit.js

Mozilla planet - ti, 18/10/2016 - 13:43

Also on Medium, in case you want to comment.

Rey Bango telling you to do it

When I first heard about Crossfit, I thought it to be an excellent idea. I still do, to be fair:

  • Short, very focused and intense workouts instead of time consuming exercise schedules
  • No need for expensive and complex equipment; it is basically running and lifting heavy things
  • A lot of the workouts use your own body weight instead of extra equipment
  • A strong focus on good nutrition. Remove the stuff that is fattening and concentrate on what’s good for you

In essence, it sounded like the counterpoint to overly complex and expensive workouts we did before. You didn’t need expensive equipment. Some bars, ropes and tyres will do. There was also no need for a personal trainer, tailor-made outfits and queuing up for machines to be ready for you at the gym.

Fast forward a few years and you’ll see that we made Crossfit almost a running joke. You have overly loud Crossfit bros crashing weights in the gym, grunting and shouting and telling each other to “feel the burn” and “when you haven’t thrown up you haven’t worked out hard enough”. You have all kind of products branded Crossfit and even special food to aid your Crossfit workouts.

Thanks, commercialism and marketing. You made something simple and easy annoying and elitist again. There was no need for that.

One thing about Crossfit is that it can be dangerous. Without good supervision by friends it is pretty easy to seriously injure yourself. It is about moderation, not about competition.

I feel the same thing happened to JavaScript and it annoys me. JavaScript used to be an add-on to what we did on the web. It gave extra functionality and made it easier for our end users to finish the tasks they came for. It was a language to learn, not a lifestyle to subscribe to.

Nowadays JavaScript is everything. Client side use is only a small part of it. We use it to power servers, run tasks, define build processes and create fat client software. And everybody has an opinionated way to use it and is quick to tell others off for “not being professional” if they don’t subscribe to it. The brogrammer way of life rears its ugly head.

Let’s think of JavaScript like Crossfit was meant to be. Lean, healthy exercise going back to what’s good for you:

  • Use your body weight – on the client, if something can be done with HTML, let’s do it with HTML. When we create HTML with JavaScript, let’s create what makes sense, not lots of DIVs.
  • Do the heavy lifting – JavaScript is great to make complex tasks easier. Use it to create simpler interfaces with fewer reloads. Change user input that was valid but not in the right format. Use task runners to automate annoying work. However, if you realise that the task is a nice to have and not a need, remove it instead. Use worker threads to do heavy computation without clobbering the main UI.
  • Watch what you consume – keep dependencies to a minimum and make sure that what you depend on is reliable, safe to use and update-able.
  • Run a lot – performance is the most important part. Keep your solutions fast and lean.
  • Stick to simple equipment – it is not about how many “professional tools” we use. It is about keeping it easy for people to start working out.
  • Watch your calories – we have a tendency to use too much on the web. Libraries, polyfills, frameworks. Many of these make our lives easier but weigh heavy on our end users. It’s important to understand that our end users don’t have our equipment. Test your products on a cheap Android on a flaky connection, remove what isn’t needed and make it better for everyone.
  • Eat good things – browsers are evergreen and upgrade continuously these days. There are a lot of great features to use to make your products better. Visit “Can I use” early and often and play with new things that replace old cruft.
  • Don’t be a code bro – nobody is impressed with louts that constantly tell people off for not being as fit as they are. Be a code health advocate and help people get into shape instead.

JavaScript is much bigger these days than a language to learn in a day. That doesn’t mean, however, that every new developer needs to know the whole stack to be a useful contributor. Let’s keep it simple and fun.

Categories: Mozilla-nl planet

Mozilla announces launch of $250000 Equal Rating Innovation Challenge - PC Tech Magazine

News collected via Google - ti, 18/10/2016 - 09:29

Mozilla announces launch of $250000 Equal Rating Innovation Challenge
PC Tech Magazine
Mozilla announces the launch of its global Equal Rating Innovation Challenge, a competition which invites contributions on ways to provide unfettered access to the open Internet for anyone across the globe. As part of its initiative, Mozilla is asking ...

and more »
Categories: Mozilla-nl planet

Mozilla to award R3.5m in prize money for ideas to connect 4bn people online -

News collected via Google - ti, 18/10/2016 - 09:27

Mozilla to award R3.5m in prize money for ideas to connect 4bn people online
Mozilla's Equal Rating Innovation Challenge is hoping to answer one question: how do you connect four billion people to the internet? Okay, perhaps that might not be the easiest question to answer but it is an important one, which even Mozilla concedes ...
Mozilla announces launch of $250000 Equal Rating Innovation ChallengePC Tech Magazine

all 2 news articles »
Categories: Mozilla-nl planet

QMO: Firefox 50 Beta 7 Testday Results

Mozilla planet - ti, 18/10/2016 - 09:00

Hello Mozillians!

As you may already know, last Friday – October 14th – we held a new Testday event for Firefox 50 Beta 7.

Thank you all for helping us make Mozilla a better place – Onek Jude, Sadamu Samuel, Moin Shaikh, Suramya, ss22ever22 and Ilse Macías.

A big thank you goes out to all our active moderators too!


  • there were 3 verified bugs.
  • all the tests performed on the New Awesome Bar and on Flash 23 were marked as PASS.

Keep an eye on QMO for upcoming events!

Categories: Mozilla-nl planet

Nicholas Nethercote: How to speed up the Rust compiler

Mozilla planet - ti, 18/10/2016 - 06:06

Rust is a great language, and Mozilla plans to use it extensively in Firefox. However, the Rust compiler (rustc) is quite slow and compile times are a pain point for many Rust users. Recently I’ve been working on improving that. This post covers how I’ve done this, and should be of interest to anybody else who wants to help speed up the Rust compiler. Although I’ve done all this work on Linux it should be mostly applicable to other platforms as well.

Getting the code

The first step is to get the rustc code. First, I fork the main Rust repository on GitHub. Then I make two local clones: a base clone that I won’t modify, which serves as a stable comparison point (rust0), and a second clone where I make my modifications (rust1). I use commands something like this:

user=nnethercote

for r in rust0 rust1 ; do
    cd ~/moz
    git clone https://github.com/$user/rust $r
    cd $r
    git remote add upstream https://github.com/rust-lang/rust
    git remote set-url origin https://github.com/$user/rust
done

Building the Rust compiler

Within the two repositories, I first configure:

./configure --enable-optimize --enable-debuginfo

I configure with optimizations enabled because that matches release versions of rustc. And I configure with debug info enabled so that I get good information from profilers.

Then I build:

RUSTFLAGS='' make -j8

[Update: I previously had -Ccodegen-units=8 in RUSTFLAGS because it speeds up compile times. But Lars Bergstrom informed me that it can slow down the resulting program significantly. I measured and he was right — the resulting rustc was about 5–10% slower. So I’ve stopped using it now.]

That does a full build, which does the following:

  • Downloads a stage0 compiler, which will be used to build the stage1 local compiler.
  • Builds LLVM, which will become part of the local compilers.
  • Builds the stage1 compiler with the stage0 compiler.
  • Builds the stage2 compiler with the stage1 compiler.

It can be mind-bending to grok all the stages, especially with regards to how libraries work. (One notable example: the stage1 compiler uses the system allocator, but the stage2 compiler uses jemalloc.) I’ve found that the stage1 and stage2 compilers have similar performance. Therefore, I mostly measure the stage1 compiler because it’s much faster to just build the stage1 compiler, which I do with the following command.

RUSTFLAGS='-Ccodegen-units=8' make -j8 rustc-stage1

Building the compiler takes a while, which isn’t surprising. What is more surprising is that rebuilding the compiler after a small change also takes a while. That’s because a lot of code gets recompiled after any change. There are two reasons for this.

  • Rust’s unit of compilation is the crate. Each crate can consist of multiple files. If you modify a crate, the whole crate must be rebuilt. This isn’t surprising.
  • rustc’s dependency checking is very coarse. If you modify a crate, every other crate that depends on it will also be rebuilt, no matter how trivial the modification. This surprised me greatly. For example, any modification to the parser (which is in a crate called libsyntax) causes multiple other crates to be recompiled, a process which takes 6 minutes on my fast desktop machine. Almost any change to the compiler will result in a rebuild that takes at least 2 or 3 minutes.
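That coarse, crate-granularity invalidation can be modelled in a few lines: changing a crate marks it and every transitive reverse-dependency dirty, no matter how small the edit. The dependency edges below are illustrative only:

```python
# Map each crate to the crates that depend on it (reverse dependencies).
# Illustrative edges: librustc and others depend on libsyntax, so any
# parser change ripples through all of them.
reverse_deps = {
    "libsyntax": ["librustc", "librustc_borrowck"],
    "librustc": ["librustc_trans"],
    "librustc_borrowck": [],
    "librustc_trans": [],
}

def crates_to_rebuild(changed):
    """Crate-granularity invalidation: the changed crate plus every
    transitive reverse-dependency must be rebuilt."""
    dirty, stack = set(), [changed]
    while stack:
        crate = stack.pop()
        if crate not in dirty:
            dirty.add(crate)
            stack.extend(reverse_deps.get(crate, []))
    return dirty
```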

Incremental compilation should greatly improve the dependency situation, but it’s still in an experimental state and I haven’t tried it yet.

To run all the tests I do this (after a full build):

ulimit -c 0 && make check

The checking aborts if you don’t do the ulimit, because the tests produce lots of core files and it doesn’t want to swamp your disk.

The build system is complex, with lots of options. This command gives a nice overview of some common invocations:

make tips

Basic profiling

The next step is to do some basic profiling. I like to be careful about which rustc I am invoking at any time, especially if there’s a system-wide version installed, so I avoid relying on PATH and instead define some environment variables like this:

export RUSTC01="$HOME/moz/rust0/x86_64-unknown-linux-gnu/stage1/bin/rustc"
export RUSTC02="$HOME/moz/rust0/x86_64-unknown-linux-gnu/stage2/bin/rustc"
export RUSTC11="$HOME/moz/rust1/x86_64-unknown-linux-gnu/stage1/bin/rustc"
export RUSTC12="$HOME/moz/rust1/x86_64-unknown-linux-gnu/stage2/bin/rustc"

In the examples that follow I will use $RUSTC01 as the version of rustc that I invoke.

rustc has the ability to produce some basic stats about the time and memory used by each compiler pass. It is enabled with the -Ztime-passes flag. If you are invoking rustc directly you’d do it like this:

$RUSTC01 -Ztime-passes

If you are building with Cargo you can instead do this:

RUSTC=$RUSTC01 cargo rustc -- -Ztime-passes

The RUSTC= part tells Cargo you want to use a non-default rustc, and the part after the -- is flags that will be passed to rustc when it builds the final crate. (A bit weird, but useful.)

Here is some sample output from -Ztime-passes:

time: 0.056; rss: 49MB	parsing
time: 0.000; rss: 49MB	recursion limit
time: 0.000; rss: 49MB	crate injection
time: 0.000; rss: 49MB	plugin loading
time: 0.000; rss: 49MB	plugin registration
time: 0.103; rss: 87MB	expansion
time: 0.000; rss: 87MB	maybe building test harness
time: 0.002; rss: 87MB	maybe creating a macro crate
time: 0.000; rss: 87MB	checking for inline asm in case the target doesn't support it
time: 0.005; rss: 87MB	complete gated feature checking
time: 0.008; rss: 87MB	early lint checks
time: 0.003; rss: 87MB	AST validation
time: 0.026; rss: 90MB	name resolution
time: 0.019; rss: 103MB	lowering ast -> hir
time: 0.004; rss: 105MB	indexing hir
time: 0.003; rss: 105MB	attribute checking
time: 0.003; rss: 105MB	language item collection
time: 0.004; rss: 105MB	lifetime resolution
time: 0.000; rss: 105MB	looking for entry point
time: 0.000; rss: 105MB	looking for plugin registrar
time: 0.015; rss: 109MB	region resolution
time: 0.002; rss: 109MB	loop checking
time: 0.002; rss: 109MB	static item recursion checking
time: 0.060; rss: 109MB	compute_incremental_hashes_map
time: 0.000; rss: 109MB	load_dep_graph
time: 0.021; rss: 109MB	type collecting
time: 0.000; rss: 109MB	variance inference
time: 0.038; rss: 113MB	coherence checking
time: 0.126; rss: 114MB	wf checking
time: 0.219; rss: 118MB	item-types checking
time: 1.158; rss: 125MB	item-bodies checking
time: 0.000; rss: 125MB	drop-impl checking
time: 0.092; rss: 127MB	const checking
time: 0.015; rss: 127MB	privacy checking
time: 0.002; rss: 127MB	stability index
time: 0.011; rss: 127MB	intrinsic checking
time: 0.007; rss: 127MB	effect checking
time: 0.027; rss: 127MB	match checking
time: 0.014; rss: 127MB	liveness checking
time: 0.082; rss: 127MB	rvalue checking
time: 0.145; rss: 161MB	MIR dump
  time: 0.015; rss: 161MB	SimplifyCfg
  time: 0.033; rss: 161MB	QualifyAndPromoteConstants
  time: 0.034; rss: 161MB	TypeckMir
  time: 0.001; rss: 161MB	SimplifyBranches
  time: 0.006; rss: 161MB	SimplifyCfg
time: 0.089; rss: 161MB	MIR passes
time: 0.202; rss: 161MB	borrow checking
time: 0.005; rss: 161MB	reachability checking
time: 0.012; rss: 161MB	death checking
time: 0.014; rss: 162MB	stability checking
time: 0.000; rss: 162MB	unused lib feature checking
time: 0.101; rss: 162MB	lint checking
time: 0.000; rss: 162MB	resolving dependency formats
  time: 0.001; rss: 162MB	NoLandingPads
  time: 0.007; rss: 162MB	SimplifyCfg
  time: 0.017; rss: 162MB	EraseRegions
  time: 0.004; rss: 162MB	AddCallGuards
  time: 0.126; rss: 164MB	ElaborateDrops
  time: 0.001; rss: 164MB	NoLandingPads
  time: 0.012; rss: 164MB	SimplifyCfg
  time: 0.008; rss: 164MB	InstCombine
  time: 0.003; rss: 164MB	Deaggregator
  time: 0.001; rss: 164MB	CopyPropagation
  time: 0.003; rss: 164MB	AddCallGuards
  time: 0.001; rss: 164MB	PreTrans
time: 0.182; rss: 164MB	Prepare MIR codegen passes
time: 0.081; rss: 167MB	write metadata
time: 0.590; rss: 177MB	translation item collection
time: 0.034; rss: 180MB	codegen unit partitioning
time: 0.032; rss: 300MB	internalize symbols
time: 3.491; rss: 300MB	translation
time: 0.000; rss: 300MB	assert dep graph
time: 0.000; rss: 300MB	serialize dep graph
  time: 0.216; rss: 292MB	llvm function passes [0]
  time: 0.103; rss: 292MB	llvm module passes [0]
  time: 4.497; rss: 308MB	codegen passes [0]
  time: 0.004; rss: 308MB	codegen passes [0]
time: 5.185; rss: 308MB	LLVM passes
time: 0.000; rss: 308MB	serialize work products
time: 0.257; rss: 297MB	linking

As far as I can tell, the indented passes are sub-passes, and the parent pass is the first non-indented pass afterwards.
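The output is regular enough to post-process mechanically; a small helper script (an illustration, not part of rustc or the original post) can pull out the slowest passes:

```python
import re

def top_passes(output, n=3):
    """Parse -Ztime-passes lines of the form
    'time: 1.158; rss: 125MB<tab>item-bodies checking'
    and return the n slowest passes as (seconds, name) pairs."""
    passes = []
    for line in output.splitlines():
        m = re.match(r"\s*time: ([\d.]+); rss: \d+MB\s+(.*)", line)
        if m:
            passes.append((float(m.group(1)), m.group(2).strip()))
    return sorted(passes, reverse=True)[:n]

# Abbreviated sample taken from the output above.
sample = """\
time: 0.056; rss: 49MB\tparsing
time: 1.158; rss: 125MB\titem-bodies checking
time: 5.185; rss: 308MB\tLLVM passes
time: 3.491; rss: 300MB\ttranslation
"""
```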

More serious profiling

The -Ztime-passes flag gives a good overview, but you really need a profiling tool that gives finer-grained information to get far. I’ve done most of my profiling with two Valgrind tools, Cachegrind and DHAT. I invoke Cachegrind like this:

valgrind \
    --tool=cachegrind --cache-sim=no --branch-sim=yes \
    --cachegrind-out-file=$OUTFILE $RUSTC01 ...

where $OUTFILE specifies an output filename. I find the instruction counts measured by Cachegrind to be highly useful; the branch simulation results are occasionally useful, and the cache simulation results are almost never useful.

The Cachegrind output looks like this:

--------------------------------------------------------------------------------
            Ir
--------------------------------------------------------------------------------
22,153,170,953  PROGRAM TOTALS
--------------------------------------------------------------------------------
            Ir  file:function
--------------------------------------------------------------------------------
   923,519,467  /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:_int_malloc
   879,700,120  /home/njn/moz/rust0/src/rt/miniz.c:tdefl_compress
   629,196,933  /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:_int_free
   394,687,991  ???:???
   379,869,259  /home/njn/moz/rust0/src/libserialize/
   376,921,973  /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:malloc
   263,083,755  /build/glibc-GKVZIf/glibc-2.23/string/::/sysdeps/x86_64/multiarch/memcpy-avx-unaligned.S:__memcpy_avx_unaligned
   257,219,281  /home/njn/moz/rust0/src/libserialize/<serialize::opaque::Decoder<'a> as serialize::serialize::Decoder>::read_usize
   217,838,379  /build/glibc-GKVZIf/glibc-2.23/malloc/malloc.c:free
   217,006,132  /home/njn/moz/rust0/src/librustc_back/
   211,098,567  ???:llvm::SelectionDAG::Combine(llvm::CombineLevel, llvm::AAResults&, llvm::CodeGenOpt::Level)
   185,630,213  /home/njn/moz/rust0/src/libcore/hash/<rustc_incremental::calculate_svh::hasher::IchHasher as core::hash::Hasher>::write
   171,360,754  /home/njn/moz/rust0/src/librustc_data_structures/<rustc::ty::subst::Substs<'tcx> as core::hash::Hash>::hash
   150,026,054  ???:llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int)

Here “Ir” is short for “I-cache reads”, which corresponds to the number of instructions executed. Cachegrind also gives line-by-line annotations of the source code.

The Cachegrind results indicate that malloc and free are usually the two hottest functions in the compiler. So I also use DHAT, which is a malloc profiler that tells you exactly where all your malloc calls are coming from.  I invoke DHAT like this:

    /home/njn/grind/ws3/vg-in-place \
        --tool=exp-dhat --show-top-n=1000 --num-callers=4 \
        --sort-by=tot-blocks-allocd \
        $RUSTC01 ... 2> $OUTFILE

I sometimes also use --sort-by=tot-bytes-allocd. DHAT’s output looks like this:

    ==16425== -------------------- 1 of 1000 --------------------
    ==16425== max-live:    30,240 in 378 blocks
    ==16425== tot-alloc:   20,866,160 in 260,827 blocks (avg size 80.00)
    ==16425== deaths:      260,827, at avg age 113,438 (0.00% of prog lifetime)
    ==16425== acc-ratios:  0.74 rd, 1.00 wr  (15,498,021 b-read, 20,866,160 b-written)
    ==16425==    at 0x4C2BFA6: malloc (vg_replace_malloc.c:299)
    ==16425==    by 0x5AD392B: <syntax::ptr::P<T> as serialize::serialize::Decodable>::decode (
    ==16425==    by 0x5AD4456: <core::iter::Map<I, F> as core::iter::iterator::Iterator>::next (
    ==16425==    by 0x5AE2A52: rustc_metadata::decoder::<impl rustc_metadata::cstore::CrateMetadata>::get_attributes (
    ==16425==
    ==16425== -------------------- 2 of 1000 --------------------
    ==16425== max-live:    1,360 in 17 blocks
    ==16425== tot-alloc:   10,378,160 in 129,727 blocks (avg size 80.00)
    ==16425== deaths:      129,727, at avg age 11,622 (0.00% of prog lifetime)
    ==16425== acc-ratios:  0.47 rd, 0.92 wr  (4,929,626 b-read, 9,599,798 b-written)
    ==16425==    at 0x4C2BFA6: malloc (vg_replace_malloc.c:299)
    ==16425==    by 0x881136A: <syntax::ptr::P<T> as core::clone::Clone>::clone (
    ==16425==    by 0x88233A7: syntax::ext::tt::macro_parser::parse (
    ==16425==    by 0x8812E66: syntax::tokenstream::TokenTree::parse (

The “deaths” values here indicate the total number of calls to malloc for each call stack, which is usually the metric of most interest. The “acc-ratios” values can also be interesting, especially if the “rd” value is 0.00, because that indicates the allocated blocks are never read. (See below for examples of problems that I found this way.)

For both profilers I also pipe $OUTFILE through eddyb’s script which demangles ugly Rust symbols like this:


to something much nicer, like this:

<serialize::opaque::Decoder<'a> as serialize::serialize::Decoder>::read_usize

For programs that use Cargo, sometimes it’s useful to know the exact rustc invocations that Cargo uses. Find out with either of these commands:

    RUSTC=$RUSTC01 cargo build -v
    RUSTC=$RUSTC01 cargo rustc -v

I also have done a decent amount of ad hoc println profiling, where I insert println! calls in hot parts of the code and then I use a script to post-process them. This can be very useful when I want to know exactly how many times particular code paths are hit.
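As a rough illustration, this kind of ad hoc path counting can look something like the following self-contained sketch (the labels and the loop are invented for the example, not taken from rustc):

```rust
use std::collections::HashMap;

// Count how often labelled code paths are hit, then print the counts,
// hottest first. This mimics the println!-and-post-process approach.
fn count_paths() -> HashMap<&'static str, u64> {
    let mut hits: HashMap<&'static str, u64> = HashMap::new();
    for i in 0..1000 {
        *hits.entry("loop_body").or_insert(0) += 1; // hot path
        if i % 3 == 0 {
            *hits.entry("divisible_by_3").or_insert(0) += 1; // cooler path
        }
    }
    hits
}

fn main() {
    let mut counts: Vec<_> = count_paths().into_iter().collect();
    counts.sort_by(|a, b| b.1.cmp(&a.1)); // hottest first
    for (label, n) in &counts {
        println!("{}: {}", label, n);
    }
}
```

In a real session the counting happens via println! calls scattered through the compiler and a post-processing script, but the end result is the same: a sorted tally of how often each path was hit.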

I’ve also tried perf. It works, but I’ve never established much of a rapport with it. YMMV. In general, any profiler that works with C or C++ code should also work with Rust code.

Finding suitable benchmarks

Once you know how you’re going to profile you need some good workloads. You could use the compiler itself, but it’s big and complicated and reasoning about the various stages can be confusing, so I have avoided that myself.

Instead, I have focused entirely on rustc-benchmarks, a pre-existing rustc benchmark suite. It contains 13 benchmarks of various sizes. It has been used to track rustc’s performance for some time, but it wasn’t easy to use locally until I wrote a script for that purpose. I invoke it something like this:

    ./ \
        /home/njn/moz/rust0/x86_64-unknown-linux-gnu/stage1/bin/rustc \
        /home/njn/moz/rust1/x86_64-unknown-linux-gnu/stage1/bin/rustc

It compares the two given compilers, doing debug builds, on the benchmarks. See the next section for example output. If you want to run a subset of the benchmarks you can specify them as additional arguments.

Each benchmark in rustc-benchmarks has a makefile with three targets. See the README for details on these targets, which can be helpful.


Here are the results when I compare the following two versions of rustc:

  • The commit just before my first commit (on September 12).
  • A commit from October 13.
    futures-rs-test  5.028s vs  4.433s --> 1.134x faster (variance: 1.020x, 1.030x)
    helloworld       0.283s vs  0.235s --> 1.202x faster (variance: 1.012x, 1.025x)
    html5ever-2016-  6.293s vs  5.652s --> 1.113x faster (variance: 1.011x, 1.008x)
    hyper.0.5.0      6.182s vs  5.039s --> 1.227x faster (variance: 1.002x, 1.018x)
    inflate-0.1.0    5.168s vs  4.935s --> 1.047x faster (variance: 1.001x, 1.002x)
    issue-32062-equ  0.457s vs  0.347s --> 1.316x faster (variance: 1.010x, 1.007x)
    issue-32278-big  2.046s vs  1.706s --> 1.199x faster (variance: 1.003x, 1.007x)
    jld-day15-parse  1.793s vs  1.538s --> 1.166x faster (variance: 1.059x, 1.020x)
    piston-image-0. 13.871s vs 11.885s --> 1.167x faster (variance: 1.005x, 1.005x)
    regex.0.1.30     2.937s vs  2.516s --> 1.167x faster (variance: 1.010x, 1.002x)
    rust-encoding-0  2.414s vs  2.078s --> 1.162x faster (variance: 1.006x, 1.005x)
    syntex-0.42.2   36.526s vs 32.373s --> 1.128x faster (variance: 1.003x, 1.004x)
    syntex-0.42.2-i 21.500s vs 17.916s --> 1.200x faster (variance: 1.007x, 1.013x)

Not all of the improvement is due to my changes, but I have managed a few nice wins, including the following.

#36592: There is an arena allocator called TypedArena. rustc creates many of these, mostly short-lived. On creation, each arena would allocate a 4096 byte chunk, in preparation for the first arena allocation request. But DHAT’s output showed me that the vast majority of arenas never received such a request! So I made TypedArena lazy — the first chunk is now only allocated when necessary. This reduced the number of calls to malloc greatly, which sped up compilation of several rustc-benchmarks by 2–6%.
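The idea can be sketched as follows. This is illustrative only: `LazyArena` and its methods are invented for the example and are far simpler than rustc’s real TypedArena.

```rust
// A sketch of the lazy-first-chunk idea: creating the arena allocates
// nothing; the 4096-byte chunk appears only on the first request.
struct LazyArena {
    chunk: Option<Vec<u8>>, // None until the first allocation request
    used: usize,
}

impl LazyArena {
    fn new() -> LazyArena {
        LazyArena { chunk: None, used: 0 } // no heap allocation here
    }

    fn has_chunk(&self) -> bool {
        self.chunk.is_some()
    }

    fn alloc(&mut self, n: usize) -> &mut [u8] {
        // The first request pays for the chunk; later ones reuse it.
        let chunk = self.chunk.get_or_insert_with(|| vec![0u8; 4096]);
        let start = self.used;
        self.used += n;
        &mut chunk[start..start + n]
    }
}

fn main() {
    let mut arena = LazyArena::new();
    assert!(!arena.has_chunk()); // short-lived, never-used arenas stay this cheap
    arena.alloc(16);
    assert!(arena.has_chunk()); // the chunk exists only after a request
    println!("chunk allocated lazily");
}
```

An arena that is created and dropped without ever receiving a request now costs nothing beyond its small stack footprint.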

#36734: This one was similar. Rust’s HashMap implementation is lazy — it doesn’t allocate any memory for elements until the first one is inserted. This is a good thing because it’s surprisingly common in large programs to create HashMaps that are never used. However, Rust’s HashSet implementation (which is just a layer on top of the HashMap) didn’t have this property, and guess what? rustc also creates large numbers of HashSets that are never used. (Again, DHAT’s output made this obvious.) So I fixed that, which sped up compilation of several rustc-benchmarks by 1–4%. Even better, because this change is to Rust’s stdlib, rather than rustc itself, it will speed up any program that creates HashSets without using them.
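The lazy behavior is observable directly; this small check assumes a toolchain that includes the #36734 change (any reasonably modern stdlib):

```rust
use std::collections::HashSet;

fn main() {
    // Creating the set does not allocate the element table...
    let mut set: HashSet<u32> = HashSet::new();
    assert_eq!(set.capacity(), 0); // capacity 0 => no heap allocation yet

    // ...the table is only allocated on the first insert.
    set.insert(1);
    assert!(set.capacity() > 0);
    println!("HashSet allocated lazily");
}
```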

#36917: This one involved avoiding some useless data structure manipulation when a particular table was empty. Again, DHAT pointed out a table that was created but never read, which was the clue I needed to identify this improvement. This sped up two benchmarks by 16% and a couple of others by 3–5%.

#37064: This one changed a hot function in serialization code to return a Cow<str> instead of a String, which avoided a lot of allocations.
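The pattern looks like this; `strip_carriage_returns` is a hypothetical function invented for illustration (the real change was in rustc’s serialization code), but the Cow<str> shape is the same:

```rust
use std::borrow::Cow;

// Return borrowed data on the common path and allocate a String only
// when the input actually needs rewriting.
fn strip_carriage_returns(s: &str) -> Cow<str> {
    if s.contains('\r') {
        Cow::Owned(s.replace('\r', "")) // rare path: allocates
    } else {
        Cow::Borrowed(s) // common path: no allocation
    }
}

fn main() {
    // The common case hands back a borrow of the input.
    assert!(matches!(strip_carriage_returns("clean"), Cow::Borrowed(_)));
    // Only inputs that need rewriting pay for an allocation.
    assert_eq!(strip_carriage_returns("a\rb"), "ab");
    println!("no allocation on the common path");
}
```

If the hot function almost always returns its input unchanged, switching the return type from String to Cow<str> removes an allocation from almost every call.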

Future work

Profiles indicate that the following parts of the compiler account for a lot of its runtime.

  • malloc and free are still the two hottest functions in most benchmarks. Avoiding heap allocations can be a win.
  • Compression is used for crate metadata and LLVM bitcode. (This shows up in profiles under a function called tdefl_compress.)  There is an issue open about this.
  • Hash table operations are hot. A lot of this comes from the interning of various values during type checking; see the CtxtInterners type for details.
  • Crate metadata decoding is also costly.
  • LLVM execution is a big chunk, especially when doing optimized builds. So far I have treated LLVM as a black box and haven’t tried to change it, at least partly because I don’t know how to build it with debug info, which is necessary to get source files and line numbers in profiles.
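To see why interning leans so heavily on hash tables, here is a minimal string-interner sketch. It is illustrative only: rustc’s CtxtInterners interns types rather than strings and is far more elaborate.

```rust
use std::collections::HashMap;

// Map each distinct string to a small integer id. Every intern call
// does a hash lookup, which is why interning shows up hot in profiles.
struct Interner {
    map: HashMap<String, u32>,
    values: Vec<String>,
}

impl Interner {
    fn new() -> Interner {
        Interner { map: HashMap::new(), values: Vec::new() }
    }

    fn intern(&mut self, s: &str) -> u32 {
        if let Some(&id) = self.map.get(s) {
            return id; // already interned: just the hash lookup
        }
        let id = self.values.len() as u32;
        self.map.insert(s.to_string(), id);
        self.values.push(s.to_string());
        id
    }

    fn get(&self, id: u32) -> &str {
        &self.values[id as usize]
    }
}

fn main() {
    let mut interner = Interner::new();
    let a = interner.intern("i32");
    let b = interner.intern("i32"); // hits the hash table, no new entry
    assert_eq!(a, b);
    assert_eq!(interner.get(a), "i32");
    println!("interned ids are stable");
}
```

Once values are interned, equality checks become cheap integer comparisons, but the intern calls themselves are all hashing, which is what the profiles show.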

A lot of programs have broadly similar profiles, but occasionally you get an odd one that stresses a different part of the compiler. For example, in rustc-benchmarks, inflate-0.1.0 is dominated by operations involving the (delightfully named) ObligationsForest (see #36993), and html5ever-2016-08-25 is dominated by what I think is macro processing. So it’s worth profiling the compiler on new codebases.

Caveat lector

I’m still a newcomer to Rust development. Although I’ve had lots of help on the #rustc IRC channel — big thanks to eddyb and simulacrum in particular — there may be things I am doing wrong or sub-optimally. Nonetheless, I hope this is a useful starting point for newcomers who want to speed up the Rust compiler.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 152

Mozilla planet - ti, 18/10/2016 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Blog Posts

News & Project Updates

Other Weeklies from Rust Community

Crate of the Week

This week's Crate of the Week is xargo - for effortless cross compilation of Rust programs to custom bare-metal targets like ARM Cortex-M. It recently reached version 0.2.0 and you can read the announcement here.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

106 pull requests were merged in the last week.

New Contributors
  • Danny Hua
  • Fabian Frei
  • Mikko Rantanen
  • Nabeel Omer
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

FCP issues:

Other issues getting a lot of discussion:

No PRs this week.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week's friends of the forest are:

I'd like to nominate bluss for his work on scientific programming in Rust. ndarray is a monumental project but in addition to that he has worked (really) hard to share that knowledge among others and provided easy-to-use libraries like matrixmultiply. Without bluss' assistance rulinalg would be in a far worse state.

I'd like to nominate Yehuda Katz, the lord of package managers.

Submit your Friends-of-the-Forest nominations for next week!

Quote of the Week

<dRk> that gives a new array of errors, guess that's a good thing
<misdreavus> you passed one layer of tests, and hit the next layer :P
<misdreavus> rustc is like onions
<dRk> it makes you cry?

— From #rust-beginners.

Thanks to Quiet Misdreavus for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categorieën: Mozilla-nl planet

The Pirate Bay News & Update: Google Chrome And Mozilla Firefox Flagged ... - Parent Herald

Nieuws verzameld via Google - ti, 18/10/2016 - 05:31


The Pirate Bay News & Update: Google Chrome And Mozilla Firefox Flagged ...
Parent Herald
According to reports, The Pirate Bay is flagged by Google Chrome and Mozilla Firefox due to safety issues. Google Chrome and Mozilla Firefox tagged The Pirate Bay and other popular torrent providers as fraudulent sites. In addition, Chrome and Firefox ...
The Pirate Bay Shutdown Update: Flagged by Google Chrome! Can It Really ... - Gamenguide

all 7 news articles »
Categorieën: Mozilla-nl planet

Daniel Stenberg: curl up in Nuremberg!

Mozilla planet - mo, 17/10/2016 - 22:45

I’m very happy to announce that the curl project is about to run our first ever curl meeting and developers conference.

March 18-19, Nuremberg, Germany

Everyone interested in curl, libcurl and related matters is invited to participate. We only ask that you register and pay the small fee, which will be used for food and more at the event.

You’ll find the full and detailed description of the event and the specific location in the curl wiki.

The agenda for the weekend is purposely kept loose to allow for flexibility and unconference-style adding things and topics while there. You will thus have the chance to present what you like and affect what others present. Do tell us what you’d like to talk about or hear others talk about! The sign-up for the event isn’t open yet, as we first need to work out some more details.

We have a dedicated mailing list for discussing the meeting, called curl-meet, so please consider yourself invited to join in there as well!

Thanks a lot to SUSE for hosting!

Feel free to help us make a cool logo for the event!


(The 19th birthday of curl is suitably enough the day after, on March 20.)

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 17 Oct 2016

Mozilla planet - mo, 17/10/2016 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting

Categorieën: Mozilla-nl planet

Firefox Nightly: These Weeks in Firefox: Issue 3

Mozilla planet - mo, 17/10/2016 - 17:00

The Firefox Desktop team met yet again last Tuesday to share updates. Here are some fresh updates that we think you might find interesting:

Highlights

Contributor(s) of the Week

Project Updates

Context Graph

Electrolysis (e10s)

Platform UI

Privacy / Security

Search

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Categorieën: Mozilla-nl planet