Planet Mozilla

Updated: 1 week 6 days ago

Daniel Stenberg: curl 7.67.0

Wed, 06/11/2019 - 07:47

There have been 56 days since curl 7.66.0 was released. Here comes 7.67.0!

This might not be a release with any significant bells or whistles that will make us recall this date in the future when looking back, but it is still another steady step along the way and thanks to the new things introduced, we still bump the minor version number. Enjoy!

As always, download curl from

If you need excellent commercial support for whatever you do with curl, contact us at wolfSSL.


the 186th release
3 changes
56 days (total: 7,901)

125 bug fixes (total: 5,472)
212 commits (total: 24,931)
1 new public libcurl function (total: 81)
0 new curl_easy_setopt() options (total: 269)

1 new curl command line option (total: 226)
68 contributors, 42 new (total: 2,056)
42 authors, 26 new (total: 744)
0 security fixes (total: 92)
0 USD paid in Bug Bounties

The 3 changes

Disable progress meter

Since virtually forever you’ve been able to tell curl to “shut up” with -s. The long version of that is --silent. Silent makes the curl tool disable the progress meter and all other verbose output.

Starting now, you can use --no-progress-meter, which in a more granular way only disables the progress meter and lets the other verbose outputs remain.
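As a quick sketch of the difference, using a local file:// URL so the commands run without network access (the paths are illustrative):

```shell
# Create a small file for curl to fetch
printf 'hello from curl\n' > /tmp/source.txt

# --silent hides the progress meter and all other diagnostic output
curl --silent -o /tmp/copy-silent.txt file:///tmp/source.txt

# New in 7.67.0: hide only the progress meter; error messages and
# other verbose output still appear
curl --no-progress-meter -o /tmp/copy.txt file:///tmp/source.txt
```

On curl versions older than 7.67.0 the second command is rejected as an unknown option, so scripts that must run on older versions should keep using -s/--silent.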


Maximum concurrent streams

When doing HTTP/2 with curl and multiple streams over a single connection, you can now also set the number of parallel streams you’d like to use, which will be communicated to the server. The idea is that this option should eventually be usable for HTTP/3 as well, but since HTTP/3 support is still in its early days, it isn’t yet.


URLs without an authority

This is a new flag that the URL parser API supports. It tells the parser that even if it doesn’t recognize the URL scheme, it should still allow the URL to lack an authority part (the part that holds the host name).


Bug-fixes

Here are some interesting bug-fixes done for this release. Check out the changelog for the full list.

Winbuild build error

The winbuild setup for building with MSVC and nmake shipped in 7.66.0 with a flaw that made it fail: we had added the vssh directory but had not adjusted the build scripts for it. The fix was of course very simple.

We have since added several winbuild builds to the CI to make sure we catch these kinds of mistakes earlier and better in the future.

FTP: optimized CWD handling

At least two landed bug-fixes make curl avoid issuing superfluous CWD commands (FTP lingo for “cd” or change directory) thereby reducing latency.


HTTP/3 fixes

Several fixes improved HTTP/3 handling: it builds better on Windows, the ngtcp2 backend now also behaves correctly on macOS, and the build instructions are clearer.

Mimics socketpair on Windows

Thanks to the new socketpair look-alike function, libcurl now provides a socket for the application to wait on even when doing name resolution in the dedicated resolver thread. This brings the Windows code up to par with the similar change that landed in 7.66.0, and makes it easier for applications to behave correctly during the short time gaps when libcurl resolves a host name and nothing else is happening.

curl with lots of URLs

With the introduction of parallel transfers in 7.66.0, we changed how curl allocates handles and sets up transfers ahead of time. This made command lines that, for example, use [1-1000000] ranges create a million curl handles and thus use a lot of memory.

It did in fact break a few existing use cases where people did very large ranges with curl. Starting now, curl only creates enough curl handles ahead of time to allow the maximum amount of parallelism requested, and users should once again be able to specify ranges with many millions of iterations.
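For illustration, here is how a URL range plus parallelism looks on the command line; a minimal sketch using local file:// URLs so it runs offline (the paths are illustrative):

```shell
# Create three numbered source files
for i in 1 2 3; do printf 'part %s\n' "$i" > "/tmp/src$i.txt"; done

# [1-3] expands into one transfer per value, and "#1" in the output
# file name is replaced with the current range value. --parallel
# (added in 7.66.0) performs the transfers concurrently, with
# --parallel-max capping how many run at once.
curl -s --parallel --parallel-max 2 \
  -o "/tmp/out#1.txt" "file:///tmp/src[1-3].txt"
```

With a [1-1000000] range the same command now allocates handles on demand rather than a million up front.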

curl -d@ was slow

It was discovered that asking curl to post data with -d @filename was unnecessarily slow for large files; that operation has been sped up significantly.

DoH fixes

Several corrections were made after some initial fuzzing of the DoH code. A benign buffer overflow, a memory leak and more.

HTTP/2 fixes

We relaxed the :authority push promise checks, fixed two cases where libcurl could “forget” a stream after it had delivered all data and dup’ed HTTP/2 handles could issue dummy PRIORITY frames!


When libcurl’s connect attempt fails and errno says ETIMEDOUT, it means that the underlying TCP connect attempt timed out. This is now reflected back through the libcurl API with the timed-out error code (CURLE_OPERATION_TIMEDOUT) instead of the previously used CURLE_COULDNT_CONNECT.

One of the use cases for this is curl’s --retry option, which now considers this situation to be a timeout and thus fine to retry…
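A sketch of --retry behavior that runs without a network: pointing curl at a local port where nothing is listening produces a fast connection failure, and --retry-connrefused is added so that this failure, like a timeout, counts as retryable (port 9 is assumed to be unused):

```shell
# Retry the transfer up to 2 times before giving up.
# --retry treats timeouts as transient errors worth retrying;
# --retry-connrefused extends that to "connection refused" so this
# demo can fail quickly without any real server.
curl --retry 2 --retry-delay 1 --retry-connrefused \
  --connect-timeout 1 "http://127.0.0.1:9/" \
  || echo "gave up after retries (exit $?)"
```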

Parsing URL with fragment and question mark

There was a regression in the URL parser that made it mistreat URLs without a query part but with a question mark in the fragment.

Categories: Mozilla-nl planet

Marco Zehe: Nolan Lawson shares what he has learned about accessibility

Tue, 05/11/2019 - 16:24

Over the past year and a half, I have ventured time and again into the federated Mastodon social network. In those ventures, I have contributed bug reports both to the Mastodon client itself and to some alternative clients on the web, iOS, and Android.

One of those clients, a single-page progressive web app, is Pinafore by Nolan Lawson. He had set out to create a fast, light-weight, and accessible client from the ground up. When I started to use Pinafore, I immediately noticed that a lot of thought and effort had already gone into the client, and I could start using it right away.

I then started contributing bug reports, and over time Nolan has tremendously improved what was already very good: more keyboard support (so that even a screen reader user can use Pinafore without virtual buffers), various light and dark themes, support for reducing animations, and much, much more.

And now, Nolan has shared what he has learned about accessibility in the process. His post is an excellent recollection of some of the challenges when dealing with an SPA, cross-platform, taking into account screen readers, keyboard users, styling stuff etc., and how to overcome those obstacles. It is an excellent read which contains suggestions and food for thought for many web developers. Enjoy the read!


This Week In Rust: This Week in Rust 311

Tue, 05/11/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

#Rust2020

Find all #Rust2020 posts at Read Rust.

Crate of the Week

This week's crate is displaydoc, a procedural derive macro to implement Display by string-interpolating the doc comment.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

217 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Asia Pacific
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

I did manage to get this compile in the end - does anyone else find that the process of asking the question well on a public forum organizes their thoughts well enough to solve the problem?

David Mason on rust-users

Thanks to Daniel H-M for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


The Firefox Frontier: Tracking Diaries with Matt Navarra

Mon, 04/11/2019 - 18:40

In Tracking Diaries, we invited people from all walks of life to share how they spent a day online while using Firefox’s privacy protections to keep count of the trackers…

The post Tracking Diaries with Matt Navarra appeared first on The Firefox Frontier.


Mozilla Future Releases Blog: Restricting Notification Permission Prompts in Firefox

Mon, 04/11/2019 - 15:38

In April we announced our intent to reduce the number of annoying permission prompts for receiving desktop notifications that our users see on a daily basis. To that effect, we ran a series of studies and experiments around restricting these prompts.

Based on these studies, we will require user interaction on all notification permission prompts, starting in Firefox 72. That is, before a site can ask for notification permission, the user will have to perform a tap, click, or press a key.

In this blog post I will give a detailed overview of the study results and further outline our plans for the future.


As previously described, we designed a measurement using Firefox Telemetry that allows us to report details around when and how a user interacts with a permission prompt without revealing personal information. The full probe definition can be seen in our source code. It was enabled for a randomly chosen pool of study participants (0.1% of the user population) on Firefox Release, as well as for all users of Firefox Nightly. The Release study additionally differentiated between new users and existing users, to account for an inherent bias of existing users towards denying permission requests (because they usually already “have” the right permissions on sites relevant to them).

We further enabled requiring user interaction for notification permission prompts in Nightly and Beta.


Most of the heavy lifting here was done by Felix Lawrence, who performed a thorough analysis of the data we collected. You can read his full report for our Firefox Release study. I will highlight some of the key takeaways:

Notification prompts are very unpopular. On Release, about 99% of notification prompts go unaccepted, with 48% being actively denied by the user. This is even worse than what we’ve seen on Nightly, and it paints a dire picture of the user experience on the web. To add from related telemetry data, during a single month of the Firefox 63 Release, a total of 1.45 billion prompts were shown to users, of which only 23.66 million were accepted. That is, for each prompt that is accepted, about sixty are denied or ignored. In about 500 million cases during that month, users actually spent the time to click on “Not Now”.

Users are unlikely to accept a prompt that is shown more than once for the same site. We had previously given websites the ability to ask users for notification permission every time they visit a site in a new tab. The underlying assumption, that users would want several visits to make up their minds, turns out to be wrong. As Felix notes, around 85% of accepted prompts were accepted without the user ever having previously clicked “Not Now”.

Most notification prompts don’t follow user interaction. Especially on Release, the overall number of prompts that are already compatible with this intervention is very low.

Prompts that are shown as a result of user interaction have significantly better interaction metrics. This is an important takeaway. Along with the significant decrease in overall volume, we see a significantly better rate of first-time allow decisions (52%) after enforcing user interaction on Nightly. The same can be observed for prompts with user interaction in our Release study, where existing users accept 24% of first-time prompts with user interaction and new users accept a whopping 56%.


Based on the outlined results we have decided to enact two new restrictions:

  • Starting from Firefox 70, replace the default “Not Now” option with “Never”, which will effectively hide the prompt on a page forever.
  • Starting from Firefox 72, require user interaction to show notification permission prompts. Prompts without interaction will only show a small icon in the URL bar.

When a permission prompt is denied by Firefox, the user still has the ability to override this automatic decision by clicking the small permission icon that appears in the address bar. This lets users enable notifications on such websites even when the prompt itself was suppressed.


Besides the clear improvements in user interaction rates that our study has shown, these restrictions were derived from a few other considerations:

  • Easy to upgrade. Requiring user interaction allows for an easy upgrade path for affected websites, while hiding annoying “on load” prompts.
  • Transparent. Unlike other heuristics (such as “did the user visit this site a lot in the past”), interaction is easy to understand for both developers and users.
  • Encourages pre-prompting. We want websites to use in-content controls to enable notifications, as long as they have an informative style and do not try to mimic native browser UI. Faking (“spoofing”) browser UI is considered a security risk and will be met with stronger enforcement in the future. A good pre-prompt follows the style of the page and adds additional context to the request. Pre-prompting, when done well, will increase the chance of users opting to receive notifications. Annoying users, as our data shows, will lead to churn.

We will release additional information and resources for web developers on our Mozilla Hacks blog.

We hope that these restrictions will lead to an overall better and less annoying user experience for all our users while retaining the functionality for those that need notifications.

The post Restricting Notification Permission Prompts in Firefox appeared first on Future Releases.


Mozilla Reps Community: Rep of the Month – October 2019

Mon, 04/11/2019 - 14:16

Please join us in congratulating Shina Dhingra, Rep of the Month for October 2019!

Shina is from Pune, Maharashtra, India. Her journey started with the Mozilla Pune community while she was in college in 2017, with Localization in Hindi and quality assurance bugs.

She’s been an active contributor to the community ever since, helping a lot of newcomers with their onboarding and giving them a better understanding of what the Mozilla community is all about.


She joined the Reps Program in February 2019 and since then she has actively participated and contributed to Common Voice, A-Frame, Localization, Add-ons, and other Open Source Contributions. She built her own project as a mentee under the Open Leaders Program, and will be organizing and hosting her own cohort called “Healthier AI” which she launched at MozFest this year.

Congratulations and keep rocking the open web!

To congratulate her, please head over to Discourse!


Nathan Froyd: evaluating bazel for building firefox, part 2

Fri, 01/11/2019 - 18:32

In our last post, we highlighted some of the advantages that Bazel would bring.  The remote execution and caching benefits Bazel brings look really attractive, but it’s difficult to tell exactly how much they would benefit Firefox.  I looked for projects that had switched to Bazel; a brief summary of each project’s experience is written below.

The Bazel rules for nodejs highlight Dataform’s switch to Bazel, which took about 2 months.  Their build involves some combination of “NPM packages, Webpack builds, Node services, and Java pipelines”.  Switching plus enabling remote caching reduced the average time for a build in CI from 30 minutes to 5 minutes; incremental builds for local development have been “reduced to seconds from minutes”.  It’s not clear whether the local development experience is hooked up to the caching infrastructure as well.

Pinterest recently wrote about their switch to Bazel for iOS.  While they call out remote caching leading to “build times [dropping] under a minute and as low as 30 seconds”, they state their “time to land code” only decreased by 27%.  I wasn’t sure how to reconcile such fast builds with (relatively) modest decreases in CI time.  Tests have gotten a lot faster, given that test results can be cached and reused if the tests in question have their transitive dependencies unchanged.

One of the most complete (relatively speaking) descriptions I found was Redfin’s switch from Maven to Bazel, for building a large amount of JavaScript modules and Java code, nearly 30,000 files in all.  Their CI builds went from 40-90 minutes to 5-6 minutes; in fairness, it must be mentioned that their Maven builds were not parallelized (for correctness reasons) whereas their Bazel builds were.  But it’s worth highlighting that they managed to do this incrementally, by generating Bazel build definitions from their Maven ones, and that the quoted build times did not enable caching.  The associated tech talk slides/video indicates builds would be roughly in the 1-2 minute range with caching, although they hadn’t deployed that yet.

None of the above accounts talked about how long the conversion took, which I found peculiar.  Both Pinterest and Redfin called out how much more reliable their builds were once they switched to Bazel; Pinterest said, “we haven’t performed a single clean build on CI in over a year.”

In some negative results, which are helpful as well, Dropbox wrote about evaluating Bazel for their Android builds.  What’s interesting here is that other parts of Dropbox are heavily invested in Bazel, so there’s a lot of in-house experience, and that Bazel was significantly faster than their current build system (assuming caching was turned on; Bazel was significantly slower for clean builds without caching).  Yet Dropbox decided to not switch to Bazel due to tooling and development experience concerns.  They did leave open the possibility of switching in the future once the ecosystem matures.

The oddly-named Bazel Fawlty describes a conversion to Bazel from Go’s native tooling, and then a switch back after a litany of problems, including slower builds (but faster tests), a poor development experience (especially on OS X), and various things not being supported in Bazel, leading to the native Go tooling still being required in some cases.  This post was also noteworthy for detailing the amount of porting effort required to switch: eight months plus “many PR’s accepted into the bazel go rules git repo”.  I haven’t used Go, but I’m willing to discount some of the negative experience here due to the native Go tools being so good.

Neither one of these negative experiences translate exactly to Firefox: different languages/ecosystems, different concerns, different scales.  But both of them cite the developer experience specifically, suggesting that not only is there a large investment required to actually do the switchover, but you also need to write tooling around Bazel to make it more convenient to use.

Finally, a 2018 BazelCon talk discusses two Google projects that made the switch to Bazel and specifically use remote caching and remote execution on Google’s public-facing cloud infrastructure: Android Studio and TensorFlow.  (You may note that this is the first instance where somebody has called out supporting remote execution as part of the switch; I think that implies getting a build to the point of supporting remote execution is more complicated than just supporting remote caching, which makes a certain amount of sense.)  Android Studio increased their test presubmit coverage by 4x, presumably by being able to run 4x as many test jobs as before thanks to remote execution.  In the same vein, TensorFlow decreased their build and test times by 80%, and they could use significantly less powerful machines to actually run the builds, given that large machines in the cloud were doing the actual heavy lifting.

Unfortunately, I don’t think expecting those same reductions in test time, were Firefox to switch to Bazel, is warranted.  I can’t speak to Android Studio, but TensorFlow has a number of unit tests whose test results can be cached.  In the Firefox context, these would correspond to cppunittests, which a) we don’t have that many of and b) don’t take that long to run.  The bulk of our tests depend in one way or another on kitchen-sink-style artifacts (e.g. libxul, the JS shell, omni.ja) which essentially depend on everything else.  We could get some reductions for OS-specific modifications; Windows-specific changes wouldn’t require re-running OS X tests, for instance, but my sense is that these sorts of changes are not common enough to lead to an 80% reduction in build + test time.  I suppose it’s also possible that we could teach Bazel that e.g. devtools changes don’t affect, say, non-devtools mochitests/reftests/etc. (presumably?), which would make more test results cacheable.

I want to believe that Bazel + remote caching (+ remote execution if we could get there) will bring Firefox build (and maybe even test) times down significantly, but the above accounts don’t exactly move the needle from belief to certainty.

Categorieën: Mozilla-nl planet

Tantek Çelik: #Redecentralize 2019 Session: Decentralized Identity & Rethinking Reputation

Fri, 01/11/2019 - 16:32

On Friday 2019-10-25 I participated in Redecentralize Conference 2019, a one-day unconference in London, England on the topics of decentralisation, privacy, autonomy, and digital infrastructure.

I gave a 3 minute lightning talk, helped run an IndieWeb standards & methods session in the first open slot of the day, and participated in two more sessions. The second open session had no Etherpad notes, so this post is written from my memory of it one week later.

Decentralized lunch

After the first open session of the day, the Redecentralize conference provided a nice informal buffet lunch for participants. Though we picked up our eats from a centralized buffet, people self-organized into their own distributed groups. There were a few folks I knew or had recently met, and many more that I had not. I sat with a few people who looked like they had just started talking, and that’s when I met Kate.

I asked if she was running a session and she said yes in the next time slot, on decentralized identity and rethinking reputation. She also noted that she wanted to approach it from a human exploration perspective rather than a technical perspective, and was looking to learn from participants. I decided I’d join, looking forward to a humans-first (rather than technology plumbing first) conversation and discussion.

Discussion circle

After lunch everyone found their way to various sessions or corners of the space to work on their own projects. The space for Kate’s session was an area in the middle of a large room, without a whiteboard or projector. About a half dozen of us assembled chairs in a rough oval to get started.

As we informally chatted, a few more people showed up and we broadened our circle. The space was a bit noisy with chatter drifting in from other sessions, yet we could hear each other if we leaned in a little. Kate started us off by asking our opinions of the subject matter, our experiences, and about existing approaches, in contrast to letting any one company control identity and reputation.

Gaming of centralized systems

We spent quite a bit of time discussing existing online and digital reputation systems, and how portable (or not) they were. China was a subject of discussion, along with the social reputation system it had put in place, which was starting to be used for various purposes. Someone provided the example of people putting their phones into little shaker machines to fake an increased step count and thereby raise their reputation. Apparently lots of people are gaming the Chinese systems in many ways.

Portability and resets

Two major concerns were brought up about decentralized reputation systems.

  1. Reputation portability. If you build reputation in one system or service, how do you transfer that reputation to another?
  2. Reset abuse. If you develop a bad reputation in a system, what is to stop you from deleting that identity, and creating a new one to reset your reputation?

No one had good answers for either. I offered one observation on the latter: as reputation systems evolve over time, the lack of any reputation, i.e. someone just starting out (or after a reset), comes to be seen as a default negative reputation that the person has to disprove. For example the old Twitter “eggs”, so called due to the default icons that Twitter (at some point) assigned to new users: a white cartoon egg on a pastel background.

Another subsequent thought: Twitter’s profile display of when someone joined has also reinforced some of this “default negative” reputation, as people are suspicious of accounts that have just recently joined Twitter and all of a sudden start posting forcefully (especially about political or breaking news stories). Are they bots or state operatives pretending to be someone they’re not? Hard to tell.

Session dynamics

While Kate did a good job keeping discussions on topic, prompting with new questions when the group appeared to rathole in some area, there were a few challenging dynamics in the group.

It looked like no one was using a laptop to take notes (myself included), emergently so (no one was told not to use their laptop). While “no laptop” meetings are often praised for focus & attention, they do have several downsides.

First, no one writes anything down, so follow-up discussions are difficult, or rather, it becomes likely that past discussions will be repeated without any new information. Caught in a loop. History repeating.

Second, with only speaking and no writing or note-taking, conversations tend to become more reactive, less thoughtful, and more about the individuals & personalities than about the subject matter.

I noticed that one participant in particular was much more forceful and spoke a lot more than anyone else in the group, asserting all kinds of domain knowledge (usually without citation or reasoning). Normally I tend to question this kind of behavior, but this time I decided to listen and observe instead. On a session about reputation, how would this person’s behavior affect their dynamic reputation in this group?

Eventually Kate was able to ask questions and prompt others who were quiet to speak-up, which was good to see.

Decentralized identity

We did not get into any deep discussions of any specific decentralized identity systems, and that was perhaps ok. Mostly there was discussion about the downsides of centrally controlled identity, and how each of us wanted more control over various aspects of our online identities.

For anyone who asked, I posited that a good way to start with decentralized identity was to buy and use a personal domain name for your primary online presence, setting it up to sign into sites, and building a reputation using that. Since you can pick the domain name, you can pick whatever facet(s) of your identity you wish to represent. It may not be perfectly distributed, however it does work today, and is a good way to explore a lot of the questions and challenges of decentralized identity.

The Nirvana Fallacy

Another challenge in discussing various systems, both critically and aspirationally, was the inability to really assess how “real” any examples were, whether they were applicable to any of us, how usable they were, or whether they were deployed in even an experimental way rather than existing only as a white paper proposal.

This was a common theme in several sessions: comparing the downsides of real existing systems with the aspirational features of conceived but unimplemented systems. I had just recently come across a name for this phenomenon and, like many things you learn about, was starting to see it everywhere: The Nirvana Fallacy. I didn’t bring it up in this session but rather tried to keep it in mind as a way to assess various comparisons.

Distributed reputation

After-lunch sessions are always a bit of a challenge. People are full or tired. I myself was already feeling a bit spent from the lightning talk and the session Kevin and I had led right after that.

All in all it was a good discussion, even though we couldn’t point to any notes or conclusions. It felt like everyone walked away having learned something from someone else, and in general people got to know each other in a semi-distributed way, starting to build reputation for future interactions.

Watching that happen in-person made me wonder if there was some way to apply a similar kind of semi-structured group discussion dynamic as a method for building reputation in the online world. Could there be some way to parse out the dynamics of individual interactions in comments or threads to reflect that back to users in the form of customized per-person-pair reputations that they could view as a recent summary or trends over the years?

Previous #Redecentralize 2019 posts

Mozilla Security Blog: Validating Delegated Credentials for TLS in Firefox

Fri, 01/11/2019 - 14:01

At Mozilla we are well aware of how fragile the Web Public Key Infrastructure (PKI) can be. From fraudulent Certification Authorities (CAs) to implementation errors that leak private keys, users, often unknowingly, are put in a position where their ability to establish trust on the Web is compromised. Therefore, in keeping with our mission to create a Web where individuals are empowered, independent and safe, we welcome ideas that are aimed at making the Web PKI more robust. With initiatives like our Common CA Database (CCADB), CRLite prototyping, and our involvement in the CA/Browser Forum, we’re committed to this objective, and this is why we embraced the opportunity to partner with Cloudflare to test Delegated Credentials for TLS in Firefox, which is currently undergoing standardization at the IETF.

As CAs are responsible for the creation of digital certificates, they dictate the lifetime of an issued certificate, as well as its usage parameters. Traditionally, end-entity certificates are long-lived, exhibiting lifetimes of more than one year. For server operators making use of Content Delivery Networks (CDNs) such as Cloudflare, this can be problematic because of the potential trust placed in CDNs regarding sensitive private key material. Of course, Cloudflare has architectural solutions for such key material, but these add unwanted latency to connections and present operational difficulties. To limit exposure, a short-lived certificate would be preferable in this setting. However, constant communication with an external CA to obtain short-lived certificates could result in poor performance or, even worse, lack of access to a service entirely.

The Delegated Credentials mechanism decentralizes the problem by allowing a TLS server to issue short-lived authentication credentials (with a validity period of no longer than 7 days) that are cryptographically bound to a CA-issued certificate. These short-lived credentials then serve as the authentication keys in a regular TLS 1.3 connection between a Firefox client and a CDN edge server situated in a low-trust zone (where the risk of compromise might be higher than usual and perhaps go undetected). This way, performance isn’t hindered and the compromise window is limited. For further technical details see this excellent blog post by Cloudflare on the subject.

See How The Experiment Works

We will soon test Delegated Credentials in Firefox Nightly via an experimental addon, called TLS Delegated Credentials Experiment. In this experiment, the addon will make a single request to a Cloudflare-managed host which supports Delegated Credentials. The Delegated Credentials feature is disabled in Firefox by default, but depending on the experiment conditions the addon will toggle it for the duration of this request. The connection result, including whether Delegated Credentials was enabled or not, gets reported via telemetry to allow for comparative study. Out of this we’re hoping to gain better insights into how effective and stable Delegated Credentials are in the real world, and, more importantly, into any negative impact on user experience (for example, increased connection failure rates or slower TLS handshake times). The study is expected to start in mid-November and run for two weeks.

For specific details on the telemetry and how measurements will take place, see bug 1564179.

See The Results In Firefox

You can open a Firefox Nightly or Beta window and navigate to about:telemetry. From here, in the top-right is a Search box, where you can search for “delegated” to find all telemetry entries from our experiment. If Delegated Credentials have been used and telemetry is enabled, you can expect to see the count of Delegated Credentials-enabled handshakes as well as the time-to-completion of each. Additionally, if the addon has run the test, you can see the test result under the “Keyed Scalars” section.

Delegated Credentials telemetry in Nightly 72

Delegated Credentials telemetry in Nightly 72

You can also read more about telemetry, studies, and Mozilla’s privacy policy by navigating to about:preferences#privacy.

See It In Action

If you’d like to enable Delegated Credentials for your own testing or use, this can be done by:

  1. In a Firefox Nightly or Beta window, navigate to about:config.
  2. Search for the “security.tls.enable_delegated_credentials” preference – the preference list will update as you type, and “delegated” is itself enough to find the correct preference.
  3. Click the Toggle button to set the value to true.
  4. Navigate to
  5. If needed, toggling the value back to false will disable Delegated Credentials.

Note that currently, use of Delegated Credentials doesn’t appear anywhere in the Firefox UI. This will change as we evolve the implementation.

We would sincerely like to thank Christopher Patton, fellow Mozillian Wayne Thayer, and the Cloudflare team, particularly Nick Sullivan and Watson Ladd for helping us to get to this point with the Delegated Credentials feature. The Mozilla team will keep you informed on the development of this feature for use in Firefox, and we look forward to sharing our results in a future blog post.




The post Validating Delegated Credentials for TLS in Firefox appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Asking Congress to Examine the Data Practices of Internet Service Providers

vr, 01/11/2019 - 12:08

At Mozilla, we work hard to ensure our users’ browsing activity is protected when they use Firefox. That is why we launched enhanced tracking protection this year – to safeguard users from the pervasive online tracking of personal data by ad networks and companies. And over the last two years, Mozilla, in partnership with other industry stakeholders, has been working to develop, standardize, and deploy DNS over HTTPS (DoH). Our goal with DoH is to protect essentially that same browsing activity from interception, manipulation, and collection in the middle of the network.

This dedication to protecting your browsing activity is why today we’ve also asked Congress to examine the privacy and security practices of internet service providers (ISPs), particularly as they relate to the domain name services (DNS) provided to American consumers. Right now these companies have access to a stream of a user’s browsing history. This is particularly concerning in light of the rollback of the broadband privacy rules, which removed guardrails for how ISPs can use your data. The same ISPs are now fighting to prevent the deployment of DoH.

These developments have raised serious questions. How is your browsing data being used by those who provide your internet service? Is it being shared with others? And do consumers understand and agree to these practices? We think it’s time Congress took a deeper look at ISP practices to figure out what exactly is happening with our data.

At Mozilla, we refuse to believe that you have to trade your privacy and control in order to enjoy the technology you love. Our hope is that a congressional review of these practices uncovers valuable insights, informs the public, and helps guide continuing efforts to draft smart and comprehensive consumer privacy legislation.

See our full letter to Congressional leaders here.

The post Asking Congress to Examine the Data Practices of Internet Service Providers appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Nick Fitzgerald: Always Bump Downwards

vr, 01/11/2019 - 08:00

When writing a bump allocator, always bump downwards. That is, allocate from high addresses, down towards lower addresses by decrementing the bump pointer. Although it is perhaps less natural to think about, it is more efficient than incrementing the bump pointer and allocating from lower addresses up to higher ones.

What is Bump Allocation?

Bump allocation is a super fast method for allocating objects. We have a chunk of memory, and we maintain a “bump pointer” within that memory. Whenever we allocate an object, we do a quick test that we have enough capacity left in the chunk, and then, assuming we have enough room, we move the bump pointer over by sizeof(object) bytes and return the pointer to the space we just reserved for the object within the chunk.

That’s it!

Here is some pseudo-code showing off the algorithm:

bump(size):
    if our capacity < size:
        fail
    else:
        self.ptr = move self.ptr over by size bytes
        return pointer to the freshly allocated space

The trade off with bump allocation is that we can’t deallocate individual objects in the general case. We can deallocate all of them en masse by resetting the bump pointer back to its initial location. We can deallocate in a LIFO, stack order by moving the bump pointer in reverse. But we can’t deallocate an arbitrary object in the middle of the chunk and reclaim its space for new allocations.
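To make the bulk-deallocation behavior concrete, here is a minimal, hypothetical sketch (not bumpalo's actual API; the `Bump` type and `reset` method are stand-ins for illustration) of freeing everything at once by resetting the bump pointer:

```rust
// Hypothetical sketch: bulk deallocation for an upward-bumping
// allocator. `Bump` is a stand-in, not the types defined below.
pub struct Bump {
    start: *mut u8,
    end: *mut u8,
    ptr: *mut u8,
}

impl Bump {
    /// Deallocate *everything* at once by moving the bump pointer
    /// back to its initial position. Every previously returned
    /// pointer is invalidated; no per-object bookkeeping is needed.
    pub fn reset(&mut self) {
        self.ptr = self.start;
    }
}

fn main() {
    let mut chunk = [0u8; 64];
    let start = chunk.as_mut_ptr();
    let end = unsafe { start.add(64) };
    // Pretend we already bump-allocated 16 bytes...
    let mut bump = Bump { start, end, ptr: unsafe { start.add(16) } };
    // ...then free them all in O(1).
    bump.reset();
    assert_eq!(bump.ptr, bump.start);
}
```

For a downward-bumping allocator, `reset` would instead move the pointer back to `end`.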

Finally, notice that the chunk of memory we are bump allocating within is always split in two: the side holding allocated objects and the side with free memory. The bump pointer separates the two sides. Furthermore, note that I haven’t defined which side of the bump pointer is free or allocated space, and I’ve carefully avoided saying whether the bump pointer is incremented or decremented.

Bumping Upwards

First, let’s consider what we shouldn’t do: bump upwards by initializing the bump pointer at the low end of our memory chunk and incrementing the bump pointer on each allocation.

We begin with a struct that holds the start and end addresses of our chunk of memory, as well as our current bump pointer:

pub struct BumpUp {
    // A pointer to the first byte of our memory chunk.
    start: *mut u8,
    // A pointer to the last byte of our memory chunk.
    end: *mut u8,
    // The bump pointer. At all times, we maintain the
    // invariant that `start <= ptr <= end`.
    ptr: *mut u8,
}

Constructing our upwards bump allocator requires giving it the start and end pointers, and it will initialize its bump pointer to the start address:

impl BumpUp {
    pub unsafe fn new(
        start: *mut u8,
        end: *mut u8,
    ) -> BumpUp {
        assert!(start as usize <= end as usize);
        let ptr = start;
        BumpUp { start, end, ptr }
    }
}

To allocate an object, we will begin by grabbing the current bump pointer, and saving it in a temporary variable: this is going to be the pointer to the newly allocated space. Then we increment the bump pointer by the requested size, and check if it is still less than end. If so, then we have capacity for the allocation, and can commit the new bump pointer to self.ptr and return the temporary pointing to the freshly allocated space.

But first, there is one thing that the pseudo-code ignored, but which a real implementation cannot: alignment. We need to round up the initial bump pointer to a multiple of the requested alignment before we compute the new bump pointer by adding the requested size.0

Put all that together, and it looks like this:

impl BumpUp {
    pub unsafe fn alloc(
        &mut self,
        size: usize,
        align: usize,
    ) -> *mut u8 {
        debug_assert!(align > 0);
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        // Round the bump pointer up to the requested
        // alignment. See the footnote for details.
        let aligned = (ptr + align - 1) & !(align - 1);

        let new_ptr = aligned + size;
        let end = self.end as usize;
        if new_ptr > end {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        aligned as *mut u8
    }
}

If we compile this allocation routine to x86-64 with optimizations, we get the following code, which I’ve lightly annotated:

; Incoming arguments:
; * `rdi`: pointer to `BumpUp`
; * `rsi`: The allocation's `size`
; * `rdx`: The allocation's `align`
;
; Outgoing result:
; * `rax`: the pointer to the allocation or null
alloc_up:
    mov    rax, rdx
    mov    rcx, QWORD PTR [rdi+0x10]
    add    rcx, rdx
    add    rcx, 0xffffffffffffffff
    neg    rax
    and    rax, rcx
    add    rsi, rax
    cmp    rsi, QWORD PTR [rdi+0x8]
    ja     .return_null
    mov    QWORD PTR [rdi+0x10], rsi
    ret
.return_null:
    xor    eax, eax
    ret

I’m not going to explain each individual instruction. What’s important to appreciate here is that this is just a small handful of fast instructions with only a single branch to handle the not-enough-capacity case. This is what makes bump allocation so fast — great!

But before we get too excited, there is another practicality to consider: to maintain memory safety, we must handle potential integer overflows in the allocation procedure, or else we could have bugs where we return pointers outside the bounds of our memory chunk. No good!

There are two opportunities for overflow we must take care of:

  1. If the requested allocation’s size is large enough, aligned + size can overflow.

  2. If the requested allocation’s alignment is large enough, the ptr + align - 1 sub-expression we use when rounding up to the alignment can overflow.

To handle both these cases, we will use checked addition and return a null pointer if either addition overflows. Here is the new Rust source code:

// Try and unwrap an `Option`, returning a null pointer
// if the option is `None`.
macro_rules! try_null {
    ( $e:expr ) => {
        match $e {
            None => return std::ptr::null_mut(),
            Some(e) => e,
        }
    };
}

impl BumpUp {
    pub unsafe fn alloc(
        &mut self,
        size: usize,
        align: usize,
    ) -> *mut u8 {
        debug_assert!(align > 0);
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        // Round the bump pointer up to the requested
        // alignment.
        let aligned =
            try_null!(ptr.checked_add(align - 1)) & !(align - 1);

        let new_ptr = try_null!(aligned.checked_add(size));

        let end = self.end as usize;
        if new_ptr > end {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        aligned as *mut u8
    }
}

Now that we’re handling overflows in addition to the alignment requirements, let’s take a look at the x86-64 code that rustc and LLVM produce for the function now:

; Incoming arguments:
; * `rdi`: pointer to `BumpUp`
; * `rsi`: The allocation's `size`
; * `rdx`: The allocation's `align`
;
; Outgoing result:
; * `rax`: the pointer to the allocation or null
alloc_up:
    lea    rax, [rdx-0x1]
    add    rax, QWORD PTR [rdi+0x10]
    jb     .return_null
    neg    rdx
    and    rax, rdx
    add    rsi, rax
    jb     .return_null
    cmp    rsi, QWORD PTR [rdi+0x8]
    ja     .return_null
    mov    QWORD PTR [rdi+0x10], rsi
    ret
.return_null:
    xor    eax, eax
    ret

Now there are three conditional branches rather than one. The two new branches are from those two new overflow checks that we added. Less than ideal.

Can bumping downwards do better?

Bumping Downwards

Now let’s implement a bump allocator where the bump pointer is initialized at the end of the memory chunk and is decremented on allocation, so that it moves downwards towards the start of the memory chunk.

The struct is identical to the previous version:

#[repr(C)]
pub struct BumpDown {
    // A pointer to the first byte of our memory chunk.
    start: *mut u8,
    // A pointer to the last byte of our memory chunk.
    end: *mut u8,
    // The bump pointer. At all times, we have the
    // invariant that `start <= ptr <= end`.
    ptr: *mut u8,
}

Constructing a BumpDown is similar to constructing a BumpUp except we initialize the ptr to end rather than start:

impl BumpDown {
    pub unsafe fn new(start: *mut u8, end: *mut u8) -> BumpDown {
        assert!(start as usize <= end as usize);
        let ptr = end;
        BumpDown { start, end, ptr }
    }
}

When we were allocating by incrementing the bump pointer, the original bump pointer value before it was incremented pointed at the space that was about to be reserved for the allocation. When we are allocating by decrementing the bump pointer, the original bump pointer is pointing at either the end of the memory chunk, or at the last allocation we made. What we want to return is the value of the bump pointer after we decrement it down, at which time it will be pointing at our allocated space.

First we subtract the allocation size from the bump pointer. This subtraction might overflow, so we check for that and return a null pointer if that is the case, just like we did in the previous, upward-bumping function. Then, we round that down to the nearest multiple of align to ensure that the allocated space has the object’s alignment. At this point, we check if we are down past the start of our memory chunk, in which case we don’t have the capacity to fulfill this allocation, and we return null. Otherwise, we update the bump pointer to its new value and return the pointer!

impl BumpDown {
    pub unsafe fn alloc(
        &mut self,
        size: usize,
        align: usize,
    ) -> *mut u8 {
        debug_assert!(align > 0);
        debug_assert!(align.is_power_of_two());

        let ptr = self.ptr as usize;

        let new_ptr = try_null!(ptr.checked_sub(size));

        // Round down to the requested alignment.
        let new_ptr = new_ptr & !(align - 1);

        let start = self.start as usize;
        if new_ptr < start {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        self.ptr
    }
}

And here is the x86-64 code generated for this downward-bumping allocation routine!

; Incoming arguments:
; * `rdi`: pointer to `BumpDown`
; * `rsi`: The allocation's `size`
; * `rdx`: The allocation's `align`
;
; Outgoing result:
; * `rax`: the pointer to the allocation or null
alloc_down:
    mov    rax, QWORD PTR [rdi+0x10]
    sub    rax, rsi
    jb     .return_null
    neg    rdx
    and    rax, rdx
    cmp    rax, QWORD PTR [rdi]
    jb     .return_null
    mov    QWORD PTR [rdi+0x10], rax
    ret
.return_null:
    xor    eax, eax
    ret

Because rounding down doesn’t require an addition or subtraction operation, it doesn’t have an associated overflow check. That means one less conditional branch in the generated code, and downward bumping only has two conditional branches versus the three that upward bumping has.

Additionally, because we don’t need to save the original bump pointer value, this version uses fewer registers than the upward-bumping version. Bump allocation functions are designed to be fast paths that are inlined into callers, which means that downward bumping is creating less register pressure at every call site.

Finally, this downwards-bumping version is implemented with eleven instructions, while the upwards-bumping version requires thirteen instructions. In general, fewer instructions implies a shorter run time.


I recently switched the bumpalo crate from bumping upwards to bumping downwards. It has a nice, little micro-benchmark suite written with Criterion, the excellent, statistics-driven benchmarking framework. With Criterion’s built-in support for defining a baseline measurement and comparing an alternate implementation of the code against it, I compared the new, downwards-bumping implementation against the original, upwards-bumping implementation.

The new, downwards-bumping implementation has up to 19% better allocation throughput than the original, upwards-bumping implementation! We’re down to 2.7 nanoseconds per allocation.

The plot below shows the probability of allocating 10,000 small objects taking a certain amount of time. The red curve represents the old, upwards-bumping implementation, while the blue curve shows the new, downwards-bumping implementation. The lines represent the mean time.

You can view the complete, nitty-gritty benchmark results in the pull request.

The One Downside: Losing a realloc Fast Path

bumpalo doesn’t only provide an allocation method, it also provides a realloc method to resize an existing allocation. realloc is O(n) because in the worst-case scenario it needs to allocate a whole new region of memory and copy the data from the old to the new region. But the old, upwards-bumping implementation had a fast path for growing the last allocation: it would add the delta size to the bump pointer, leaving the allocation in place and avoiding that copy. The new, downwards-bumping implementation also has a fast path for resizing the last allocation, but even if we reuse that space, the start of the allocated region of memory has shifted, and so we can’t avoid the data copy.

The loss of that fast path leads to a 4% slow down in our realloc benchmark that formats a string into a bump-allocated buffer, triggering a number of reallocs as the string is constructed. We felt that this was worth the trade off for faster allocation.

Less Work with More Alignment?

It is rare for types to require more than word alignment. We could enforce a minimum alignment on the bump pointer at all times that is greater than or equal to the vast majority of our allocations’ alignment requirements. If our allocation routine is monomorphized for the type of the allocation it’s making, or it is aggressively inlined — and it definitely should be — then we should be able to completely avoid generating any code to align the bump pointer in most cases, including the conditional branch on overflow if we are rounding up for upwards bumping.

impl BumpDown {
    pub unsafe fn alloc<T>(&mut self) -> *mut MaybeUninit<T> {
        let ptr = self.ptr as usize;

        // Ensure we always keep the bump pointer
        // `MIN_ALIGN`-aligned by rounding the size up. This
        // should be boiled away into a constant by the compiler
        // after monomorphization.
        let size = (size_of::<T>() + MIN_ALIGN - 1) & !(MIN_ALIGN - 1);

        let new_ptr = try_null!(ptr.checked_sub(size));

        // If necessary, round down to the requested alignment.
        // Again, this `if`/`else` should be boiled away by the
        // compiler after monomorphization.
        let new_ptr = if align_of::<T>() > MIN_ALIGN {
            new_ptr & !(align_of::<T>() - 1)
        } else {
            new_ptr
        };

        let start = self.start as usize;
        if new_ptr < start {
            // Didn't have enough capacity!
            return std::ptr::null_mut();
        }

        self.ptr = new_ptr as *mut u8;
        self.ptr as *mut _
    }
}

The trade off is extra memory overhead from introducing wasted space between small allocations that don’t require that extra alignment.
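As a rough illustration of that overhead, here is a small sketch (assuming a hypothetical MIN_ALIGN of 8; the actual minimum would depend on your target and workload) of how much padding rounding every allocation's size up introduces:

```rust
// Illustrative only: how much padding a MIN_ALIGN of 8 introduces
// for small allocations, relative to their exact sizes.
const MIN_ALIGN: usize = 8; // assumed word alignment

// Round a requested size up to the next multiple of MIN_ALIGN.
fn padded_size(size: usize) -> usize {
    (size + MIN_ALIGN - 1) & !(MIN_ALIGN - 1)
}

fn main() {
    assert_eq!(padded_size(1), 8);  // 7 bytes wasted
    assert_eq!(padded_size(3), 8);  // 5 bytes wasted
    assert_eq!(padded_size(8), 8);  // no waste
    assert_eq!(padded_size(9), 16); // 7 bytes wasted
}
```

For many small allocations (e.g. lots of one- and two-byte objects), this waste can add up; for word-sized-and-larger objects it is usually negligible.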


If you are writing your own bump allocator, you should bump downwards: initialize the bump pointer to the end of the chunk of memory you are allocating from within, and decrement it on each allocation so that it moves down towards the start of the memory chunk. Downwards bumping requires fewer registers, fewer instructions, and fewer conditional branches. Ultimately, that makes it faster than bumping upwards.

The one exception is if, for some reason, you frequently use realloc to grow the last allocation you made, in which case you might get more out of a fast path for growing the last allocation in place without copying any data. And if you do decide to bump upwards, then you should strongly consider enforcing a minimum alignment on the bump pointer to recover some of the performance that you’re otherwise leaving on the table.

Finally, I’d like to thank Jim Blandy, Alex Crichton, Jeena Lee, and Jason Orendorff for reading an early draft of this blog post, for discussing these ideas with me, and for being super friends :)

0 The simple way to round n up to a multiple of align is

(n + align - 1) / align * align

Consider the numerator: n + align - 1. This is ensuring that if there is any remainder for n / align, then the result of the division sub-expression is one greater than n / align, and that otherwise we get exactly the same result as n / align due to integer division rounding off the remainder. In other words, we only round up if n is not aligned to align.

However, we know align is a power of two, and therefore anything / align is equivalent to anything >> log2(align) and anything * align is equivalent to anything << log2(align). We can therefore rewrite our expression into:

(n + align - 1) >> log2(align) << log2(align)

But shifting a value right by some number of bits b and then shifting it left by that same number of bits b is equivalent to clearing the bottom b bits of the number. We can clear the bottom b bits of a number by bit-wise and’ing the number with the bit-wise not of 2^b - 1. Plugging this into our equation and simplifying, we get:

(n + align - 1) >> log2(align) << log2(align)
    = (n + align - 1) & !(2^log2(align) - 1)
    = (n + align - 1) & !(align - 1)

And now we have our final version of rounding up to a multiple of a power of two!
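As a quick sanity check, the bit-twiddling form can be compared against the naive division-based form for power-of-two alignments:

```rust
// Compare the naive, division-based round-up against the
// bit-twiddling version derived above. They must agree for
// every power-of-two alignment.
fn round_up_naive(n: usize, align: usize) -> usize {
    (n + align - 1) / align * align
}

fn round_up_bits(n: usize, align: usize) -> usize {
    debug_assert!(align.is_power_of_two());
    (n + align - 1) & !(align - 1)
}

fn main() {
    for align in [1usize, 2, 4, 8, 16, 4096] {
        for n in 0..100 {
            assert_eq!(round_up_naive(n, align), round_up_bits(n, align));
        }
    }
    // Spot checks: already-aligned values are left alone.
    assert_eq!(round_up_bits(13, 8), 16);
    assert_eq!(round_up_bits(16, 8), 16);
}
```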

If you find these bit twiddling hacks fun, definitely find yourself a copy of Hacker’s Delight. It’s a wonderful book!

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Completing the transition to the new borrow checker

vr, 01/11/2019 - 01:00

For most of 2018, we've been issuing warnings about various bugs in the borrow checker that we plan to fix -- about two months ago, in the current Rust nightly, those warnings became hard errors. In about two weeks, when the nightly branches to become beta, those hard errors will be in the beta build, and they will eventually hit stable on December 19th, as part of Rust 1.40.0. If you're testing with Nightly, you should be all set -- but otherwise, you may want to go and check to make sure your code still builds. If not, we have advice for fixing common problems below.

Background: the non-lexical lifetime transition

When we released Rust 2018 in Rust 1.31, it included a new version of the borrow checker, one that implemented "non-lexical lifetimes". This new borrow checker did a much more precise analysis than the original, allowing us to eliminate a lot of unnecessary errors and make Rust easier to use. I think most everyone who was using Rust 2015 can attest that this shift was a big improvement.

The new borrow checker also fixed a lot of bugs

What is perhaps less well understood is that the new borrow checker implementation also fixed a lot of bugs. In other words, the new borrow checker did not just accept more programs -- it also rejected some programs that were only accepted in the first place due to memory unsafety bugs in the old borrow checker!

Until recently, those fixed bugs produced warnings, not errors

Obviously, we don't want to accept programs that could undermine Rust's safety guarantees. At the same time, as part of our commitment to stability, we try to avoid making sudden bug fixes that will affect a lot of code. Whenever possible, we prefer to "phase in" those changes gradually. We usually begin with "Future Compatibility Warnings", for example, before moving those warnings to hard errors (sometimes a small bit at a time). Since the bug fixes to the borrow checker affected a lot of crates, we knew we needed a warning period before we could make them into hard errors.

To implement this warning period, we kept two copies of the borrow checker around (this is a trick we use quite frequently, actually). The new checker ran first. If it found errors, we didn't report them directly: instead, we ran the old checker in order to see if the crate used to compile before. If so, we reported the errors as Future Compatibility Warnings, since we were changing something that used to compile into errors.

All good things must come to an end; and bad ones, too

Over time we have been slowly transitioning those future compatibility warnings into errors, a bit at a time. About two months ago, we decided that the time had come to finish the job. So, over the course of two PRs, we converted all remaining warnings to errors and then removed the old borrow checker implementation.

What this means for you

If you are testing your package with nightly, then you should be fine. In fact, even if you build on stable, we always recommend that you test your builds in CI with the nightly build, so that you can identify upcoming issues early and report them to us.

Otherwise, you may want to check your dependencies. When we decided to remove the old borrow checker, we also analyzed which crates would stop compiling. For anything that seemed to be widely used, we made sure that there were newer versions of that crate available that do compile (for the most part, this had all already happened during the warning period). But if you have those older versions in your Cargo.lock file, and you are only using stable builds, then you may find that your code no longer builds once 1.40.0 is released -- you will have to upgrade the dependency.

The most common crates that were affected are the following:

  • url version 1.7.0 -- you can upgrade to 1.7.2, though you'd be better off upgrading to 2.1.0
  • nalgebra version 0.16.13 -- you can upgrade to 0.16.14, though you'd be better off upgrading to 0.19.0
  • rusttype version 0.2.0 to 0.2.3 -- you can upgrade to 0.2.4, though you'd be better off upgrading to 0.8.1

You can find out which crates you rely upon using the cargo-tree command. If you find that you do rely (say) on url 1.7.0, you can upgrade to 1.7.2 by executing:

cargo update --package url --precise 1.7.2

Want to learn more?

If you'd like to learn more about the kinds of bugs that were fixed -- or if you are seeing errors in your code that you need to fix -- take a look at this excellent blog post by Felix Klock, which goes into great detail.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Upcoming changes to extension sideloading

do, 31/10/2019 - 22:27

Sideloading is a method of installing an extension in Firefox by adding an extension file to a special location using an executable application installer. This installs the extension in all Firefox instances on a computer.

Sideloaded extensions frequently cause issues for users since they did not explicitly choose to install them and are unable to remove them from the Add-ons Manager. This mechanism has also been employed in the past to install malware into Firefox. To give users more control over their extensions, support for sideloaded extensions will be discontinued. 

November 1 update: we’ve heard some feedback expressing confusion about how this change will give more control to Firefox users. Ever since we implemented abuse reporting in Firefox 68, the top kind of report we receive by far has been extension installs that weren’t expected and couldn’t be removed, and the extensions being reported are known to be distributed through sideloading. With this change, we are enforcing more transparency in the installation process, by letting users choose whether they want to install an application companion extension or not, and letting them remove it when they want to. Developers will still be free to self-distribute extensions on the web, and users will still be able to install self-distributed extensions. Enterprise administrators will continue to be able to deploy extensions to their users via policies. Other forms of automatic extension deployment like the ones used for some Linux distributions and applications like Selenium may be impacted by these changes. We’re still investigating some technical details around these cases and will try to strike the right balance between user choice and minimal disruption.

During the release cycle for Firefox version 73, which goes into pre-release channels on December 3, 2019 and into release on February 11, 2020, Firefox will continue to read sideloaded files, but they will be copied over to the user’s individual profile and installed as regular add-ons. Sideloading will stop being supported in Firefox version 74, which will be released on March 10, 2020. The transitional stage in Firefox 73 will ensure that no installed add-ons will be lost, and end users will gain the ability to remove them if they chose to.

If you self-distribute your extension via sideloading, please update your install flows and direct your users to download your extension through a web property that you own, or through addons.mozilla.org (AMO). Please note that all extensions must meet the requirements outlined in our Add-on Policies and Developer Agreement. If you choose to continue self-distributing your extension, make sure that new versions use an update URL to keep users up-to-date. Instructions for distributing an extension can be found in our Extension Workshop document repository.

If you have any questions, please head to our community forum.

The post Upcoming changes to extension sideloading appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Facebook Is Still Failing at Ad Transparency (No Matter What They Claim)

do, 31/10/2019 - 20:58

Yesterday, Jack Dorsey made a bold statement: Twitter will cease all political advertising on the platform. “Internet political ads present entirely new challenges to civic discourse: machine learning-based optimization of messaging and micro-targeting, unchecked misleading information, and deep fakes. All at increasing velocity, sophistication, and overwhelming scale,” he tweeted.

Later that day, Sheryl Sandberg responded: Facebook doesn’t have to cease political advertising… because the platform is “focused and leading on transparency.” Sandberg cited Facebook’s ad archive efforts, which ostensibly allow researchers to study the provenance and impact of political ads.

To be clear: Facebook is still falling short on its transparency commitments. Further, even perfect transparency wouldn’t change the fact that Facebook is accepting payment to promote dangerous and untrue ads. 

Some brief history: Because of the importance of transparency in the political ad arena, Mozilla has been closely analyzing Facebook’s ad archive for over a year, and assessing its ability to provide researchers and others with meaningful information.

In February, Mozilla and 37 civil society organizations urged Facebook to provide better transparency into political advertising on their platform. Then, in March, Mozilla and leading disinformation researchers laid out exactly what an effective ad transparency archive should look like.

But when Facebook finally released its ad transparency API in March, it was woefully ineffective. It met just two of experts’ five minimum guidelines. Further, a Mozilla researcher uncovered a long list of bugs and shortcomings that rendered the API nearly useless.

The New York Times agreed: “Ad Tool Facebook Built to Fight Disinformation Doesn’t Work as Advertised,” reads a July headline. The article continues: “The social network’s new ad library is so flawed, researchers say, that it is effectively useless as a way to track political messaging.”

Since that time, Mozilla has confirmed that Facebook has made small changes in the API’s functionality — but we still judge the tool to be fundamentally flawed for its intended purpose of providing transparency and a data source for rigorous research.

Rather than deceptively promoting their failed API, Facebook must heed researchers’ advice and commit to truly transparent political advertising. If they can’t get that right, maybe they shouldn’t be running political ads at all for the time being.

The post Facebook Is Still Failing at Ad Transparency (No Matter What They Claim) appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Paul McLanahan: The Lounge on Dokku

Thu, 31/10/2019 - 18:44

Mozilla has hosted an enterprise instance of IRCCloud for several years now, and it’s been a great client to use with our IRC network. IRCCloud has deprecated their enterprise product, so Mozilla recently decommissioned our instance. I then saw several colleagues praising The Lounge as a good self-hosted alternative, and I became even more interested when I saw that the project maintains a docker image distribution of their releases. I now have an instance running and I’m using it via this client, and I agree with my colleagues: it’s a decent replacement.


Anne van Kesteren: Shadow tree encapsulation theory

Thu, 31/10/2019 - 12:02

A long time ago Maciej wrote down five types of encapsulation for shadow trees (i.e., node trees that are hidden in the shadows from the document tree):

  1. Encapsulation against accidental exposure — DOM nodes from the shadow tree are not leaked via pre-existing generic APIs — for example, events flowing out of a shadow tree don't expose shadow nodes as the event target.
  2. Encapsulation against deliberate access — no API is provided which lets code outside the component poke at the shadow DOM. Only internals that the component chooses to expose are exposed.
  3. Inverse encapsulation — no API is provided which lets code inside the component see content from the page embedding it (this would have the effect of something like sandboxed iframes or Caja).
  4. Isolation for security purposes — it is strongly guaranteed that there is no way for code outside the component to violate its confidentiality or integrity.
  5. Inverse isolation for security purposes — it is strongly guaranteed that there is no way for code inside the component to violate the confidentiality or integrity of the embedding page.

Types 3 through 5 do not have any kind of support and type 4 and 5 encapsulation would be hard to pull off due to Spectre. User agents typically use a weaker variant of type 4 for their internal controls, such as the video and input elements, that does not protect confidentiality. The DOM and HTML standards provide type 1 and 2 encapsulation to web developers, and type 2 mostly due to Apple and Mozilla pushing rather hard for it. It might be worth providing an updated definition for the first two as we’ve come to understand them:

  1. Open shadow trees — no standardized web platform API provides access to nodes in an open shadow tree, except APIs that have been explicitly named and designed to do so (e.g., composedPath()). Nothing should be able to observe these shadow trees other than through those designated APIs (or “monkey patching”, i.e., modifying objects). Limited form of information hiding, but no integrity or confidentiality.
  2. Closed shadow trees — very similar to open shadow trees, except that designated APIs also do not get access to nodes in a closed shadow tree.

Type 2 encapsulation gives component developers control over what remains encapsulated and what is exposed: you need to take all your users into account and expose the best possible public API for them. At the same time, it protects you from folks taking a dependency on the guts of the component, aspects you might want to refactor or add functionality to over time. This is much harder with type 1 encapsulation, as there will be APIs that can reach into the details of your component, and if users rely on them you cannot refactor without updating all the callers.

Now, both type 1 and 2 encapsulation can be circumvented, e.g., by a script changing the attachShadow() method or mutating another builtin that the component has taken a dependency on. I.e., there is no integrity and as they run in the same origin there is no security boundary either. The limited form of information hiding is primarily a maintenance boon and a way to manage complexity. Maciej addresses this as well:

If the shadow DOM is exposed, then you have the following risks:

  1. A page using the component starts poking at the shadow DOM because it can — perhaps in a rarely used code path.
  2. The component is updated, unaware that the page is poking at its guts.
  3. Page adopts new version of component.
  4. Page breaks.
  5. Page author blames component author or rolls back to old version.

This is not good. Information hiding and hiding of implementation details are key aspects of encapsulation, and are good software engineering practices.
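The circumvention point above (a script patching the attachShadow() method before any component code runs) can be sketched even outside a browser. FakeElement and ShadowRoot below are minimal illustrative stand-ins for the real DOM interfaces, used only so the sketch runs standalone; the patching pattern itself is exactly what a page script could do against the real Element.prototype:

```javascript
// Why closed shadow trees offer information hiding but no integrity:
// whoever runs first can patch attachShadow and capture every root.
class ShadowRoot {
  constructor(mode) { this.mode = mode; }
}

class FakeElement {
  attachShadow({ mode }) {
    const root = new ShadowRoot(mode);
    // Open roots are exposed via the shadowRoot accessor; closed ones are not.
    this.shadowRoot = mode === 'open' ? root : null;
    return root;
  }
}

// A snooping script patches the method before any component code runs.
const capturedRoots = [];
const original = FakeElement.prototype.attachShadow;
FakeElement.prototype.attachShadow = function (init) {
  const root = original.call(this, init);
  capturedRoots.push(root); // closed or not, the snooper now holds the root
  return root;
};

// Component code believes its tree is closed…
const host = new FakeElement();
host.attachShadow({ mode: 'closed' });

console.log(host.shadowRoot);       // null — type 2 encapsulation holds…
console.log(capturedRoots[0].mode); // "closed" — …yet the patch saw the root
```

Since both scripts run in the same origin, nothing prevents this; the value of type 2 encapsulation is managing complexity, not enforcing a security boundary.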


Dave Townsend: Creating HTML content with a fixed aspect ratio without the padding trick

Wed, 30/10/2019 - 18:01

It seems to be a common problem: you want to display some content on the web with a certain aspect ratio, but you don’t know the size you will be displaying it at. How do you do this? CSS doesn’t really have the tools to do the job well currently (there are proposals). In my case I want to display a video and its associated controls as large as possible inside a space whose size I don’t know. The size of the video also varies depending on the one being displayed.

Padding height

The answer to this, according to almost all the searching I’ve done, is the padding-top/bottom trick. For reasons I don’t understand, when you use relative lengths (percentages) with the CSS padding-top and padding-bottom properties, the values are calculated based on the width of the element’s containing block. So padding-top: 100% gives you padding equal to that width. Weird. So you can fairly easily create a box with a height calculated from its width and from there display content at whatever aspect ratio you choose. But there’s an inherent problem here: you need to know the width of the box in the first place, or at least be able to constrain it based on something. In my case the aspect ratio of the video and the size of the container are both unknown. In some cases I need to constrain the width and calculate the height, but in others I need to constrain the height and calculate the width, which is where this trick fails.


There is one straightforward solution. The CSS object-fit property allows you to scale up content to the largest size possible for the space allocated. This is perfect for my needs, except that it only works for replaced content like videos and images. In my case I also need to overlay some controls on top and I won’t know where to position them unless they are inside a box the size of the video.

The solution?

So what I need is something where I can create a box with set sizes and then scale both width and height to the largest that fit entirely in the container. What do we have on the web that can do that … oh yes, SVG. In SVG you can define the viewport for your content and any shapes you like inside with SVG coordinates and then scale the entire SVG viewport using CSS properties. I want HTML content to scale here and luckily SVG provides the foreignObject element which lets you define a rectangle in SVG coordinates that contains non-SVG content, such as HTML! So here is what I came up with:

<!DOCTYPE html>
<html>
  <head>
    <style type="text/css">
      html, body, svg, div {
        height: 100%;
        width: 100%;
        margin: 0;
        padding: 0;
      }

      div {
        background: red;
      }
    </style>
  </head>
  <body>
    <svg viewBox="0 0 4 3">
      <foreignObject x="0" y="0" width="100%" height="100%">
        <div></div>
      </foreignObject>
    </svg>
  </body>
</html>

This is pretty straightforward. It creates an SVG document with a viewport with a 4:3 aspect ratio, a foreignObject container that fills the viewport, and then a div that fills that. What you end up with is a div with a 4:3 aspect ratio. While this demo shows it working against the full page, it seems to work anywhere with constraints on either height, width, or both, such as in a flex or grid layout. Obviously changing the viewBox lets you get any aspect ratio you like; setting it to the size of the video gives me exactly what I want.

You can see it working over on codepen.


William Lachance: Using BigQuery JavaScript UDFs to analyze Firefox telemetry for fun & profit

Wed, 30/10/2019 - 16:11

For the last year, we’ve been gradually migrating our backend Telemetry systems from AWS to GCP. I’ve been helping out here and there with this effort, most recently porting a job we used to detect slow tab spinners in Firefox nightly, which produced a small dataset that feeds a small adhoc dashboard which Mike Conley maintains. This was a relatively small task as things go, but it highlighted some features and improvements which I think might be broadly interesting, so I decided to write up a small blog post about it.

Essentially all this dashboard tells you is what percentage of the Firefox nightly population saw a tab spinner over the past 6 months, and of those that did see a tab spinner, what the severity was. In short, we’re just trying to make sure that there are no major regressions of user experience (and also that efforts to improve things bore fruit):

Pretty simple stuff, but getting the data necessary to produce this kind of dashboard used to be anything but trivial: while some common business/product questions could be answered by a quick query to clients_daily, getting engineering-specific metrics like this usually involved trawling through gigabytes of raw heka-encoded blobs using an Apache Spark cluster and then extracting the relevant information out of the telemetry probe histograms (in this case, FX_TAB_SWITCH_SPINNER_VISIBLE_MS and FX_TAB_SWITCH_SPINNER_VISIBLE_LONG_MS) contained therein.

The code itself was rather complicated (take a look, if you dare) but even worse, running it could get very expensive. We had a 14 node cluster churning through this script daily, and it took on average about an hour and a half to run! I don’t have the exact cost figures on hand (and am not sure if I’d be authorized to share them if I did), but based on a back of the envelope sketch, this one single script was probably costing us somewhere on the order of $10-$40 a day (that works out to between $3650-$14600 a year).

With our move to BigQuery, things get a lot simpler! Thanks to the combined effort of my team and data operations[1], we now produce “stable” ping tables on a daily basis with all the relevant histogram data (stored as JSON blobs), queryable using relatively vanilla SQL. In this case, the data we care about is in telemetry.main (named after the main ping, appropriately enough). With the help of a small JavaScript UDF function, all of this data can easily be extracted into a table inside a single SQL query scheduled by

CREATE TEMP FUNCTION udf_js_json_extract_highest_long_spinner (input STRING)
RETURNS INT64
LANGUAGE js AS """
  if (input == null) {
    return 0;
  }
  var result = JSON.parse(input);
  var valuesMap = result.values;
  var highest = 0;
  for (var key in valuesMap) {
    var range = parseInt(key);
    if (valuesMap[key]) {
      highest = range > 0 ? range : 1;
    }
  }
  return highest;
""";

SELECT build_id,
  sum(case when highest >= 64000 then 1 else 0 end) as v_64000ms_or_higher,
  sum(case when highest >= 27856 and highest < 64000 then 1 else 0 end) as v_27856ms_to_63999ms,
  sum(case when highest >= 12124 and highest < 27856 then 1 else 0 end) as v_12124ms_to_27855ms,
  sum(case when highest >= 5277 and highest < 12124 then 1 else 0 end) as v_5277ms_to_12123ms,
  sum(case when highest >= 2297 and highest < 5277 then 1 else 0 end) as v_2297ms_to_5276ms,
  sum(case when highest >= 1000 and highest < 2297 then 1 else 0 end) as v_1000ms_to_2296ms,
  sum(case when highest > 0 and highest < 50 then 1 else 0 end) as v_0ms_to_49ms,
  sum(case when highest >= 50 and highest < 100 then 1 else 0 end) as v_50ms_to_99ms,
  sum(case when highest >= 100 and highest < 200 then 1 else 0 end) as v_100ms_to_199ms,
  sum(case when highest >= 200 and highest < 400 then 1 else 0 end) as v_200ms_to_399ms,
  sum(case when highest >= 400 and highest < 800 then 1 else 0 end) as v_400ms_to_799ms,
  count(*) as count
from (
  select build_id, client_id, max(greatest(highest_long, highest_short)) as highest
  from (
    SELECT SUBSTR(application.build_id, 0, 8) as build_id,
      client_id,
      udf_js_json_extract_highest_long_spinner(payload.histograms.FX_TAB_SWITCH_SPINNER_VISIBLE_LONG_MS) AS highest_long,
      udf_js_json_extract_highest_long_spinner(payload.histograms.FX_TAB_SWITCH_SPINNER_VISIBLE_MS) as highest_short
    FROM telemetry.main
    WHERE normalized_channel='nightly'
      AND normalized_os='Windows'
      AND application.build_id > FORMAT_DATE("%Y%m%d", DATE_SUB(CURRENT_DATE(), INTERVAL 2 QUARTER))
      AND DATE(submission_timestamp) >= DATE_SUB(CURRENT_DATE(), INTERVAL 2 QUARTER))
  group by build_id, client_id)
group by build_id;
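The UDF’s bucket-extraction logic is easy to sanity-check outside BigQuery. This standalone replica mirrors the function above (the histogram payloads here are made-up samples, not real telemetry); it relies on JavaScript iterating integer-like object keys in ascending order, which is also what makes the UDF return the highest non-empty bucket:

```javascript
// Local replica of the BigQuery JS UDF above, for sanity checking.
function extractHighest(input) {
  if (input == null) {
    return 0;
  }
  var valuesMap = JSON.parse(input).values;
  var highest = 0;
  for (var key in valuesMap) {
    var range = parseInt(key, 10);
    if (valuesMap[key]) {            // only non-empty buckets count
      highest = range > 0 ? range : 1;
    }
  }
  return highest;
}

// Highest non-empty bucket wins (integer-like keys iterate ascending).
console.log(extractHighest(JSON.stringify({ values: { "0": 5, "2297": 2, "64000": 1 } }))); // 64000

// Empty buckets are skipped; the 0 bucket maps to 1; a missing histogram yields 0.
console.log(extractHighest(JSON.stringify({ values: { "0": 3, "1000": 0 } }))); // 1
console.log(extractHighest(null)); // 0
```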

In addition to being much simpler, this new job is also far cheaper. The last run of it scanned just over 1 TB of data, meaning it cost us just over $5. Not as cheap as I might like, but considerably less expensive than before. I’ve also scheduled it to run only once every other day, since Mike tells me he doesn’t need this data any more often than that.

[1] I understand that Jeff Klukas, Frank Bertsch, Daniel Thorn, Anthony Miyaguchi, and Wesley Dawson are the principals involved - apologies if I’m forgetting someone.


Mozilla VR Blog: A Year with Spoke: Announcing the Architecture Kit

Tue, 29/10/2019 - 17:52

Spoke, our 3D editor for creating environments for Hubs, is celebrating its first birthday with a major update. Last October, we released the first version of Spoke, a compositing tool for mixing 2D and 3D content to create immersive spaces. Over the past year, we’ve made a lot of improvements and added new features to make building scenes for VR easier than ever. Today, we’re excited to share the latest feature that adds to the power of Spoke: the Architecture Kit!

We first talked about the components of the Architecture Kit back in March. With the Architecture Kit, creators now have an additional way to build custom content for their 3D scenes without using an external tool. Specifically, we wanted to make it easy to take existing components that have already been optimized for VR and configure those pieces to create original models and scenes. The Architecture Kit contains over 400 different pieces designed to be used together to create buildings - the kit includes wall, floor, ceiling, and roof pieces, as well as windows, trim, stairs, and doors.

Because Hubs runs across mobile, desktop, and VR devices and is delivered through the browser, performance is a key consideration. The different components of the Architecture Kit were created in Blender so that each piece aligns with the others to create seamless connections. By avoiding mesh overlap, a common challenge when building with pieces that were made separately, z-fighting between faces becomes less of a problem. Many of the Architecture Kit pieces were made single-sided, which reduces the number of faces that need to be rendered. This is incredibly useful when creating interior or exterior pieces where one side of a wall or ceiling piece will never be visible to a user.

We wanted the Architecture Kit to be configurable beyond just the meshes. Buildings exist in a variety of contexts, so different pieces of the Kit can have one or more material slots with unique textures and materials that can be applied. This allows you to customize a wall that has a window trim to use, for example, brick for the wall and wood for the trim. You can choose from the built-in textures of the Architecture Kit pieces directly in Spoke, or download the entire kit from GitHub.


This release, which focuses on classic architectural styles and interior building tools, is just the beginning. As part of building the architecture kit, we’ve built a Blender add-on that will allow 3D artists to create their own kits. Creators will be able to specify individual collections of pieces and set an array of different materials that can be applied to the models to provide variation for different meshes. If you’re an artist interested in contributing to Spoke and Hubs by making a kit, drop us a line at


In addition to the Architecture Kit, Spoke has had some other major improvements over the course of its first year. Recent changes to object placement have made it easier when laying out scene objects, and an update last week introduced the ability to edit multiple objects at one time. We added particle systems over the summer, enabling more dynamic elements to be placed in your rooms. It’s also easier to visualize different components of your scene with a render mode toggle that allows you to swap between wireframe, normals, and shadows. The Asset Panel got a makeover, multi-select and edit was added, and we fixed a huge list of bugs when we made Spoke available as a fully hosted web app.

Looking ahead to next year, we’re planning on adding features that will give creators more control over interactivity within their scenes. While the complete design and feature set for this isn’t yet fully scoped, we’ve gotten great feedback from Spoke creators that they want to add custom behaviors, so we’re beginning to think about what it will look like to experiment with scripting or programmed behavior on elements in a scene. If you’re interested in providing feedback and following along with the planning, we encourage you to join the Hubs community Discord server, and keep an eye on the Spoke channel. You can also follow development of Spoke on GitHub.

There has never been a better time to start creating 3D content for the web. With new tools and features being added to Spoke all the time, we want your feedback, and we’d love to see what you’re building! Show off your creations and tag us on Twitter @ByHubs, or join us in the Hubs community Discord to meet other users, chat with the team, and stay up to date with the latest in Mozilla Social VR.


Mozilla Open Policy & Advocacy Blog: A Year in Review: Fighting Online Disinformation

Tue, 29/10/2019 - 16:45

A year ago, Mozilla signed the first ever Code of Practice on Disinformation, brokered in Europe as part of our commitment to an internet that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts. The Code set a wide range of commitments for all the signatories, from transparency in political advertising to the closure of fake accounts, to address the spread of disinformation online. And we were hopeful that the Code would help to drive change in the platform and advertising sectors.

Since then, we’ve taken proactive steps to help tackle this issue, and today our self-assessment of this work was published by the European Commission. Our assessment covers the work we’ve been doing at Mozilla to build tools within the Firefox browser to fight misinformation, empower users with educational resources, support research on disinformation, and lead advocacy efforts to push the ecosystem to live up to its commitments within the Code of Practice.

Most recently, we’ve rolled out enhanced security features in the default setting of Firefox that safeguard our users from the pervasive tracking and collection of personal data by ad networks and tech companies, by blocking all third-party cookies by default. As purveyors of disinformation feed off of information that can be revealed about an individual’s browsing behavior, we expect this protection to reduce users’ exposure to the risk of being targeted by disinformation campaigns.

We are proud of the steps we’ve taken during the last year, but it’s clear that the platform and online advertising sectors need to do more – e.g. protection against online tracking, meaningful transparency of political ads, and support for the research community – to fully tackle online disinformation and truly empower individuals. In fact, recent studies have highlighted that disinformation is not going away – rather, it is becoming a “pervasive and ubiquitous part of everyday life”.

The Code of Practice represents a positive shift in the fight to counter misinformation, and today’s self-assessments are proof of progress. However, this work has only just begun, and we must make sure that the objectives of the Code are fully realized. At Mozilla, we remain committed to furthering this agenda to counter disinformation online.

The post A Year in Review: Fighting Online Disinformation appeared first on Open Policy & Advocacy.
