Planet Mozilla - https://planet.mozilla.org/
Updated: 3 days 40 min ago

This Week In Rust: This Week in Rust 301

Tue, 27/08/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is include_flate, a variant of include_bytes!/include_str! with compile-time DEFLATE compression and runtime lazy decompression.

Thanks to Willi Kappler for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No issues were proposed for CfP.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

221 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Africa, Asia Pacific, Europe, North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Just as Bruce Lee practiced Jeet Kune Do, the style of all styles, Rust is not bound to any one paradigm. Instead of trying to put it into an existing box, it's best to just feel it out. Rust isn't Haskell and it's not C. It has aspects in common with each and it has traits unique to itself.

Alexander Nye on rust-users

Thanks to Louis Cloete for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.

IRL (podcast): Making Privacy Law

Mon, 26/08/2019 - 09:05

The word “regulation” gets tossed around a lot. And it’s often aimed at the internet’s Big Tech companies. Some worry that the size of these companies and the influence they wield is too much. On the other side, there’s the argument that any regulation is overreach — leave it to the market, and everything will sort itself out. But over the last year, in the midst of this regulation debate, a funny thing happened. Tech companies got regulated. And our right to privacy got a little easier to exercise.

Gabriela Zanfir-Fortuna gives us the highlights of Europe’s sweeping GDPR privacy law, and explains how the law netted a huge fine against Spain’s National Football League. Twitter’s Data Protection Officer, Damien Kieran explains how regulation has shaped his new job and is changing how Twitter works with our personal data. Julie Brill at Microsoft says the company wants legislators to go further, and bring a federal privacy law to the U.S. And Manoush chats with Alastair MacTaggart, the California resident whose work led to the passing of the California Consumer Privacy Act.

IRL is an original podcast from Firefox. For more on the series go to irlpodcast.org

Learn more about consumer rights under the GDPR, and for a top-level look at what the GDPR does for you, check out our GDPR summary.

Here’s more about the California Consumer Privacy Act and Alastair MacTaggart.

And, get commentary and analysis on data privacy from Julie Brill, Gabriela Zanfir-Fortuna, and Damien Kieran.

Firefox has a department dedicated to open policy and advocacy. We believe that privacy is a right, not a privilege. Follow our blog for more.

Cameron Kaiser: TenFourFox FPR16b1 available

Sun, 25/08/2019 - 04:54
TenFourFox Feature Parity Release 16 beta 1 is now available (downloads, hashes, release notes). In addition, the official FAQ has been updated, along with the tech notes.

FPR16 got delayed because I really tried very hard to make some progress on our two biggest JavaScript deficiencies, the infamous issues 521 (async and await) and 533 (this is undefined). Unfortunately, not only did I make little progress on either, but the speculative fix I tried for issue 533 turned out to be the patch that unsettled the optimized build and had to be backed out. There is some partial work on issue 521, though, including a fully working parser patch. The problem is plumbing this into the browser runtime which is ripe for all kinds of regressions and is not currently implemented (instead, for compatibility, async functions get turned into a bytecode of null throw null return, essentially making any call to an async function throw an exception because it wouldn't have worked in the first place).

This wouldn't seem very useful except that effectively what the whole shebang does is convert a compile-time error into a runtime warning, such that other functions that previously might not have been able to load because of the error can now be parsed and hopefully run. With luck this should improve the functionality of sites using these functions even if everything still doesn't fully work, as a down payment hopefully on a future implementation. It may not be technically possible but it's a start.

Which reminds me, and since this blog is syndicated on Planet Mozilla: hey, front end devs, if you don't have to minify your source, how about you don't? Issue 533, in fact, is entirely caused because uglify took some fast and loose shortcuts that barf on older parsers, and it is nearly impossible to unwind errors that occur in minified code (this is now changing as sites slowly update, so perhaps this will be self-limited in the end, but in the meantime it's as annoying as Andy Samberg on crack). This is particularly acute given that the only thing fixing it in the regression range is a 2.5 megabyte patch that I'm only a small amount of the way through reading. On the flip side, I was able to find and fix several parser edge cases because Bugzilla itself was triggering them and the source file that was involved was not minified. That means I could actually read it and get error reports that made sense! Help support us lonely independent browser makers by keeping our lives a little less complicated. Thank you for your consideration!

Meanwhile, I have the parser changes on by default to see if it induces any regressions. Sites may or may not work any differently, but they should not work worse. If you find a site that seems to be behaving adversely in the beta, please toggle javascript.options.asyncfuncs to false and restart the browser, which will turn the warning back into an error. If even that doesn't fix it, make sure nothing on the site changed (like, I dunno, checking it in FPR15) before reporting it in the comments.

This version also "repairs" Firefox Sync support by connecting the browser back up to the right endpoints. You are reminded, however, that like add-on support Firefox Sync is only supported at a "best effort" level because I have no control over the backend server. I'll make reasonable attempts to keep it working, but things can break at any time, and it is possible that it will stay broken for good (and be removed from the UI) if data structures or the protocol change in a way I can't control for. There's a new FAQ entry for this I suggest you read.

Finally, there are performance improvements for HTML5 and URL parsing from later versions of Firefox as well as a minor update to same-site cookie support, plus a fix for a stupid bug with SVG backgrounds that I caused and Olga found, updates to basic adblock with new bad hosts, updates to the font blacklist with new bad fonts, and the usual security and stability updates from the ESRs.

I realize the delay means there won't be a great deal of time to test this, so let me know deficiencies as quickly as possible so they can be addressed before it goes live on or about September 2 Pacific time.

Joel Maher: Digging into regressions

Fri, 23/08/2019 - 22:53

Whenever a patch is landed on autoland, it will run many builds and tests to make sure there are no regressions. Unfortunately, many times we find a regression, and 99% of the time the changes are backed out so they can be fixed. This work is done by the Sheriff team at Mozilla: they monitor the trees and, when something is wrong, they work to fix it (sometimes by a quick fix, usually by a backout). A quick fact: there were 1228 regressions in H1 (January-June) 2019.

My goal in writing is not to recommend change, but instead to start conversations and figure out what data we should be collecting in order to have data driven discussions.  Only then would I expect that recommendations for changes would come forth.

What got me started in looking at regressions was trying to answer a question: “How many regressions did X catch?” This alone is a tough question; instead, I think the question should be “If we were not running X, how many regressions would our end users see?” This is a much different question and has a few distinct parts:

  • Unique Regressions: Only look at regressions found that only X found, not found on both X and Y
  • Product Fixes: did the regression result in changing code that we ship to users? (i.e. not editing the test)
  • Final Fix: many times a patch [set] lands and is backed out multiple times; in this case, do we look at each time it was backed out, or only the change from initial landing to final landing?

These can be more difficult to answer. For example, Product Fixes: maybe by editing the test case we are preventing a regression in the future because the test is more accurate.

In addition, we need to understand how accurate the data we are using is. While the sheriffs do a great job, they are human, and humans make judgement calls. In this case, once a job is marked as “fixed_by_commit”, we cannot go back in and edit it, so a typo or bad data will result in incorrect data. To add to it, oftentimes multiple patches are backed out at the same time, so is it correct to say that changes from both bug A and bug B should be considered?

This year I have looked at this data many times to answer questions like these.

This data is important to harvest because if we were to turn off a set of jobs or run them as tier-2 we would end up missing regressions. But if all we miss is editing manifests to disable failing tests, then we are getting no value from the test jobs, so it is important to look at what the regression outcome was.

In fact every time I did this I would run an active-data-recipe (fbc recipe in my repo) and have a large pile of data I needed to sort through and manually check.  I spent some time every day for a few weeks looking at regressions and now I have looked at 700 (bugs/changesets).  I found that in manually checking regressions, the end results fell into buckets:

  • test: 196 (28.00%)
  • product: 272 (38.86%)
  • manifest: 134 (19.14%)
  • unknown: 48 (6.86%)
  • backout: 27 (3.86%)
  • infra: 23 (3.29%)

Keep in mind that many of the changes which end up in mozilla-central are not only product bugs, but infrastructure bugs, test editing, etc.

After looking at many of these bugs, I found that ~80% of the time things are straightforward (single patch [set] landed, backed out once, relanded with clear comments).  Data I would like to have easily available via a query:

  • Files that are changed between backout and relanding (even if it is a new patch).
  • A reason recorded in Phabricator when we reland, with a few required pre-canned fields

Ideally this set of data would exist for not only backouts, but for anything that is landed to fix a regression (linting, build, manifest, typo).

Mozilla Open Policy & Advocacy Blog: Mozilla Mornings on the future of EU content regulation

Thu, 22/08/2019 - 13:12

On 10 September, Mozilla will host the next installment of our EU Mozilla Mornings series – regular breakfast meetings where we bring together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

The next installment will focus on the future of EU content regulation. We’re bringing together a high-level panel to discuss how the European Commission should approach the mooted Digital Services Act, and to lay out a vision for a sustainable and rights-protective content regulation framework in Europe.

Featuring

Werner Stengg
Head of Unit, E-Commerce & Platforms, European Commission DG CNECT
Alan Davidson
Vice President of Global Policy, Trust & Security, Mozilla
Liz Carolan
Executive Director, Digital Action
Eliska Pirkova
Europe Policy Analyst, Access Now

Moderated by Brian Maguire, EURACTIV

 Logistical information

10 September 2019
08:30-10:30
L42 Business Centre, rue de la Loi 42, Brussels 1040

Register your attendance here

The post Mozilla Mornings on the future of EU content regulation appeared first on Open Policy & Advocacy.

Dustin J. Mitchell: Outreachy Round 20

Thu, 22/08/2019 - 02:00

Outreachy is a program that provides paid internships working on FOSS (Free and Open Source Software) to applicants from around the world. Internships are three months long and involve deep, technical work on a mentor-selected project, guided by mentors and other developers working on the FOSS application. At Mozilla, projects include work on Firefox itself, development of associated services and sites like Taskcluster and Treeherder, and analysis of Firefox telemetry data from a data-science perspective.

The program has an explicit focus on diversity: “Anyone who faces under-representation, systemic bias, or discrimination in the technology industry of their country is invited to apply.” It’s a small but very effective step in achieving better representation in this field. One of the interesting side-effects is that the program sees a number of career-changing participants. These people bring a wealth of interesting and valuable perspectives, but face challenges in a field where many have been programming since they were young.

Round 20 will involve full-time, remote work from mid-December 2019 to mid-March 2020. Initial applications for this round are now open.

New this year, applicants will need to make an “initial application” to determine eligibility before September 24. During this time, applicants can only see the titles of potential internship projects – not the details. On October 1, all applicants who have been deemed eligible will be able to see the full project descriptions. At that time, they’ll start communicating with project mentors and making small open-source contributions, and eventually prepare applications to one or more projects.

So, here’s the call to action:

  • If you are or know people who might benefit from this program, encourage them to apply or to talk to one of the Mozilla coordinators (Kelsey Witthauer and myself) at outreachy-coordinators@mozilla.com.
  • If you would like to mentor for the program, there’s still time! Get in touch with us and we’ll figure it out.
Armen Zambrano: Frontend security — thoughts on Snyk

Wed, 21/08/2019 - 21:29
Frontend security — thoughts on Snyk

I can’t remember why, but a few months ago I started looking into keeping my various React projects secure. Here’s some of what I discovered (more to come). I hope some of it will be valuable to you.

A while ago I discovered Snyk and hooked my various projects up with it. Snyk sends me a weekly security summary with the breakdown of various security issues across all of my projects.

This is part of Snyk’s weekly report I receive in my inbox

Snyk also gives me context about the particular security issues found:

This is extremely useful if you want to understand the security issue

It also analyzes my dependencies on a per-PR level:

Safety? Check! This is a GitHub PR check (like Travis)

Other features that I’ve tried from Snyk:

  1. It sends you an email when there’s a vulnerable package (no need to wait for the weekly report)
  2. It opens PRs upgrading vulnerable packages when possible
  3. It patches your code while there’s no published package with a fix
This is a summary for your project; it shows that a PR can be opened

The above features I have tried and I decided not to use them for the following reasons (listed in the same order as above):

  1. As a developer I already get enough interruptions in a week. I don’t need to be notified for every single security issue in my dependency tree. My projects don’t deal with anything sensitive, so I’m OK with waiting to deal with them at the beginning of the week
  2. The PR opened by Snyk does not work well with Yarn since it does not update the yarn.lock file, thus requiring me to fetch the PR, run yarn install and push it back (this wastes my time)
  3. The feature to patch your code (Runtime protection or snyk protect) adds a very high setup cost (1–2 minutes) every time you need to run yarn install. This is because it analyzes all your dependencies and patches your code in-situ. This gets in the way of my development workflow.

Overall I’m very satisfied with Snyk and I highly recommend using it.

In the following posts I’m thinking of writing about:

  • How Renovate can help reduce the burden of keeping your projects up-to-date (reducing security work later on)
  • Differences between GitHub’s security tab (DependaBot) and Snyk
  • npm audit, yarn audit & snyk test

NOTE: This post is not sponsored by Snyk. I love what they do, I root for them and I hope they soon fix the issues I mention above.

Support.Mozilla.Org: Introducing Bryce and Brady

Wed, 21/08/2019 - 18:18

Hello SUMO Community,

I’m thrilled to share this update with you today. Bryce and Brady joined us last week and will be able to help out on Support for some of the new efforts Mozilla is working on towards creating a connected and integrated Firefox experience.

They are going to be involved with new products, but they won’t forget to put extra effort into providing support on the forums, as well as serving as an escalation point for hard-to-solve issues.

Here is a short introduction to Brady and Bryce:

Hi! My name is Brady, and I am one of the new members of the SUMO team. I am originally from Boise, Idaho and am currently going to school for a Computer Science degree at Boise State. In my free time, I’m normally playing video games, writing, drawing, or enjoying the Sawtooths. I will be providing support for Mozilla products and for the SUMO team.

Hello! My name is Bryce, I was born and raised in San Diego and I reside in Boise, Idaho. Growing up I spent a good portion of my life trying to be the best sponger (boogie boarder) and longboarder in North County San Diego. While out in the ocean I had all sorts of run-ins with sea creatures, but nothing too scary. I am also an IN-N-Out fan, as you may find me sporting their merchandise with boardshorts and such. I am truly excited to be part of this amazing group of fun-loving folks and I am looking forward to getting to know everyone.

Please welcome them warmly!

Hacks.Mozilla.Org: WebAssembly Interface Types: Interoperate with All the Things!

Wed, 21/08/2019 - 18:02

People are excited about running WebAssembly outside the browser.

That excitement isn’t just about WebAssembly running in its own standalone runtime. People are also excited about running WebAssembly from languages like Python, Ruby, and Rust.

Why would you want to do that? A few reasons:

  • Make “native” modules less complicated
    Runtimes like Node or Python’s CPython often allow you to write modules in low-level languages like C++, too. That’s because these low-level languages are often much faster. So you can use native modules in Node, or extension modules in Python. But these modules are often hard to use because they need to be compiled on the user’s device. With a WebAssembly “native” module, you can get most of the speed without the complication.
  • Make it easier to sandbox native code
    On the other hand, low-level languages like Rust wouldn’t use WebAssembly for speed. But they could use it for security. As we talked about in the WASI announcement, WebAssembly gives you lightweight sandboxing by default. So a language like Rust could use WebAssembly to sandbox native code modules.
  • Share native code across platforms
    Developers can save time and reduce maintenance costs if they can share the same codebase across different platforms (e.g. between the web and a desktop app). This is true for both scripting and low-level languages. And WebAssembly gives you a way to do that without making things slower on these platforms.

Scripting languages like Python and Ruby saying 'We like WebAssembly's speed', low-level languages like Rust and C++ saying, 'and we like the security it could give us' and all of them saying 'and we all want to make developers more effective'

So WebAssembly could really help other languages with important problems.

But with today’s WebAssembly, you wouldn’t want to use it in this way. You can run WebAssembly in all of these places, but that’s not enough.

Right now, WebAssembly only talks in numbers. This means the two languages can call each other’s functions.
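
A minimal sketch (a hypothetical example, not from the article) of what a numbers-only interface looks like from the Rust side; because the signature only uses i32, JavaScript can call it directly with no glue:

    // Numbers-only signature: callable from JS with no conversion machinery.
    // Built for a wasm target, e.g. --target wasm32-unknown-unknown.
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }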

But if a function takes or returns anything besides numbers, things get complicated. You can either:

  • Ship one module that has a really hard-to-use API that only speaks in numbers… making life hard for the module’s user.
  • Add glue code for every single environment you want this module to run in… making life hard for the module’s developer.

But this doesn’t have to be the case.

It should be possible to ship a single WebAssembly module and have it run anywhere… without making life hard for either the module’s user or developer.

user saying 'what even is this API?' vs developer saying 'ugh, so much glue code to worry about' vs both saying 'wait, it just works?'

So the same WebAssembly module could use rich APIs, using complex types, to talk to:

  • Modules running in their own native runtime (e.g. Python modules running in a Python runtime)
  • Other WebAssembly modules written in different source languages (e.g. a Rust module and a Go module running together in the browser)
  • The host system itself (e.g. a WASI module providing the system interface to an operating system or the browser’s APIs)

 logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

And with a new, early-stage proposal, we’re seeing how we can make this Just Work™, as you can see in this demo.

So let’s take a look at how this will work. But first, let’s look at where we are today and the problems that we’re trying to solve.

WebAssembly talking to JS

WebAssembly isn’t limited to the web. But up to now, most of WebAssembly’s development has focused on the Web.

That’s because you can make better designs when you focus on solving concrete use cases. The language was definitely going to have to run on the Web, so that was a good use case to start with.

This gave the MVP a nicely contained scope. WebAssembly only needed to be able to talk to one language—JavaScript.

And this was relatively easy to do. In the browser, WebAssembly and JS both run in the same engine, so that engine can help them efficiently talk to each other.

A js file asking the engine to call a WebAssembly function

The engine asking the WebAssembly file to run the function

But there is one problem when JS and WebAssembly try to talk to each other… they use different types.

Currently, WebAssembly can only talk in numbers. JavaScript has numbers, but also quite a few more types.

And even the numbers aren’t the same. WebAssembly has 4 different kinds of numbers: int32, int64, float32, and float64. JavaScript currently only has Number (though it will soon have another number type, BigInt).

The difference isn’t just in the names for these types. The values are also stored differently in memory.

First off, in JavaScript any value, no matter the type, is put in something called a box (and I explained boxing more in another article).

WebAssembly, in contrast, has static types for its numbers. Because of this, it doesn’t need (or understand) JS boxes.

This difference makes it hard to communicate with each other.

JS asking wasm to add 5 and 7, and Wasm responding with 9.2368828e+18

But if you want to convert a value from one number type to the other, there are pretty straightforward rules.

Because it’s so simple, it’s easy to write down. And you can find this written down in WebAssembly’s JS API spec.

A large book that has mappings between the wasm number types and the JS number types

This mapping is hardcoded in the engines.

It’s kind of like the engine has a reference book. Whenever the engine has to pass parameters or return values between JS and WebAssembly, it pulls this reference book off the shelf to see how to convert these values.

JS asking the engine to call wasm's add function with 5 and 7, and the engine looking up how to do conversions in the book

Having such a limited set of types (just numbers) made this mapping pretty easy. That was great for an MVP. It limited how many tough design decisions needed to be made.

But it made things more complicated for the developers using WebAssembly. To pass strings between JS and WebAssembly, you had to find a way to turn the strings into an array of numbers, and then turn an array of numbers back into a string. I explained this in a previous post.

JS putting numbers into WebAssembly's memory
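
As a rough sketch of the receiving side of such a manual convention (hypothetical names and ABI, purely illustrative): the caller has already copied UTF-8 bytes into linear memory and passes a pointer and a length as plain integers.

    use std::slice;
    use std::str;

    // Hypothetical manual ABI: `ptr` and `len` describe UTF-8 bytes that the
    // caller already copied into this module's linear memory.
    #[no_mangle]
    pub unsafe extern "C" fn take_string(ptr: *const u8, len: usize) {
        let bytes = slice::from_raw_parts(ptr, len);
        if let Ok(s) = str::from_utf8(bytes) {
            // ... use `s` here ...
            let _ = s.len();
        }
    }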

This isn’t difficult, but it is tedious. So tools were built to abstract this away.

For example, tools like Rust’s wasm-bindgen and Emscripten’s Embind automatically wrap the WebAssembly module with JS glue code that does this translation from strings to numbers.

JS file complaining about having to pass a string to Wasm, and the JS glue code offering to do all the work

And these tools can do these kinds of transformations for other high-level types, too, such as complex objects with properties.
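
For instance, with wasm-bindgen an object with properties can be exposed along these lines (a hedged sketch; the tool generates the JavaScript glue that copies fields across the boundary):

    use wasm_bindgen::prelude::*;

    // wasm-bindgen generates glue so this struct can be constructed and read
    // from JavaScript; the numeric fields are copied across the boundary.
    #[wasm_bindgen]
    pub struct Point {
        pub x: f64,
        pub y: f64,
    }

    #[wasm_bindgen]
    impl Point {
        #[wasm_bindgen(constructor)]
        pub fn new(x: f64, y: f64) -> Point {
            Point { x, y }
        }
    }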

This works, but there are some pretty obvious use cases where it doesn’t work very well.

For example, sometimes you just want to pass a string through WebAssembly. You want a JavaScript function to pass a string to a WebAssembly function, and then have WebAssembly pass it to another JavaScript function.

Here’s what needs to happen for that to work:

  1. the first JavaScript function passes the string to the JS glue code

  2. the JS glue code turns that string object into numbers and then puts those numbers into linear memory

  3. then passes a number (a pointer to the start of the string) to WebAssembly

  4. the WebAssembly function passes that number over to the JS glue code on the other side

  5. the second JavaScript function pulls all of those numbers out of linear memory and then decodes them back into a string object

  6. which it gives to the second JS function

JS file passing string 'Hello' to JS glue code
JS glue code turning string into numbers and putting that in linear memory
JS glue code telling engine to pass 2 to wasm
Wasm telling engine to pass 2 to JS glue code
JS glue code taking bytes from linear memory and turning them back into a string
JS glue code passing string to JS file

So the JS glue code on one side is just reversing the work it did on the other side. That’s a lot of work to recreate what’s basically the same object.

If the string could just pass straight through WebAssembly without any transformations, that would be way easier.

WebAssembly wouldn’t be able to do anything with this string—it doesn’t understand that type. We wouldn’t be solving that problem.

But it could just pass the string object back and forth between the two JS functions, since they do understand the type.

So this is one of the reasons for the WebAssembly reference types proposal. That proposal adds a new basic WebAssembly type called anyref.

With an anyref, JavaScript just gives WebAssembly a reference object (basically a pointer that doesn’t disclose the memory address). This reference points to the object on the JS heap. Then WebAssembly can pass it to other JS functions, which know exactly how to use it.

JS passing a string to Wasm and the engine turning it into a pointer
Wasm passing the string to a different JS file, and the engine just passes the pointer on
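
In toolchain terms, this kind of pass-through can be written today with wasm-bindgen by treating the value as an opaque JsValue (a hedged sketch; whether a native reference type is used under the hood depends on the engine and toolchain settings):

    use wasm_bindgen::prelude::*;

    #[wasm_bindgen]
    extern "C" {
        // Hypothetical imported JS function that receives the object back.
        fn consume(value: &JsValue);
    }

    // The module never looks inside `value`; it only forwards the reference.
    #[wasm_bindgen]
    pub fn pass_through(value: JsValue) {
        consume(&value);
    }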

So that solves one of the most annoying interoperability problems with JavaScript. But that’s not the only interoperability problem to solve in the browser.

There’s another, much larger, set of types in the browser. WebAssembly needs to be able to interoperate with these types if we’re going to have good performance.

WebAssembly talking directly to the browser

JS is only one part of the browser. The browser also has a lot of other functions, called Web APIs, that you can use.

Behind the scenes, these Web API functions are usually written in C++ or Rust. And they have their own way of storing objects in memory.

Web APIs’ parameters and return values can be lots of different types. It would be hard to manually create mappings for each of these types. So to simplify things, there’s a standard way to talk about the structure of these types—Web IDL.

When you’re using these functions, you’re usually using them from JavaScript. This means you are passing in values that use JS types. How does a JS type get converted to a Web IDL type?

Just as there is a mapping from WebAssembly types to JavaScript types, there is a mapping from JavaScript types to Web IDL types.

So it’s like the engine has another reference book, showing how to get from JS to Web IDL. And this mapping is also hardcoded in the engine.

A book that has mappings between the JS types and Web IDL types

For many types, this mapping between JavaScript and Web IDL is pretty straightforward. For example, types like DOMString and JS’s String are compatible and can be mapped directly to each other.

Now, what happens when you’re trying to call a Web API from WebAssembly? Here’s where we get to the problem.

Currently, there is no mapping between WebAssembly types and Web IDL types. This means that, even for simple types like numbers, your call has to go through JavaScript.

Here’s how this works:

  1. WebAssembly passes the value to JS.
  2. In the process, the engine converts this value into a JavaScript type, and puts it in the JS heap in memory
  3. Then, that JS value is passed to the Web API function. In the process, the engine converts the JS value into a Web IDL type, and puts it in a different part of memory, the renderer’s heap.

Wasm passing number to JS
Engine converting the int32 to a Number and putting it in the JS heap
Engine converting the Number to a double, and putting that in the renderer heap

This takes more work than it needs to, and also uses up more memory.

There’s an obvious solution to this—create a mapping from WebAssembly directly to Web IDL. But that’s not as straightforward as it might seem.

For simple Web IDL types like boolean and unsigned long (which is a number), there are clear mappings from WebAssembly to Web IDL.

But for the most part, Web API parameters are more complex types. For example, an API might take a dictionary, which is basically an object with properties, or a sequence, which is like an array.

To have a straightforward mapping between WebAssembly types and Web IDL types, we’d need to add some higher-level types. And we are doing that—with the GC proposal. With that, WebAssembly modules will be able to create GC objects—things like structs and arrays—that could be mapped to complicated Web IDL types.

But if the only way to interoperate with Web APIs is through GC objects, that makes life harder for languages like C++ and Rust that wouldn’t use GC objects otherwise. Whenever the code interacts with a Web API, it would have to create a new GC object and copy values from its linear memory into that object.

That’s only slightly better than what we have today with JS glue code.

We don’t want JS glue code to have to build up GC objects—that’s a waste of time and space. And we don’t want the WebAssembly module to do that either, for the same reasons.

We want it to be just as easy for languages that use linear memory (like Rust and C++) to call Web APIs as it is for languages that use the engine’s built-in GC. So we need a way to create a mapping between objects in linear memory and Web IDL types, too.

There’s a problem here, though. Each of these languages represents things in linear memory in different ways. And we can’t just pick one language’s representation. That would make all the other languages less efficient.

someone standing between the names of linear memory languages like C, C++, and Rust, pointing to Rust and saying 'I pick... that one!'. A red arrow points to the person saying 'bad idea'

But even though the exact layout in memory for these things is often different, there are some abstract concepts that they usually share in common.

For example, for strings the language often has a pointer to the start of the string in memory, and the length of the string. And even if the string has a more complicated internal representation, it usually needs to convert strings into this format when calling external APIs anyways.

This means we can reduce this string down to a type that WebAssembly understands… two i32s.

The string Hello in linear memory, with an offset of 2 and length of 5. Red arrows point to offset and length and say 'types that WebAssembly understands!'

We could hardcode a mapping like this in the engine. So the engine would have yet another reference book, this time for WebAssembly to Web IDL mappings.

But there’s a problem here. WebAssembly is a type-checked language. To keep things secure, the engine has to check that the calling code passes in types that match what the callee asks for.

This is because there are ways for attackers to exploit type mismatches and make the engine do things it’s not supposed to do.

If you’re calling something that takes a string, but you try to pass the function an integer, the engine will yell at you. And it should yell at you.

So we need a way for the module to explicitly tell the engine something like: “I know Document.createElement() takes a string. But when I call it, I’m going to pass you two integers. Use these to create a DOMString from data in my linear memory. Use the first integer as the starting address of the string and the second as the length.”

This is what the Web IDL proposal does. It gives a WebAssembly module a way to map between the types that it uses and Web IDL’s types.

These mappings aren’t hardcoded in the engine. Instead, a module comes with its own little booklet of mappings.

Wasm file handing a booklet to the engine and saying `Here's a little guidebook. It will tell you how to translate my types to interface types`

So this gives the engine a way to say “For this function, do the type checking as if these two integers are a string.”

The fact that this booklet comes with the module is useful for another reason, though.

Sometimes a module that would usually store its strings in linear memory will want to use an anyref or a GC type in a particular case… for example, if the module is just passing an object that it got from a JS function, like a DOM node, to a Web API.

So modules need to be able to choose on a function-by-function (or even argument-by-argument) basis how different types should be handled. And since the mapping is provided by the module, it can be custom-tailored for that module.

Wasm telling engine 'Read carefully... For some function that take DOMStrings, I'll give you two numbers. For others, I'll just give you the DOMString that JS gave to me.'

How do you generate this booklet?

The compiler takes care of this information for you. It adds a custom section to the WebAssembly module. So for many language toolchains, the programmer doesn’t have to do much work.

For example, let’s look at how the Rust toolchain handles this for one of the simplest cases: passing a string into the alert function.

#[wasm_bindgen]
extern "C" {
    fn alert(s: &str);
}

The programmer just has to tell the compiler to include this function in the booklet using the annotation #[wasm_bindgen]. By default, the compiler will treat this as a linear memory string and add the right mapping for us. If we needed it to be handled differently (for example, as an anyref) we’d have to tell the compiler using a second annotation.
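
For context, a typical caller of that imported alert on the Rust side looks something like this (the standard wasm-bindgen pattern, building on the extern block above; the greet function itself is a hypothetical example):

    #[wasm_bindgen]
    pub fn greet(name: &str) {
        alert(&format!("Hello, {}!", name));
    }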

So with that, we can cut out the JS in the middle. That makes passing values between WebAssembly and Web APIs faster. Plus, it means we don’t need to ship down as much JS.

And we didn’t have to make any compromises on what kinds of languages we support. It’s possible to have all different kinds of languages that compile to WebAssembly. And these languages can all map their types to Web IDL types—whether the language uses linear memory, or GC objects, or both.

Once we stepped back and looked at this solution, we realized it solved a much bigger problem.

WebAssembly talking to All The Things

Here’s where we get back to the promise in the intro.

Is there a feasible way for WebAssembly to talk to all of these different things, using all these different type systems?

 logos for different runtimes (Ruby, php, and Python), other wasm files compiled from Rust and Go, and host systems like the OS or browser

Let’s look at the options.

You could try to create mappings that are hardcoded in the engine, like WebAssembly to JS and JS to Web IDL are.

But to do that, for each language you’d have to create a specific mapping. And the engine would have to explicitly support each of these mappings, and update them as the language on either side changes. This creates a real mess.

This is kind of how early compilers were designed. There was a pipeline for each source language to each machine code language. I talked about this more in my first posts on WebAssembly.

We don’t want something this complicated. We want it to be possible for all these different languages and platforms to talk to each other. But we need it to be scalable, too.

So we need a different way to do this… more like modern day compiler architectures. These have a split between front-end and back-end. The front-end goes from the source language to an abstract intermediate representation (IR). The back-end goes from that IR to the target machine code.

This is where the insight from Web IDL comes in. When you squint at it, Web IDL kind of looks like an IR.

Now, Web IDL is pretty specific to the Web. And there are lots of use cases for WebAssembly outside the web. So Web IDL itself isn’t a great IR to use.

But what if you just use Web IDL as inspiration and create a new set of abstract types?

This is how we got to the WebAssembly interface types proposal.

Diagram showing WebAssembly interface types in the middle. On the left is a wasm module, which could be compiled from Rust, Go, C, etc. Arrows point from these options to the types in the middle. On the right are host languages like JS, Python, and Ruby; host platforms like .NET, Node, and operating systems, and more wasm modules. Arrows point from these options to the types in the middle.

These types aren’t concrete types. They aren’t like the int32 or float64 types in WebAssembly today. There are no operations on them in WebAssembly.

For example, there won’t be any string concatenation operations added to WebAssembly. Instead, all operations are performed on the concrete types on either end.

There’s one key point that makes this possible: with interface types, the two sides aren’t trying to share a representation. Instead, the default is to copy values between one side and the other.

saying 'since this is a string in linear memory, I know how to manipulate it' and browser saying 'since this is a DOMString, I know how to manipulate it'

There is one case that would seem like an exception to this rule: the new reference values (like anyref) that I mentioned before. In this case, what is copied between the two sides is the pointer to the object. So both pointers point to the same thing. In theory, this could mean they need to share a representation.

In cases where the reference is just passing through the WebAssembly module (like the anyref example I gave above), the two sides still don’t need to share a representation. The module isn’t expected to understand that type anyway… just pass it along to other functions.

But there are times where the two sides will want to share a representation. For example, the GC proposal adds a way to create type definitions so that the two sides can share representations. In these cases, the choice of how much of the representation to share is up to the developers designing the APIs.

This makes it a lot easier for a single module to talk to many different languages.

In some cases, like the browser, the mapping from the interface types to the host’s concrete types will be baked into the engine.

So one set of mappings is baked in at compile time and the other is handed to the engine at load time.

Engine holding Wasm's mapping booklet and its own mapping reference book for Wasm Interface Types to Web IDL, saying 'So this maps to a string? Ok, I can take it from here to the DOMString that the function is asking for using my hardcoded bindings'

But in other cases, like when two WebAssembly modules are talking to each other, they both send down their own little booklet. They each map their functions’ types to the abstract types.

Engine reaching for mapping booklets from two wasm files, saying 'Ok, let's see how these map to each other'

This isn’t the only thing you need to enable modules written in different source languages to talk to each other (and we’ll write more about this in the future) but it is a big step in that direction.

So now that you understand why, let’s look at how.

What do these interface types actually look like?

Before we look at the details, I should say again: this proposal is still under development. So the final proposal may look very different.

Two construction workers with a sign that says 'Use caution'

Also, this is all handled by the compiler. So even when the proposal is finalized, you’ll only need to know what annotations your toolchain expects you to put in your code (like in the wasm-bindgen example above). You won’t really need to know how this all works under the covers.

But the details of the proposal are pretty neat, so let’s dig into the current thinking.

The problem to solve

The problem we need to solve is translating values between different types when a module is talking to another module (or directly to a host, like the browser).

There are four places where we may need to do a translation:

For exported functions

  • accepting parameters from the caller
  • returning values to the caller

For imported functions

  • passing parameters to the function
  • accepting return values from the function

And you can think about each of these as going in one of two directions:

  • Lifting, for values leaving the module. These go from a concrete type to an interface type.
  • Lowering, for values coming into the module. These go from an interface type to a concrete type.

Telling the engine how to transform between concrete types and interface types

So we need a way to tell the engine which transformations to apply to a function’s parameters and return values. How do we do this?

By defining an interface adapter.

For example, let’s say we have a Rust module compiled to WebAssembly. It exports a greeting_ function that can be called without any parameters and returns a greeting.

Here’s what it would look like (in WebAssembly text format) today.

a Wasm module that exports a function that returns two numbers. See proposal linked above for details.

So right now, this function returns two integers.

But we want it to return the string interface type. So we add something called an interface adapter.

If an engine understands interface types, then when it sees this interface adapter, it will wrap the original module with this interface.

an interface adapter that returns a string. See proposal linked above for details.

It won’t export the greeting_ function anymore… just the greeting function that wraps the original. This new greeting function returns a string, not two numbers.

This provides backwards compatibility because engines that don’t understand interface types will just export the original greeting_ function (the one that returns two integers).

How does the interface adapter tell the engine to turn the two integers into a string?

It uses a sequence of adapter instructions.

Two adapter instructions inside of the adapter function. See proposal linked above for details.

The adapter instructions above are two from a small set of new instructions that the proposal specifies.

Here’s what the instructions above do:

  1. Use the call-export adapter instruction to call the original greeting_ function. This is the one that the original module exported, which returned two numbers. These numbers get put on the stack.
  2. Use the memory-to-string adapter instruction to convert the numbers into the sequence of bytes that make up the string. We have to specify “mem” here because a WebAssembly module could one day have multiple memories. This tells the engine which memory to look in. Then the engine takes the two integers from the top of the stack (which are the pointer and the length) and uses those to figure out which bytes to use.

This might look like a full-fledged programming language. But there is no control flow here—you don’t have loops or branches. So it’s still declarative even though we’re giving the engine instructions.

What would it look like if our function also took a string as a parameter (for example, the name of the person to greet)?

Very similar. We just change the interface of the adapter function to add the parameter. Then we add two new adapter instructions.

Here’s what these new instructions do:

  1. Use the arg.get instruction to take a reference to the string object and put it on the stack.
  2. Use the string-to-memory instruction to take the bytes from that object and put them in linear memory. Once again, we have to tell it which memory to put the bytes into. We also have to tell it how to allocate the bytes. We do this by giving it an allocator function (which would be an export provided by the original module).
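
As a rough idea of what that allocator export could look like on the Rust side (a hedged sketch; the actual name and signature are whatever the module and its mapping declare, and a matching deallocation function would be needed in practice):

    use std::alloc::{alloc, Layout};

    // Hypothetical exported allocator the engine can call to reserve space in
    // linear memory before copying the string bytes in.
    #[no_mangle]
    pub extern "C" fn allocate(size: usize) -> *mut u8 {
        if size == 0 {
            return std::ptr::null_mut();
        }
        let layout = Layout::from_size_align(size, 1).expect("valid layout");
        unsafe { alloc(layout) }
    }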

One nice thing about using instructions like this: we can extend them in the future… just as we can extend the instructions in WebAssembly core. We think the instructions we’re defining are a good set, but we aren’t committing to these being the only instructions for all time.

If you’re interested in understanding more about how this all works, the explainer goes into much more detail.

Sending these instructions to the engine

Now how do we send this to the engine?

These annotations get added to the binary file in a custom section.

A file split in two. The top part is labeled 'known sections, e.g. code, data'. The bottom part is labeled 'custom sections, e.g. interface adapter'

If an engine knows about interface types, it can use the custom section. If not, the engine can just ignore it, and you can use a polyfill which will read the custom section and create glue code.

How is this different than CORBA, Protocol Buffers, etc?

There are other standards that seem like they solve the same problem—for example CORBA, Protocol Buffers, and Cap’n Proto.

How are those different? They are solving a much harder problem.

They are all designed so that you can interact with a system that you don’t share memory with—either because it’s running in a different process or because it’s on a totally different machine across the network.

This means that you have to be able to send the thing in the middle—the “intermediate representation” of the objects—across that boundary.

So these standards need to define a serialization format that can efficiently go across the boundary. That’s a big part of what they are standardizing.

Two computers with wasm files on them and multiple lines flowing into a single line connecting them. The single line represents serialization and is labelled 'IR'

Even though this looks like a similar problem, it’s actually almost the exact inverse.

With interface types, this “IR” never needs to leave the engine. It’s not even visible to the modules themselves.

The modules only see what the engine spits out for them at the end of the process—what’s been copied to their linear memory or given to them as a reference. So we don’t have to tell the engine what layout to give these types—that doesn’t need to be specified.

What is specified is the way that you talk to the engine. It’s the declarative language for this booklet that you’re sending to the engine.

Two wasm files with arrows pointing to the word 'IR' with no line between, because there is no serialization happening.

This has a nice side effect: because this is all declarative, the engine can see when a translation is unnecessary—like when the two modules on either side are using the same type—and skip the translation work altogether.

The engine looking at the booklets for a Rust module and a Go module and saying 'Ooh, you’re both using linear memory for this string... I’ll just do a quick copy between your memories, then'

How can you play with this today?

As I mentioned above, this is an early stage proposal. That means things will be changing rapidly, and you don’t want to depend on this in production.

But if you want to start playing with it, we’ve implemented this across the toolchain, from production to consumption:

  • the Rust toolchain
  • wasm-bindgen
  • the Wasmtime WebAssembly runtime

And since we maintain all these tools, and since we’re working on the standard itself, we can keep up with the standard as it develops.

Even though all these parts will continue changing, we’re making sure to synchronize our changes to them. So as long as you use up-to-date versions of all of these, things shouldn’t break too much.

Construction worker saying 'Just be careful and stay on the path'

So here are the many ways you can play with this today. For the most up-to-date version, check out this repo of demos.

Thank you
  • Thank you to the team who brought all of the pieces together across all of these languages and runtimes: Alex Crichton, Yury Delendik, Nick Fitzgerald, Dan Gohman, and Till Schneidereit
  • Thank you to the proposal co-champions and their colleagues for their work on the proposal: Luke Wagner, Francis McCabe, Jacob Gravelle, Alex Crichton, and Nick Fitzgerald
  • Thank you to my fantastic collaborators, Luke Wagner and Till Schneidereit, for their invaluable input and feedback on this article

The post WebAssembly Interface Types: Interoperate with All the Things! appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla Blog: Mozilla takes action to protect users in Kazakhstan

Wed, 21/08/2019 - 12:01

Today, Mozilla and Google took action to protect the online security and privacy of individuals in Kazakhstan. Together the companies deployed technical solutions within Firefox and Chrome to block the Kazakhstan government’s ability to intercept internet traffic within the country.

The response comes after credible reports that internet service providers in Kazakhstan have required people in the country to download and install a government-issued certificate on all devices and in every browser in order to access the internet. This certificate is not trusted by either of the companies, and once installed, it allows the government to decrypt and read anything a user types or posts, including intercepting their account information and passwords. This targeted people visiting popular sites such as Facebook, Twitter and Google, among others.

“People around the world trust Firefox to protect them as they navigate the internet, especially when it comes to keeping them safe from attacks like this that undermine their security. We don’t take actions like this lightly, but protecting our users and the integrity of the web is the reason Firefox exists.” — Marshall Erwin, Senior Director of Trust and Security, Mozilla

“We will never tolerate any attempt, by any organization—government or otherwise—to compromise Chrome users’ data. We have implemented protections from this specific issue, and will always take action to secure our users around the world.” — Parisa Tabriz, Senior Engineering Director, Chrome

This is not the first attempt by the Kazakhstan government to intercept the internet traffic of everyone in the country. In 2015, the Kazakhstan government attempted to have a root certificate included in Mozilla’s trusted root store program. After it was discovered that they were intending to use the certificate to intercept user data, Mozilla denied the request. Shortly after, the government forced citizens to manually install its certificate but that attempt failed after organizations took legal action.

Each company will deploy a technical solution unique to its browser. For additional information on those solutions please see the below links.

Mozilla
Google

Russian: If you would like to read this text in Russian, click here.

Kazakh: Read this post in Kazakh here.

The post Mozilla takes action to protect users in Kazakhstan appeared first on The Mozilla Blog.

Mozilla Security Blog: Protecting our Users in Kazakhstan

Wed, 21/08/2019 - 12:00

Russian translation: If you would like to read this text in Russian, click here.

Kazakh translation: Read this post in Kazakh here.

In July, a Firefox user informed Mozilla of a security issue impacting Firefox users in Kazakhstan: They stated that Internet Service Providers (ISPs) in Kazakhstan had begun telling their customers that they must install a government-issued root certificate on their devices. What the ISPs didn’t tell their customers was that the certificate was being used to intercept network communications. Other users and researchers confirmed these claims, and listed 3 dozen popular social media and communications sites that were affected.

The security and privacy of HTTPS encrypted communications in Firefox and other browsers relies on trusted Certificate Authorities (CAs) to issue website certificates only to someone that controls the domain name or website. For example, you and I can’t obtain a trusted certificate for www.facebook.com because Mozilla has strict policies for all CAs trusted by Firefox which only allow an authorized person to get a certificate for that domain. However, when a user in Kazakhstan installs the root certificate provided by their ISP, they are choosing to trust a CA that doesn’t have to follow any rules and can issue a certificate for any website to anyone. This enables the interception and decryption of network communications between Firefox and the website, sometimes referred to as a Monster-in-the-Middle (MITM) attack.

We believe this act undermines the security of our users and the web, and it directly contradicts Principle 4 of the Mozilla Manifesto that states, “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

To protect our users, Firefox, together with Chrome, will block the use of the Kazakhstan root CA certificate. This means that it will not be trusted by Firefox even if the user has installed it. We believe this is the appropriate response because users in Kazakhstan are not being given a meaningful choice over whether to install the certificate and because this attack undermines the integrity of a critical network security mechanism.  When attempting to access a website that responds with this certificate, Firefox users will see an error message stating that the certificate should not be trusted.

We encourage users in Kazakhstan affected by this change to research the use of virtual private network (VPN) software, or the Tor Browser, to access the Web. We also strongly encourage anyone who followed the steps to install the Kazakhstan government root certificate to remove it from your devices and to immediately change your passwords, using a strong, unique password for each of your online accounts.

The post Protecting our Users in Kazakhstan appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Cameron Kaiser: FPR16 delays

wo, 21/08/2019 - 05:46
FPR16 was supposed to reach you in beta sometime tomorrow but I found a reproducible crash in the optimized build, probably due to one of my vain attempts to fix JavaScript bugs. I'm still investigating exactly which change(s) were responsible. We should still make the deadline (September 3) to be concurrent with the 60.9/68.1 ESRs, but there will not be much of a beta testing period and I don't anticipate it being available until probably at least Friday or Saturday. More later.

While you're waiting, read about today's big OpenPOWER announcement. Isn't it about time for a modern PowerPC under your desk?

Categorieën: Mozilla-nl planet

Mozilla GFX: moz://gfx newsletter #47

mo, 19/08/2019 - 17:33

Hi there! Time for another Mozilla graphics newsletter. In the comments section of the previous newsletter, Michael asked about the relation between WebRender and WebGL; I’ll try to give a short answer here.

Both WebRender and WebGL need access to the GPU to do their work. At the moment both of them use the OpenGL API, either directly or through ANGLE, which emulates OpenGL on top of D3D11. They, however, each work with their own OpenGL context. Frames produced with WebGL are sent to WebRender as texture handles. WebRender, at the API level, has a single entry point for images, video frames, canvases; in short, for every grid of pixels in some flavor of RGB format, be they CPU-side buffers or already in GPU memory, as is normally the case for WebGL. In order to share textures between separate OpenGL contexts we rely on platform-specific APIs such as EGLImage and DXGI.

Beyond that there isn’t any fancy interaction between WebGL and WebRender. The latter sees the former as an image producer, just like 2D canvases, video decoders and plain static images.

What’s new in gfx

Wayland and hidpi improvements on Linux
  • Martin Stransky made a proof of concept implementation of DMABuf textures in Gecko’s IPC mechanism. This dmabuf EGL texture backend on Wayland is similar to what we have on Android/Mac. Dmabuf buffers can be shared with the main/compositor process, can be bound as a render target or texture, and can be located in GPU memory. The same dmabuf buffer can also be used as a hardware overlay when it’s attached to a wl_surface/wl_subsurface as a wl_buffer.
  • Jan Horak fixed a bug that prevented tabs from rendering after restoring a minimized window.
  • Jan Horak fixed the window parenting hierarchy with Wayland.
  • Jan Horak fixed a bug with hidpi that was causing select popups to render incorrectly after scrolling.
WebGL multiview rendering

WebGL’s multiview rendering extension has been approved by the working group and its implementation by Jeff Gilbert will be shipping in Firefox 70.
This extension allows more efficient rendering into multiple viewports, which is most commonly used by VR/AR for rendering both eyes at the same time.
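
For readers wondering what using it looks like from a page, here is a rough sketch, assuming the extension ships under the OVR_multiview2 name (error handling and the shader side, which needs GL_OVR_multiview2 and gl_ViewID_OVR, are omitted):

var canvas = document.createElement("canvas");
var gl = canvas.getContext("webgl2");
var ext = gl.getExtension("OVR_multiview2");
if (ext) {
  var fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);
  var colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 1024, 1024, 2); // one layer per eye
  // Attach both layers at once; the vertex shader then uses gl_ViewID_OVR to pick
  // the per-eye view matrix, so both eyes render in a single draw call.
  ext.framebufferTextureMultiviewOVR(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorTex, 0, 0, 2);
}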

Better high dynamic range support

Jean Yves landed the first part of his HDR work (a set of 14 patches). While we can’t yet output HDR content to an HDR screen, this work greatly improved the correctness of the conversion from various HDR formats to low dynamic range sRGB.

You can follow progress on the color space meta bug.

What’s new in WebRender

WebRender is a GPU-based 2D rendering engine for the web written in Rust, currently powering Firefox’s rendering engine as well as the research web browser Servo.

If you are curious about the state of WebRender on a particular platform, up to date information is available at http://arewewebrenderyet.com

Speaking of which, darkspirit enabled WebRender for Linux users on Nvidia hardware with Nouveau drivers in Firefox Nightly.

More filters in WebRender

When we run into a primitive that isn’t supported by WebRender, we make it go through a software fallback implementation, which can be slow for some things. SVG filters are a good example of primitives that perform much better if implemented on the GPU in WebRender.
Connor Brewster has been working on implementing a number of SVG filters in WebRender:

See the SVG filters in WebRender meta bug.

Texture swizzling and allocation

WebRender previously only worked with BGRA for color textures. Unfortunately this format is optimal on some platforms but sub-optimal (or even unsupported) on others, so a conversion sometimes has to happen, and this conversion, if done by the driver, can be very costly.

Kvark reworked the texture caching logic to support using and swizzling between different formats (for example RGBA and BGRA).
A document that landed with the implementation provides more details about the approach and context.
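
If you are not familiar with the term, a swizzle is simply a reordering of the color channels. Here is a CPU-side illustration of what a BGRA to RGBA swizzle does (WebRender expresses this as GPU texture swizzle state rather than a loop like this):

function swizzleBgraToRgba(pixels) {
  // pixels is a Uint8Array of 4-byte BGRA pixels; swap the B and R channels in place.
  for (var i = 0; i < pixels.length; i += 4) {
    var b = pixels[i];
    pixels[i] = pixels[i + 2];
    pixels[i + 2] = b;
  }
  return pixels;
}

swizzleBgraToRgba(new Uint8Array([255, 0, 0, 255])); // a blue BGRA pixel becomes blue RGBA [0, 0, 255, 255]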

Kvark also improved the texture cache allocation behavior.

Kvark also landed various refactorings (1), (2), (3), (4).

Android improvements

Jamie fixed emoji rendering with WebRender on Android and continues investigating driver issues on Adreno 3xx devices.

Displaylist serialization

Dan replaced bincode in our DL IPC code with a new bespoke and private serialization library (peek-poke), ending the terrible reign of the secret serde deserialize_in_place hacks and our fork of serde_derive.

Picture caching improvements

Glenn landed several improvements and fixes to the picture caching infrastructure:

  • Bug 1566712 – Fix quality issues with picture caching when transform has fractional offsets.
  • Bug 1572197 – Fix world clip region for preserve-3d items with picture caching.
  • Bug 1566901 – Make picture caching more robust to float issues.
  • Bug 1567472 – Fix bug in preserve-3d batching code in WebRender.
Font rendering improvements

Lee landed quite a few font related patches:

  • Bug 1569950 – Only partially clear WR glyph caches if it is not necessary to fully clear.
  • Bug 1569174 – Disable embedded bitmaps if ClearType rendering mode is forced.
  • Bug 1568841 – Force GDI parameters for GDI render mode.
  • Bug 1568858 – Always stretch box shadows except for Cairo.
  • Bug 1568841 – Don’t use enhanced contrast on GDI fonts.
  • Bug 1553818 – Use GDI ClearType contrast for GDI font gamma.
  • Bug 1565158 – Allow forcing DWrite symmetric rendering mode.
  • Bug 1563133 – Limit the GlyphBuffer capacity.
  • Bug 1560520 – Limit the size of WebRender’s glyph cache.
  • Bug 1566449 – Don’t reverse glyphs in GlyphBuffer for RTL.
  • Bug 1566528 – Clamp the amount of synthetic bold extra strikes.
  • Bug 1553228 – Don’t free the result of FcPatternGetString.
Various fixes and improvements
  • Gankra fixed an issue with addon popups and document splitting.
  • Sotaro prevented some unnecessary composites on out-of-viewport external image updates.
  • Nical fixed an integer overflow causing the browser to freeze.
  • Nical improved the overlay profiler by showing more relevant numbers when the timings are noisy.
  • Nical fixed corrupted rendering of progressively loaded images.
  • Nical added a fast path when none of the primitives of an image batch need anti-aliasing or repetition.
Categorieën: Mozilla-nl planet

Wladimir Palant: Kaspersky in the Middle - what could possibly go wrong?

mo, 19/08/2019 - 09:40

Roughly a decade ago I read an article that asked antivirus vendors to stop intercepting encrypted HTTPS connections, this practice actively hurting security and privacy. As you can certainly imagine, antivirus vendors agreed with the sensible argument and today no reasonable antivirus product would even consider intercepting HTTPS traffic. Just kidding… Of course they kept going, and so two years ago a study was published detailing the security issues introduced by interception of HTTPS connections. Google and Mozilla once again urged antivirus vendors to stop. Surely this time it worked?

Of course not. So when I decided to look into Kaspersky Internet Security in December last year, I found it breaking up HTTPS connections so that it would get between the server and your browser in order to “protect” you. Expecting some deeply technical details about HTTPS protocol misimplementations now? Don’t worry, I don’t know enough myself to inspect Kaspersky software on this level. The vulnerabilities I found were far more mundane.

Kaspersky Internet Security getting between browser and server

I reported eight vulnerabilities to Kaspersky Lab between 2018-12-13 and 2018-12-21. This article will only describe three vulnerabilities, which were fixed in April this year. This includes two vulnerabilities that weren’t deemed a security risk by Kaspersky; it’s up to you to decide whether you agree with this assessment. The remaining five vulnerabilities have only been fixed in July, and I agreed to wait until November with the disclosure to give users enough time to upgrade.

Edit (2019-08-22): In order to disable this functionality you have to go into Settings, select “Additional” on the left side, then click “Network.” There you will see a section called “Encryption connection scanning” where you need to choose “Do not scan encrypted connections.”

The underappreciated certificate warning pages

There is an important edge case with HTTPS connections: what if a connection is established but the other side uses an invalid certificate? Current browsers will generally show you a certificate warning page in this scenario. In Firefox it looks like this:

Certificate warning page in Firefox

This page has seen a surprising amount of changes over the years. The browser vendors recognized that asking users to make a decision isn’t a good idea here. Most of the time, getting out is the best course of action, and ignoring the warning is only a viable option for very technical users. So the text here is very clear, low on technical details, and the recommended solution is highlighted. The option to ignore the warning is well-hidden on the other hand, to prevent people from using it without understanding the implications. While the page looks different in other browsers, the main design considerations are the same.

But with Kaspersky Internet Security in the middle, the browser is no longer talking to the server, Kaspersky is. The way HTTPS is designed, it means that Kaspersky is responsible for validating the server’s certificate and producing a certificate warning page. And that’s what the certificate warning page looks like then:

Certificate warning page when Kaspersky is installed

There is a considerable amount of technical detail here, supposedly to allow users to make an informed decision, but usually confusing them instead. Oh, and why does it list the URL as “www.example.org”? That’s not what I typed into the address bar, it’s actually what this site claims to be (the name has been extracted from the site’s invalid certificate). That’s a tiny security issue here; it wasn’t worth reporting however, as this only affects sites accessed by IP address, which should never be the case with HTTPS.

The bigger issue: what is the user supposed to do here? There is “leave this website” in the text, but experience shows that people usually won’t read when hitting a roadblock like this. And the highlighted action here is “I understand the risks and wish to continue” which is what most users can be expected to hit.

Using clickjacking to override certificate warnings

Let’s say that we hijacked some user’s web traffic, e.g. by tricking them into connecting to our malicious WiFi hotspot. Now we want to do something evil with that, such as collecting their Google searches or hijacking their Google account. Unfortunately, HTTPS won’t let us do it. If we place ourselves between the user and the Google server, we have to use our own certificate for the connection to the user. With our certificate being invalid, this will trigger a certificate warning however.

So the goal is to make the user click “I understand the risks and wish to continue” on the certificate warning page. We could just ask nicely, and given how this page is built we’ll probably succeed in a fair share of cases. Or we could use a trick called clickjacking – let the user click it without realizing what they are clicking.

There is only one complication. When the link is clicked there will be an additional confirmation pop-up:

Warning displayed by Kaspersky when overriding a certificate

But don’t despair just yet! That warning is merely generic text, it would apply to any remotely insecure action. We would only need to convince the user that the warning is expected and they will happily click “Continue.” For example, we could give them the following page when they first connect to the network, similar to those captive portals:

Fake Kaspersky warning page

It looks like a legitimate Kaspersky warning page but isn’t; the text here was written by me. The only “real” thing here is the “I understand the risks and wish to continue” link, which actually belongs to an embedded frame. That frame contains Kaspersky’s certificate warning for www.google.com and has been positioned in such a way that only the link is visible. When the user clicks it, they will get the generic warning from above and without doubt confirm ignoring the invalid certificate. We won, now we can do our evil thing without triggering any warnings!
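
To make the trick concrete, the positioning boils down to something like the following sketch. This is an illustration only: the offsets are invented placeholders, and the frame only shows Kaspersky’s warning page because the attacker already controls the victim’s network traffic.

// Clip the framed warning page to a narrow window so only the override link shows through.
var clip = document.createElement("div");
clip.style.position = "relative";
clip.style.overflow = "hidden";
clip.style.width = "320px";   // just wide enough for the link text
clip.style.height = "24px";

var frame = document.createElement("iframe");
frame.src = "https://www.google.com/"; // with traffic hijacked, this yields the warning page
frame.style.border = "none";
frame.style.position = "absolute";
frame.style.width = "1200px";
frame.style.height = "800px";
frame.style.left = "-450px";  // shift the framed page so only the
frame.style.top = "-520px";   // "I understand the risks" link is visible

clip.appendChild(frame);
document.body.appendChild(clip);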

How do browser vendors deal with this kind of attack? They require at least two clicks on different spots of the certificate warning page in order to add an exception for an invalid certificate, which makes clickjacking attacks impracticable. Kaspersky on the other hand felt very confident about their warning prompt, so they opted for adding more information to it. This message will now show you the name of the site you are adding the exception for. Let’s just hope that accessing a site by IP address is the only scenario where attackers can manipulate that name…

Something you probably don’t know about HSTS

There is a slightly less obvious detail to the attack described above: it shouldn’t have worked at all. See, if you reroute www.google.com traffic to a malicious server and then navigate to the site, neither Firefox nor Chrome will give you the option to override the certificate warning. Getting out will be the only option available, meaning no way whatsoever to exploit the certificate warning page. What is this magic? Did browsers implement some special behavior only for Google?

Firefox certificate warning for www.google.com

They didn’t. What you see here is a side-effect of the HTTP Strict-Transport-Security (HSTS) mechanism, which Google and many other websites happen to use. When you visit Google it will send the HTTP header Strict-Transport-Security: max-age=31536000 with the response. This tells the browser: “This is an HTTPS-only website, don’t ever try to create an unencrypted connection to it. Keep that in mind for the next year.”

So when the browser later encounters a certificate error on a site using HSTS, it knows: the website owner promised to keep HTTPS functional. There is no way that an invalid certificate is ok here, so allowing users to override the certificate would be wrong.

Unless you have Kaspersky in the middle of course, because Kaspersky completely ignores HSTS and still allows users to override the certificate. When I reported this issue, the vendor response was that this isn’t a security risk because the warning displayed is sufficient. Somehow they decided to add support for HSTS nevertheless, so that current versions will no longer allow overriding certificates here.
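
For the curious, the browser-side bookkeeping is simple. Here is a minimal sketch of the idea (my illustration, not actual Firefox or Chrome code, and ignoring details such as includeSubDomains):

var hstsStore = {}; // host -> expiry timestamp in milliseconds

function recordHsts(host, headerValue) {
  // Parse a response header like "max-age=31536000".
  var match = /max-age=(\d+)/i.exec(headerValue);
  if (match) {
    hstsStore[host] = Date.now() + Number(match[1]) * 1000;
  }
}

function mayOverrideCertificateError(host) {
  // While the promise is still valid, the user must not be offered an override.
  var expiry = hstsStore[host];
  return !(expiry && expiry > Date.now());
}

recordHsts("www.google.com", "max-age=31536000");
mayOverrideCertificateError("www.google.com"); // false: no click-through allowed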

There is no doubt that there are more scenarios where Kaspersky software weakens the security precautions made by browsers. For example, if a certificate is revoked (usually because it has been compromised), browsers will normally recognize that thanks to OCSP stapling and prevent the connection. But I noticed recently that Kaspersky Internet Security doesn’t support OCSP stapling, so if this application is active it will happily allow you to connect to a likely malicious server.

Using injected content for Universal XSS

Kaspersky Internet Security isn’t merely listening in on connections to HTTPS sites, it is also actively modifying those. In some cases it will generate a response of its own, such as the certificate warning page we saw above. In others it will modify the response sent by the server.

For example, if you didn’t install the Kaspersky browser extension, it will fall back to injecting a script into server responses which is then responsible for “protecting” you. This protection does things like showing a green checkmark next to Google search results that are considered safe. As Heise Online wrote merely a few days ago, this also used to leak a unique user ID which allowed tracking users regardless of any protective measures on their side. Oops…

There is a bit more to this feature called URL Advisor. When you put the mouse cursor above the checkmark icon a message appears stating that you have a safe site there. That message is a frame displaying url_advisor_balloon.html. Where does this file load from? If you have the Kaspersky browser extension, it will be part of that browser extension. If you don’t, it will load from ff.kis.v2.scr.kaspersky-labs.com in Firefox and gc.kis.v2.scr.kaspersky-labs.com in Chrome – Kaspersky software will intercept requests to these servers and answer them locally. I noticed however that things were different in Microsoft Edge, here this file would load directly from www.google.com (or any other website if you changed the host name).

URL Advisor frame showing up when the checkmark icon is hovered

Certainly, when injecting their own web page into every domain on the web, Kaspersky developers thought about making it very secure? Let’s have a look at the code running there:

var policeLink = document.createElement("a");
policeLink.href = IsDefined(UrlAdvisorLinkPoliceDecision)
    ? UrlAdvisorLinkPoliceDecision
    : locales["UrlAdvisorLinkPoliceDecision"];
policeLink.target = "_blank";
div.appendChild(policeLink);

This creates a link inside the frame dynamically. Where does the link target come from? It’s part of the data received from the parent document, no validation performed. In particular, javascript: links will be happily accepted. So a malicious website needs to figure out the location of url_advisor_balloon.html and embed it in a frame using the host name of the website they want to attack. Then they send a message to it:

frame.contentWindow.postMessage(JSON.stringify({
  command: "init",
  data: {
    verdict: { url: "", categories: [ 21 ] },
    locales: {
      UrlAdvisorLinkPoliceDecision: "javascript:alert('Hi, this JavaScript code is running on ' + document.domain)",
      CAT_21: "click here"
    }
  }
}), "*");

What you get is a link labeled “click here” which will run arbitrary JavaScript code in the context of the attacked domain when clicked. And once again, the attackers could ask the user nicely to click it. Or they could use clickjacking, so whenever the user clicks anywhere on their site, the click goes to this link inside an invisible frame.

Injected JavaScript code running in context of the Google domain

And here you have it: a malicious website taking over your Google or social media accounts, all because Kaspersky considered it a good idea to have their content injected into secure traffic of other people’s domains. But at least this particular issue was limited to Microsoft Edge.
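
For what it’s worth, the frame could have protected itself by treating the message as untrusted input. A minimal hardening sketch (my suggestion, not Kaspersky’s actual fix) would only ever turn plain http(s) URLs into links:

window.addEventListener("message", function (event) {
  var data;
  try {
    data = JSON.parse(event.data);
  } catch (e) {
    return; // not a message meant for this frame
  }
  var locales = data && data.data && data.data.locales;
  var target = locales && locales.UrlAdvisorLinkPoliceDecision;
  if (typeof target !== "string" || !/^https?:\/\//i.test(target)) {
    return; // rejects javascript: and every other unexpected scheme
  }
  // Only now is it safe to use target as policeLink.href.
});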

Timeline
  • 2018-12-13: Sent report via Kaspersky bug bounty program: Lack of HSTS support facilitating MiTM attacks.
  • 2018-12-17: Sent reports via Kaspersky bug bounty program: Certificate warning pages susceptible to clickjacking and Universal XSS in Microsoft Edge.
  • 2018-12-20: Response from Kaspersky: HSTS and clickjacking reports are not considered security issues.
  • 2018-12-20: Requested disclosure of the HSTS and clickjacking reports.
  • 2018-12-24: Disclosure denied due to similarity with one of my other reports.
  • 2019-04-29: Kaspersky notifies me about the three issues here being fixed (KIS 2019 Patch E, actually released three weeks earlier).
  • 2019-04-29: Requested disclosure of these three issues, no response.
  • 2019-07-29: With five remaining issues reported by me fixed (KIS 2019 Patch F and KIS 2020), requested disclosure on all reports.
  • 2019-08-04: Disclosure denied on HSTS report because “You’ve requested too many tickets for disclosure at the same time.”
  • 2019-08-05: Disclosure denied on five not yet disclosed reports, asking for time until November for users to update.
  • 2019-08-06: Notified Kaspersky about my intention to publish an article about the three issues here on 2019-08-19, no response.
  • 2019-08-12: Reminded Kaspersky that I will publish an article on these three issues on 2019-08-19.
  • 2019-08-12: Kaspersky requesting an extension of the timeline until 2019-08-22, citing that they need more time to prepare.
  • 2019-08-16: Security advisory published by Kaspersky without notifying me.
Categorieën: Mozilla-nl planet

Cameron Kaiser: Chrome murders FTP like Jeffrey Epstein

sn, 17/08/2019 - 16:36
What is it with these people? Why can't things that are working be allowed to still go on working? (Blah blah insecure blah blah unused blah blah maintenance blah blah web everything.)

This leaves an interesting situation where Google has, in its very own search index, HTML pages served over FTP that its own browser won’t be able to view:

At the top of the search results, even!

Obviously those FTP HTML pages load just fine in mainline Firefox, at least as of this writing, and of course TenFourFox. (UPDATE: This won't work in Firefox either after Fx70, though FTP in general will still be accessible. Note that it references Chrome's announcements; as usual, these kinds of distributed firing squads tend to be self-reinforcing.)

Is it a little ridiculous to serve pages that way? Okay, I'll buy that. But it works fine and wasn't bothering anyone, and they must have some relevance to be accessible because Google even indexed them.

Why is everything old suddenly so bad?

Categorieën: Mozilla-nl planet

Tantek Çelik: IndieWebCamps Timeline 2011-2019: Amsterdam to Utrecht

fr, 16/08/2019 - 23:21

While not a post directly about IndieWeb Summit 2019, this post provides a bit of background and is certainly related, so I’m including it in my series of posts about the Summit. Previous post in this series:

At the beginning of IndieWeb Summit 2019, I gave a brief talk on State of the IndieWeb and mentioned that:

We've scheduled lots of IndieWebCamps this year and are on track to schedule a record number of different cities as well.

I had conceived of a graphical representation of the growth of IndieWebCamps over the past nine years, both in number and across the world, but with everything else involved with setting up and running the Summit, ran out of time. However, the idea persisted, and finally this past week, with a little help from Aaron Parecki re-implementing Dopplr’s algorithm for turning city names into colors, I was able to put together something pretty close to what I’d envisioned:

[Timeline graphic: one colored bar per IndieWebCamp city (Istanbul, Amsterdam, Utrecht, Nürnberg, Düsseldorf, Berlin, Edinburgh, Oxford, Brighton, New Haven, Baltimore, Cambridge, New York, Austin, Bellingham, Los Angeles, San Francisco, Portland) plotted across the years 2011-2019]

I don’t know of any tools to take something like this kind of locations vs years data and graph it as such. So I built an HTML table with a cell for each IndieWebCamp, as well as cells for the colspans of empty space. Each colored cell is hyperlinked to the IndieWebCamp for that city for that year.

2011-2018 and over half of 2019 are IndieWebCamps (and Summits) that have already happened. 2019 includes bars for four upcoming IndieWebCamps, which are fully scheduled and open for sign-ups.

The table markup is copy pasted from the IndieWebCamp wiki template where I built it, and you can see the template working live in the context of the IndieWebCamp Cities page. I’m sure the markup could be improved, suggestions welcome!
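
If you want to generate similar markup from your own locations-vs-years data instead of hand-editing a wiki template, a rough sketch of the idea in JavaScript (with invented sample data and a placeholder URL scheme) could look like this:

var years = [2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019];
var camps = {
  "Portland": [2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019],
  "Brighton": [2012, 2013, 2014, 2015, 2016]
};

var table = document.createElement("table");
Object.keys(camps).forEach(function (city) {
  var row = table.insertRow();
  row.insertCell().textContent = city;
  var gap = 0; // consecutive years without a camp collapse into one spacer cell
  years.forEach(function (year) {
    if (camps[city].indexOf(year) === -1) {
      gap++;
      return;
    }
    if (gap > 0) {
      row.insertCell().colSpan = gap;
      gap = 0;
    }
    var cell = row.insertCell();
    var link = document.createElement("a");
    link.href = "https://indieweb.org/" + year + "/" + city; // placeholder URL scheme
    link.textContent = String(year);
    cell.appendChild(link);
  });
  if (gap > 0) {
    row.insertCell().colSpan = gap; // trailing empty years
  }
});
document.body.appendChild(table);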

Categorieën: Mozilla-nl planet