Mozilla Nederland
The Dutch Mozilla community

Daniel Stenberg: curl supports rustls

Mozilla planet - Tue, 09/02/2021 - 11:24

curl is an internet transfer engine. A rather modular one too. Parts of curl’s functionality are provided by selectable alternative implementations that we call backends. You select which backends to enable at build time, and in many cases the backends are enabled and powered by different 3rd party libraries.
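
As an illustration, a libcurl build reports which third-party libraries power its backends; a minimal sketch querying that through the pycurl bindings (assuming pycurl is installed):

# The version string names the 3rd party libraries this libcurl build was
# compiled against, e.g. "PycURL/... libcurl/7.75.0 OpenSSL/... zlib/..."
import pycurl

print(pycurl.version)
print(pycurl.version_info())  # the same information as a structured tuple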

Many backends

curl has a range of such alternative backends for various features:

  1. International Domain Names
  2. Name resolving
  3. TLS
  4. SSH
  5. HTTP/3
  6. HTTP content encoding
  7. HTTP

Stable API and ABI

Maintaining a stable API and ABI is key to libcurl. As long as those promises are kept, changing internals such as switching between backends is perfectly fine.

The API is the armored front door that we don’t change. The backends are the garden at the back of the house that we can dig up and replant every year if we want, without having to change the front door.

TLS backends

Already back in 2005 we added support for using an alternative TLS library in curl, when we added support for GnuTLS in addition to OpenSSL, and since then we’ve added many more. We do this by having an internal API through which we do all the TLS-related things, and for each third-party library we support we have code with the necessary logic to connect that internal API to the corresponding TLS library.

rustls

Today, we merged support for yet another TLS library: rustls. This is a TLS library written in Rust, and it has a C API provided in a separate project called crustls. Strictly speaking, curl is built to use crustls.

This is still early days for the rustls backend and it is not yet feature complete. There’s more work to do and polish to apply before we can think of it as a proper competitor to the already established and well-used TLS backends, but this merge makes it much easier for more people to help out and test it. Feel free and encouraged to join in!

We count this addition as the 14th concurrently supported TLS library in curl. I’m not aware of any other project, anywhere, that supports more or even this many TLS libraries.

rustls again!

The TLS library named mesalink actually already uses rustls, but under an OpenSSL API disguise, and we have supported that for a few years already…

Credits

The TLS backend code for rustls was written and contributed by Jacob Hoffman-Andrews.


The Mozilla Blog: Mozilla Welcomes the Rust Foundation

Mozilla planet - Mon, 08/02/2021 - 18:07

Today Mozilla is thrilled to join the Rust community in announcing the formation of the Rust Foundation. The Rust Foundation will be the home of the popular Rust programming language that began within Mozilla. Rust has long been bigger than just a Mozilla project and today’s announcement is the culmination of many years of community building and collaboration. Mozilla is pleased to be a founding Platinum Sponsor of the Rust Foundation and looks forward to working with it to help Rust continue to grow and prosper.

Rust is an open-source programming language focused on safety, speed and concurrency. It started life as a side project in Mozilla Research. Back in 2010, Graydon Hoare presented work on something he hoped would become a “slightly less annoying” programming language that could deliver better memory safety and more concurrency. Within a few years, Rust had grown into a project with an independent governance structure and contributions from inside and outside Mozilla. In 2015, the Rust project announced the first stable release, Rust 1.0.

Success quickly followed. Rust is so popular that it has been voted the “most loved” programming language in Stack Overflow’s developer survey for five years in a row. Adoption is increasing as companies big and small, scientists, and many others discover its power and usability. Mozilla used Rust to build Stylo, the CSS engine in Firefox (replacing approximately 160,000 lines of C++ with 85,000 lines of Rust).

It takes a lot for a new programming language to be successful. Rust’s growth is thanks to literally thousands of contributors and a strong culture of inclusion. The wide range of contributors and adopters has made Rust a better language for everyone.

Mozilla is proud of its role in Rust’s creation and we are happy to see it outgrow its origins and secure a dedicated organization to support its continued evolution. Given its reach and impact, Rust will benefit from an organization that is 100% focused on the project.

The new Rust Foundation will have board representation from a wide set of stakeholders to help set a path to its own future. Other entities will be able to provide direct financial resources to Rust beyond in-kind contributions. The Rust Foundation will not replace the existing community and technical governance for Rust. Rather, it will be the organization that hosts Rust infrastructure, supports the community, and stewards the language for the benefit of all users.

Mozilla joins all Rustaceans in welcoming the new Rust Foundation.

The post Mozilla Welcomes the Rust Foundation appeared first on The Mozilla Blog.


Mike Taylor: Obsolete RFCs and obsolete Cookie Path checking comments

Mozilla planet - Mon, 08/02/2021 - 07:00

The other day I was reading Firefox’s CookieService.cpp to figure out how Firefox determines its maximum cookie size (more on that one day, maybe) when the following comment (from 2002, according to blame) caught my eye:

The following test is part of the RFC2109 spec. Loosely speaking, it says that a site cannot set a cookie for a path that it is not on. See bug 155083. However this patch broke several sites -- nordea (bug 155768) and citibank (bug 156725). So this test has been disabled, unless we can evangelize these sites.

Note 1: Anything having to do with broken websites is wont to catch my attention, especially olde bugs (let’s face it, in 2002 the internet was basically the High Middle Ages. Like yeah, we were killing it with the technological innovation on top of windmills and we’re getting pretty good at farming and what not, but it’s still the Middle Ages compared to today and kind of sucked).

Note 2: The two sites referenced in the Firefox comment are banks (see 155768 and 156725). And one of the axioms of web compatibility is that if you break a bank with some cool new API or non-security bug fix, game over, it’s getting reverted. And I’m pretty sure you can’t legally create test accounts for banks to run tests against and Silk Road got taken down by the feds.

But at the time, the now-obsolete rfc2109 had this to say about cookie Path attributes:

To prevent possible security or privacy violations, a user agent rejects a cookie (shall not store its information) if any of the following is true: * The value for the Path attribute is not a prefix of the request-URI.

Well, as documented, that wasn’t really web-compatible, and it’s kind of a theoretical concern (so long as you enforce path-match rules before handing out cookies from the cookie jar. ¯\_(ツ)_/¯). So Firefox commented out the conditional that would reject the cookie and added the comment in question above. As a result, people’s banks started working in Firefox again (in 2002, remember, so people could check their online balance then hit up the ATM to buy Beyblades and Harry Potter merch, and whatever else was popular back then).
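
To make the two rules concrete, here is a minimal Python sketch (illustrative only; the function names and shapes are hypothetical, not Firefox’s actual code):

# The set-time rule from rfc2109 that Firefox ended up disabling:
# reject a cookie whose Path is not a prefix of the request path.
def should_reject_cookie(request_path, cookie_path):
    return not request_path.startswith(cookie_path)

# The read-time path-match rule that browsers do enforce: only send a
# cookie back when its path matches the request path (roughly rfc6265).
def path_matches(request_path, cookie_path):
    if not request_path.startswith(cookie_path):
        return False
    rest = request_path[len(cookie_path):]
    return rest == "" or rest.startswith("/") or cookie_path.endswith("/")

assert should_reject_cookie("/account", "/admin")  # set-time rule (disabled)
assert path_matches("/admin/login", "/admin")      # read-time rule (enforced)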

My colleague Lily pointed out that Chromium has a similar comment in canonical_cookie.cc:

The RFC says the path should be a prefix of the current URL path. However, Mozilla allows you to set any path for compatibility with broken websites. We unfortunately will mimic this behavior. We try to be generous and accept cookies with an invalid path attribute, and default the path to something reasonable.

These days, rfc6265 (and 6265bis) is much more pragmatic and states exactly what Firefox and Chromium are doing:

…the Path attribute does not provide any integrity protection because the user agent will accept an arbitrary Path attribute in a Set-Cookie header.

Never one to pass up on an opportunity to delete things, I wrote some patches for Firefox and Chromium so maybe someone reading Cookie code in the future doesn’t get distracted.

Aside 1: Awkwardly my moz-phab account has been disabled, so I just attached the patch file using Splinter like it’s 2002 (more Medieval code review tech references).

Aside 2: Both of these comments have two spaces after periods. Remember that?


Will Kahn-Greene: Markus v3.0.0 released! Better metrics API for Python projects.

Mozilla planet - Fri, 05/02/2021 - 16:00
What is it?

Markus is a Python library for generating metrics.

Markus makes it easier to generate metrics in your program by:

  • providing multiple backends (Datadog statsd, statsd, logging, logging roll-up, and so on) for sending metrics data to different places

  • sending metrics to multiple backends at the same time

  • providing testing helpers for easy verification of metrics generation

  • providing a decoupled architecture making it easier to write code to generate metrics without having to worry about making sure creating and configuring a metrics client has been done--similar to the Python logging module in this way

We use it at Mozilla on many projects.
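
As a taste of what that looks like in practice, a minimal usage sketch based on the documentation (the logging backend is just one of the backends listed above):

import markus

# Configure the backends once at application startup.
markus.configure(
    backends=[{"class": "markus.backends.logging.LoggingMetrics"}]
)

# Grab a metrics interface; keys get prefixed with the name you pass in.
metrics = markus.get_metrics(__name__)

def handle_request():
    metrics.incr("request")              # emit a counter
    with metrics.timer("request_time"):  # emit a timing
        ...                              # do the actual work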

v3.0.0 released!

I released v3.0.0 just now. Changes:

Features

  • Added support for Python 3.9 (#79). Thank you, Brady!

  • Changed assert_* helper methods on markus.testing.MetricsMock to print the records to stdout if the assertion fails. This can save some time debugging failing tests. (#74)

Backwards incompatible changes

  • Dropped support for Python 3.5 (#78). Thank you, Brady!

  • markus.testing.MetricsMock.get_records and markus.testing.MetricsMock.filter_records return markus.main.MetricsRecord instances now.

    This might require you to rewrite or update tests that use the MetricsMock; a sketch follows below.
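
A minimal test sketch against the v3 API (the metric key and the handle_request function under test are hypothetical):

from markus.testing import MetricsMock

def test_handle_request_emits_metrics():
    with MetricsMock() as mm:
        handle_request()  # the instrumented code under test
        mm.assert_incr("somemodule.request")
        # get_records() now returns MetricsRecord instances
        records = mm.get_records()
        assert records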

Where to go for more

Changes for this release: https://markus.readthedocs.io/en/latest/history.html#february-5th-2021

Documentation and quickstart here: https://markus.readthedocs.io/en/latest/index.html

Source code and issue tracker here: https://github.com/willkg/markus/

Let me know how this helps you!


Firefox Nightly: These Weeks in Firefox: Issue 87

Mozilla planet - Fri, 05/02/2021 - 04:36
Highlights
  • Starting from Firefox 86, WebExtensions will not need to request the broader “tabs” permission to have access to some of the more privileged parts of the tabs API (in particular access to tab url, title and favicon url) on tabs they have host permissions for – Bug 1679688. Thanks to robbendebiene for contributing this enhancement!
  • Over ¼ of our Nightly population has Fission enabled, either by opting in, or via Normandy!
    • You can go to about:support to see if Fission is enabled. You can opt in to using it on Nightly by visiting about:preferences#experimental
    • Think you’ve found a Fission bug? Please file it here!
  • With the export and now import of logins landed and looking likely to ship soon, we are starting to have a much better story for migrating from other browsers, password managers, other Firefox profiles, etc. We ingest a common subset of the many fields these kinds of software export. Please try it out and file bugs!
  • Multiple Picture-in-Picture player support has been enabled to ride the trains in Firefox 86!
Friends of the Firefox team
Resolved bugs (excluding employees)
Fixed more than one bug (between Jan. 12th and Jan. 26th)
  • Hunter Jones
  • Swapnik Katkoori
  • Tim Nguyen :ntim
Project Updates
Add-ons / Web Extensions
Addon Manager & about:addons
  • Starting from Firefox 86, about:addons will not (wrongly) show a pending update badge on all add-on cards when a new extension is installed – Bug 1659283
    • Thanks to Tilden Windsor for contributing this fix!
  • In preparation for “addons.mozilla.org API v3 deprecation”, usage of the addons.mozilla.org (AMO) API in Firefox has been updated to point to the AMO API v4 – Bug 1686187 (riding Firefox 86 train, will be also uplifted to ESR 78.8)
  • “Line Extension” badge description in about:addons updated to make it clear that the extensions built by Mozilla are reviewed for security and performance (similarly to the description we already have on the “Recommended Extensions” badges) and to match the wording for the similar badge shown on the AMO listing pages – Bug 1687375
WebExtensions Framework
  • Manifest V3 content security policy (CSP) updated in Nightly Fx86; the new base CSP will disallow remotely hosted code in extensions with manifest_version 3 (this is ongoing work, part of the changes needed to support Manifest V3 extensions in Firefox, so this restriction does not affect Manifest V2 extensions) – Bug 1594234
WebExtension APIs
  • WebRequest internals do not await on “webRequest.onSendHeaders” listeners anymore (because they are not blocking listeners). Thanks to Brenda M Lima for contributing this fix!
Developer Tools
  • Removed cd() command (was available on the Command line in the Console panel), bug 
    • The alternative will be JS Context Selector / IFrame picker
  • Fixed support for debugging mochitests (bug)
    • mach test devtools/client/netmonitor/test/browser_net_api-calls.js --jsdebugger
    • Also works for xpcshell tests
  • DevTools Fission M3 planning and analysis
    • Backlog almost ready
    • Implementation starts next week
Fission
  • Over ¼ of our Nightly population has Fission enabled, either by opting in, or via Normandy!
Lint
Password Manager
  • Some contributors to thank:
PDFs & Printing
  • Rolling out on release. Currently at 25% enabled, plan to monitor errors and increase to 100% in late February
  • Simplify page feature is a work-in-progress, but close to being finished.
  • Duplex printing orientation is the last remaining feature to add. We’re waiting on icons from UX.
Performance
Picture-in-Picture
  • New group of MSU students just started! This semester we’ll be working with:
    • Tony (frostwyrm98)
    • David (heftydav)
    • Swapnik (katkoor2)
    • Oliver (popeoliv)
    • Guanlin (chenggu3)
  • This past weekend was our intro hackathon:
    • Over the weekend, they already landed:
      • Bug 1670094 – Fixed Picture-in-Picture (PIP) explainer text never getting RTL aligned
      • Bug 1678351 – Removed some dead CSS
      • Bug 1626600 – Leave out the PIP context menu item for empty <video>s
    • Not yet landed but made progress:
      • Bug 1654054 – Port video controls strings to Fluent
      • Bug 1674152 – Make PIP code generate Sphinx docs
      • Bug 1669205 – PIP icon will disappear when dragging the tab to create a new window
  • Here’s the metabug for all their work.
Search and Navigation
  • Added a new Nightly Experiments option for Address Bar IME (Input Method Editor) users – Bug 1685991
  • A non-working Switch to Tab result could be offered for Top Sites in Private Browsing windows – Bug 1681697
  • History results were not shown in Search Mode when the “Show history before search suggestions” option was enabled – Bug 1672507
  • Address Bar performance improvements when highlighting search strings – Bug 1687767
  • Fixed built-in Ebay search engine with multi-word search strings – Bug 1683991
Screenshots
  • Screenshots has new module owners. It was recently updated to use `browser.tabs.captureTab`. We hope to clean up the module a bit and start opening up mentored bugs.

The Mozilla Blog: What WebRTC means for you

Mozilla planet - Thu, 04/02/2021 - 22:51

If I told you that two weeks ago IETF and W3C finally published the standards for WebRTC, your response would probably be to ask what all those acronyms were. Read on to find out!

Widely available high quality videoconferencing is one of the real successes of the Internet. The idea of videoconferencing is of course old (go watch that scene in 2001 where Heywood Floyd makes a video call to his family on a Bell videophone), but until fairly recently it required specialized equipment or at least downloading specialized software. Simply put, WebRTC is videoconferencing (VC) in a Web browser, with no download: you just go to a Web site and make a call. Most of the major VC services have a WebRTC version: this includes Google Meet, Cisco WebEx, and Microsoft Teams, plus a whole bunch of smaller players.

A toolkit, not a phone

WebRTC isn’t a complete videoconferencing system; it’s a set of tools built into the browser that take care of many of the hard pieces of building a VC system so that you don’t have to. This includes:

  • Capturing the audio and video from the computer’s microphone and camera. This also includes what’s called Acoustic Echo Cancellation: removing echoes (hopefully) even when people don’t wear headphones.
  • Allowing the two endpoints to negotiate their capabilities (e.g., “I want to send and receive video at 1080p using the AV1 codec”) and arrive at a common set of parameters.
  • Establishing a secure connection between you and other people on the call. This includes getting your data through any NATs or firewalls that may be on your network.
  • Compressing the audio and video for transmission to the other side and then reassembling it on receipt. It’s also necessary to deal with situations where some of the data is lost, in which case you want to avoid having the picture freeze or hearing audio glitches.

This functionality is embedded in what’s called an application programming interface (API): a set of commands that the programmer can give the browser to get it to set up a video call. The upshot of this is that it’s possible to write a very basic VC system in a very small number of lines of code. Building a production system is more work, but with WebRTC, the browser does much of the work of building the client side for you.

Standardization

Importantly, this functionality is all standardized: the API itself was published by the World Wide Web Consortium (W3C) and the network protocols (encryption, compression, NAT traversal, etc.) were standardized by the Internet Engineering Task Force (IETF). The result is a giant pile of specifications, including the API specification, the protocol for negotiating what media will be sent or received, and a mechanism for sending peer-to-peer data. All in all, this represents a huge amount of work by too many people to count, spanning a decade and resulting in hundreds of pages of specifications.

The result is that it’s possible to build a VC system that will work for everyone right in their browser, without them having to install any software.

Ironically, the actual publication of the standards is kind of anticlimactic: every major browser has been shipping WebRTC for years and as I mentioned above, there are a large number of WebRTC VC systems. This is a good thing: widespread deployment is the only way to get confidence that technologies really work as expected and that the documents are clear enough to implement from. What the standards reflect is the collective judgement of the technical community that we have a system which generally works and that we’re not going to change the basic pieces. It also means that it’s time for VC providers who implemented non-standard mechanisms to update to what the standards say[1].

Why do you care about any of this?

At this point you might be thinking “OK, you all did a lot of work, but why does it matter? Can’t I just download Zoom?” There are a number of important reasons why WebRTC is a big deal, as described below.

Security

Probably the most important reason is security. Because WebRTC runs entirely in the browser, it means that you don’t need to worry about security issues in the software that the VC provider wants you to download. As an example, last year Zoom had a number of high profile security flaws that would, for instance, have allowed web sites to add you to calls without your permission, or mount what’s called a Remote Code Execution attack that would allow attackers to run their code on your computer. By contrast, because WebRTC doesn’t require a download, you’re not exposed to whatever vulnerabilities the vendor may have in their client. Of course browsers don’t have a perfect security record, but every major browser invests a huge amount in security technologies like sandboxing. Moreover, you’re already running a browser, so every additional application you run increases your security risk. For this reason, Kaspersky recommends running the Zoom Web client, even though the experience is a lot worse than the app.[2]

The second security advantage of WebRTC-based conferencing is that the browser controls access to the camera and microphone. This means that you can easily prevent sites from using them, as well as be sure when they are in use. For instance, Firefox prompts you before letting a site use the camera and microphone and then shows something in the URL bar whenever they are live.

WebRTC is always encrypted in transit without the VC system having to do anything else, so you mostly don’t have to ask whether the vendor has done a good job with their encryption. This is one of the pieces of WebRTC that Mozilla was most involved in putting into place, in line with Mozilla Manifesto principle number 4 (Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.). Even more exciting, we’re starting to see work on built-in end-to-end encrypted conferencing for WebRTC built on MLS and SFrame. This will help address the one major security feature that some native clients have that WebRTC does not provide: preventing the service from listening in on your calls. It’s good to see progress on that front.

Low Friction

Because WebRTC-based video calling apps work out of the box with a standard Web browser, they dramatically reduce friction. For users, this means they can just join a call without having to install anything, which makes life a lot easier. I’ve been on plenty of calls where someone couldn’t join — often because their company used a different VC system — because they hadn’t downloaded the right software, and this happens a lot less now that it just works with your browser. This can be an even bigger issue in enterprises that have restrictions on what software can be installed.

For people who want to stand up a new VC service, WebRTC means that they don’t need to write a new piece of client software and get people to download it. This makes it much easier to enter the market without having to worry about users being locked into one VC system and unable to use yours.

None of this means that you can’t build your own client and a number of popular systems such as WebEx and Meet have downloadable endpoints (or, in the case of WebEx, hardware devices you can buy). But it means you don’t have to, and if you do things right, browser users will be able to talk to your custom endpoints, thus giving casual users an easy way to try out your service without being too committed.[3]

Enhancing The Web

Because WebRTC is part of the Web, not isolated into a separate app, that means that it can be used not just for conferencing applications but to enhance the Web itself. You want to add an audio stream to your game? Share your screen in a webinar? Upload video from your camera? No problem, just use WebRTC.

One exciting thing about WebRTC is that there turn out to be a lot of Web applications that can use WebRTC besides just video calling. Probably the most interesting is the use of WebRTC “Data Channels”, which allow a pair of clients to set up a connection between them which they can use to directly exchange data. This has a number of interesting applications, including gaming, file transfer, and even BitTorrent in the browser. It’s still early days, but I think we’re going to be seeing a lot of DataChannels in the future.
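
To make the data channel idea concrete, here is a rough sketch using the Python aiortc library, which mirrors the standardized API outside the browser (an assumption for illustration). Two in-process peers run the offer/answer exchange and then talk to each other directly:

import asyncio
from aiortc import RTCPeerConnection

async def main():
    a, b = RTCPeerConnection(), RTCPeerConnection()
    channel = a.createDataChannel("chat")

    @b.on("datachannel")
    def on_datachannel(remote):
        remote.send("hello back")  # echo to the other peer

    @channel.on("open")
    def on_open():
        channel.send("hello")

    # Offer/answer exchange; a real application would ship these blobs
    # through a signaling server instead of passing them in-process.
    await a.setLocalDescription(await a.createOffer())
    await b.setRemoteDescription(a.localDescription)
    await b.setLocalDescription(await b.createAnswer())
    await a.setRemoteDescription(b.localDescription)

    await asyncio.sleep(1)  # give the messages time to flow
    await a.close()
    await b.close()

asyncio.run(main())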

The bigger picture

By itself, WebRTC is a big step forward for the Web. If you’d told people 20 years ago that they would be doing video calling from their browser, they would have laughed at you — and I have to admit, I was initially skeptical — and yet I do that almost every day at work. But more importantly, it’s a great example of the power the Web has to make people’s lives better and of what we can do when we work together to do that.

  1. Technical note: probably the biggest source of problems for Firefox users is people who implemented a Chrome-specific mechanism for handling multiple media streams called “Plan B”. The IETF eventually went with something called “Unified Plan” and Chrome supports it (as does Google Meet) but there are still a number of services, such as Slack and Facebook Video Calling, which do Plan B only which means they don’t work properly with Firefox, which implemented Unified Plan. ↩︎
  2. The Zoom Web client is an interesting case in that it’s only partly WebRTC. Unlike (say) Google Meet, Zoom Web uses WebRTC to capture audio and video and to transmit media over the network, but does all the audio and video locally using WebAssembly. It’s a testament to the power of WebAssembly that this works at all, but a head-to-head comparison of Zoom Web to other clients such as Meet or Jitsi reveals the advantages of using the WebRTC APIs built into the browser. ↩︎
  3. Google has open sourced their WebRTC stack, which makes it easier to write your own downloadable client, including one which will interoperate with browsers. ↩︎

The post What WebRTC means for you appeared first on The Mozilla Blog.


Daniel Stenberg: Webinar: curl, Hyper and Rust

Mozilla planet - Thu, 04/02/2021 - 19:49

On February 11th, 2021 18:00 UTC (10am Pacific time, 19:00 Central Europe) we invite you to participate in a webinar we call “curl, Hyper and Rust”. To join us at the live event, please register via the link below:

https://www.wolfssl.com/isrg-partner-webinar/

What is the project about, how will this improve curl and Hyper, how was it done, what lessons can be learned, what more can we expect in the future and how can newcomers join in and help?

Participating speakers in this webinar are:

Daniel Stenberg, founder and lead developer of curl.

Josh Aas, Executive Director at ISRG / Let’s Encrypt.

Sean McArthur, Lead developer of Hyper.

The event will run for 60 minutes, including a Q&A session at the end.

Recording

Questions?

If you already have a question you want to ask, please let us know ahead of time. Either in a reply here on the blog, or as a reply on one of the many tweets that you will see about this event from me and my fellow “webinarees”.


Mozilla GFX: Improving texture atlas allocation in WebRender

Mozilla planet - Thu, 04/02/2021 - 12:28

This is going to be a rather technical dive into a recent improvement that went into WebRender.

Texture atlas allocation

In order to submit work to the GPU efficiently, WebRender groups as many drawing primitives as it can into what we call batches. A batch is submitted to the GPU as a single drawing command and has a few constraints. For example, a batch can only reference a fixed set of resources (such as GPU buffers and textures). So in order to group as many drawing primitives as possible in a single batch, we need to place as many drawing parameters as possible in as few resources as possible. When rendering text, WebRender pre-renders the glyphs before compositing them on the screen, so this means packing as many pre-rendered glyphs as possible into a single texture, and the same applies to rendering images and various other things.

For a moment let’s simplify the cases of images and text and assume they are the same problem: input images (rectangles) of various rectangular sizes that we need to pack into larger textures. This is the job of the texture atlas allocator. Another common name for this is rectangle bin packing.

Many in game and web development are used to packing many images into fewer assets. In most cases this can be achieved at build time, which means that the texture atlas allocator isn’t constrained by allocation performance and only needs to find a good layout for a fixed set of rectangles, without supporting dynamic allocation/deallocation within the atlas at run time. I call this “static” atlas allocation as opposed to “dynamic” atlas allocation.

There’s a lot more literature out there about static than dynamic atlas allocation. I recommend reading A thousand ways to pack the bin which is a very good survey of various static packing algorithms. Dynamic atlas allocation is unfortunately more difficult to implement while keeping good run-time performance. WebRender needs to maintain texture atlases into which items are added and removed over time. In other words we don’t have a way around needing dynamic atlas allocation.

A while back

A while back, WebRender used a simple implementation of the guillotine algorithm (explained in A thousand ways to pack the bin). This algorithm strikes a good compromise between packing quality and implementation complexity.
The main idea behind it can be explained simply: “Maintain a list of free rectangles, find one that can hold your allocation, then split the requested allocation size out of it, creating up to two additional rectangles that are added back to the free list.” There is subtlety in which free rectangle to choose and how to split it, but overall the algorithm is built upon reassuringly understandable concepts.
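
A toy version of that idea in Python (an illustration of the algorithm, not WebRender’s actual implementation):

# Guillotine allocation: find a free rectangle that fits, then split the
# leftover space into up to two new free rectangles.
def guillotine_allocate(free_rects, w, h):
    for i, (x, y, fw, fh) in enumerate(free_rects):
        if w <= fw and h <= fh:
            free_rects.pop(i)
            if fw > w:
                free_rects.append((x + w, y, fw - w, h))   # right leftover
            if fh > h:
                free_rects.append((x, y + h, fw, fh - h))  # bottom leftover
            return (x, y, w, h)
    return None  # no free rectangle can hold the allocation

atlas = [(0, 0, 1024, 1024)]
print(guillotine_allocate(atlas, 100, 40))  # -> (0, 0, 100, 40)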

Deallocation could simply consist of adding the deallocated rectangle back to the free list, but without some way to merge back neighboring free rectangles, the atlas would quickly get into a fragmented state with a lot of small free rectangles and couldn’t allocate larger ones anymore.

Figure: Lots of free space, but too fragmented to host large-ish allocations.

To address that, WebRender’s implementation would regularly do an O(n²) search to find and merge neighboring free rectangles, which was very slow when dealing with thousands of items. Eventually we stopped using the guillotine allocator in systems that needed support for deallocation, replacing it with a very simple slab allocator, which I’ll get back to further down this post.

Moving to a worse allocator because of the run-time defragmentation issue rubbed me the wrong way, so as a side project I wrote a guillotine allocator that tracks rectangle splits in a tree in order to find and merge neighboring free rectangles in constant instead of quadratic time. I published it in the guillotiere crate. I wrote about how it works in detail in the documentation so I won’t go over it here. I’m quite happy with how it turned out, although I haven’t pushed to use it in WebRender, mostly because I wanted to first see evidence that this type of change was needed, and I already had evidence for many other things that needed to be worked on.

The slab allocator

What replaced WebRender’s guillotine allocator in the texture cache was a very simple one based on fixed power-of-two square slabs, with a few special-cased rectangular slab sizes for tall and narrow items to avoid wasting too much space. The texture is split into 512 by 512 regions, and each region is split into a grid of slabs with a fixed slab size per region.

Figure: The slab allocator in action. This is a debugging view generated from a real browsing session.

This is a very simple scheme with very fast allocation and deallocation, however it tends to waste a lot of texture memory. For example, allocating an 8×10 pixel glyph occupies a 16×16 slot, wasting more than twice the requested space. Ouch!
In addition, since each region can only hold a single slab size, space can be wasted by having a region with few allocations because its slab size happens to be uncommon.
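
The waste is easy to quantify; an idealized sketch of the square power-of-two rounding (ignoring the special-cased rectangular slab sizes):

# Round each allocation up to the smallest power-of-two square that fits.
def slab_side(n):
    side = 1
    while side < n:
        side *= 2
    return side

w, h = 8, 10
side = max(slab_side(w), slab_side(h))  # -> 16
wasted = side * side - w * h            # 256 - 80 = 176 pixels
print(wasted / (w * h))                 # -> 2.2, more than twice the request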

Improvements to the slab allocator

Images and glyphs used to be cached in the same textures. However, we render images and glyphs with different shaders, so currently they can never be used in the same rendering batches. I changed images and glyphs to be cached in separate sets of textures, which provided a few opportunities.

Not mixing images and glyphs means the glyph textures get more room for glyphs, which reduces the number of textures containing glyphs overall. In other words, fewer chances to break batches. The same naturally applies to images. This is of course at the expense of allocating more textures on average, but it is a good trade-off for us, and we are about to compensate for the memory increase by using tighter packing.

In addition, glyphs and images are different types of workloads: we usually have a few hundred images of all sizes in the cache, while we have thousands of glyphs most of which have similar small sizes. Separating them allows us to introduce some simple workload-specific optimizations.

The first optimization came from noticing that glyphs are almost never larger than 128px. Having more and smaller regions reduces the amount of atlas space that is wasted by partially empty regions, and allows us to hold more slab sizes at a given time, so I reduced the region size from 512×512 to 128×128 in the glyph atlases. In the unlikely event that a glyph is larger than 128×128, it will go into the image atlas.

Next, I recorded the allocations and deallocations browsing different pages, gathered some statistics about most common glyph sizes and noticed that on a low-dpi screen, a quarter of the glyphs would land in a 16×16 slab but would have fit in a 8×16 slab. In latin scripts at least, glyphs are usually taller than wide. Adding 8×16 and 16×32 slab sizes that take advantage of this helps a lot.
I could have further optimized specific slab sizes by looking at the data I had collected, but the more slab sizes I would add, the higher the risk of regressing different workloads. This problem is called over-fitting. I don’t know enough about the many non-latin scripts used around the world to trust that my testing workloads were representative enough, so I decided that I should stick to safe bets (such as “glyphs are usually small”) and avoid piling up optimizations that might penalize some languages. Adding two slab sizes was fine (and worth it!) but I wouldn’t add ten more of them.

Figure: The original slab allocator needed two textures to store a workload that the improved allocator can fit into a single one.

At this point, I had nice improvements to glyph allocation using the slab allocator, but I had a clear picture of the ceiling I would hit from the fixed slab allocation approach.

Shelf packing allocators

I already had guillotiere in my toolbelt, in addition to which I experimented with two algorithms derived from the shelf packing allocation strategy, both of them released in the Rust crate etagere. The general idea behind shelf packing is to separate the 2-dimensional allocation problem into a 1D vertical allocator for the shelves and within each shelf, 1D horizontal allocation for the items.

The atlas is initialized with no shelves. When allocating an item, we first find the shelf that is the best fit for the item vertically; if there is none, or if the best fit wastes too much vertical space, we add a shelf. Once we have found or added a suitable shelf, a horizontal slice of it is used to host the allocation.
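
A toy Python sketch of that procedure (illustrative, not etagere’s actual code; deallocation, shelf merging and the “wastes too much vertical space” heuristic are left out):

# Shelf packing: a vertical 1D allocator for shelves plus a horizontal
# bump allocator within each shelf.
class Shelf:
    def __init__(self, y, height, width):
        self.y, self.height, self.width = y, height, width
        self.x = 0  # horizontal bump pointer

class ShelfAtlas:
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.shelves = []
        self.top = 0  # vertical bump pointer

    def allocate(self, w, h):
        # Pick the fitting shelf that wastes the least vertical space.
        fits = [s for s in self.shelves if h <= s.height and s.x + w <= s.width]
        if fits:
            shelf = min(fits, key=lambda s: s.height - h)
        elif self.top + h <= self.height:
            shelf = Shelf(self.top, h, self.width)  # open a new shelf
            self.shelves.append(shelf)
            self.top += h
        else:
            return None  # atlas is full
        x = shelf.x
        shelf.x += w
        return (x, shelf.y, w, h)

atlas = ShelfAtlas(1024, 1024)
print(atlas.allocate(10, 12))  # -> (0, 0, 10, 12)
print(atlas.allocate(8, 11))   # -> (10, 0, 8, 11): reuses the first shelf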

At a glance we can see that this scheme is likely to provide much better packing than the slab allocator. For one, items are tightly packed horizontally within the shelves. That alone saves a lot of space compared to the power-of-two slab widths. A bit of waste happens vertically, between an item and the top of its shelf. How much the shelf allocator wastes vertically depends on how the shelf heights are chosen. Since we aren’t constrained to power-of-two sizes, we can also do much better than the slab allocator vertically.

The bucketed shelf allocator

The first shelf allocator I implemented was inspired by Mapbox’s shelf-pack allocator written in JavaScript. It has an interesting bucketing strategy: items are accumulated into fixed-size “buckets” that behave like small bump allocators. Shelves are divided into a certain number of buckets, and buckets are only freed when all of their elements are freed. The trade-off here is to keep atlas space occupied for longer in order to reduce the CPU cost of allocating and deallocating. Only the top-most shelf is removed when empty, so consecutive empty shelves in the middle aren’t merged until they become the top-most shelves, which can cause a bit of vertical fragmentation for long-running sessions. When the atlas is full of (potentially empty) shelves, the chance that a new item is too tall to fit into one of the existing shelves depends on how common the item height is. Glyphs tend to be of similar (small) heights, so it works out well enough.

I added very limited support for merging neighbor empty shelves. When an allocation fails, the atlas iterates over the shelves and checks if there is a sequence of empty shelves that in total would be able to fit the requested allocation. If so, the first shelf of the sequence becomes the size of the sum, and the other shelves are squashed to zero height. It sounds like a band aid (it is) but the code is simple and it is working within the constraints that make the rest of the allocator very simple and fast. It’s only a limited form of support for merging empty shelves but it was an improvement for workloads that contain both small and large items.

Figure: Image generated from the glyph cache in a real browsing session via a debugging tool. We see fewer/wider boxes rather than many thin boxes because the allocator internally doesn’t keep track of each item rectangle individually, so we only see buckets filling up instead.

This allocator worked quite well for the glyph texture (unsurprisingly, since Mapbox’s implementation that inspired it is used with their glyph cache). The bucketing strategy was problematic, however, with large images: the relative cost of keeping allocated space around longer was higher with larger items. Especially with long-running sessions, this allocator was a good candidate for the glyph cache but not for the image cache.

The simple shelf allocator

The guillotine allocator was working rather well with images. I was close to just using it for the image cache and moving on. However, having spent a lot of time looking at various allocation patterns, my intuition was that we could do better. This is largely thanks to being able to visualize the algorithms via our integrated debugging tool, which could generate nice SVG visualizations.

It motivated experimenting with a second shelf allocator. This one is conceptually even simpler: A basic vertical 1D allocator for shelves with a basic horizontal 1D allocator per shelf. Since all items are managed individually, they are deallocated eagerly which is the main advantage over the bucketed implementation. It is also why it is slower than the bucketed allocator, especially when the number of items is high. This allocator also has full support for merging/splitting empty shelves wherever they are in the atlas.

Figure: This was generated from the same glyph cache workload as the previous image.

Unlike the bucketed allocator, this one worked very well for the image workloads. For short sessions (visiting only a handful of web pages) it did not pack as tightly as the guillotine allocator, but after browsing for longer periods of time, it had a tendency to deal better with fragmentation.

Figure: The simple shelf allocator used on the image cache. Notice how different the image workloads look (using the same texture size), with far fewer items and a mix of large and small item sizes.

The implementation is very simple: scan the shelves linearly, and then within the selected shelf do another linear scan to find a spot for the allocation. I expected performance to scale somewhat poorly with a high number of glyphs (we are dealing in thousands of glyphs, which arguably isn’t that high), but the performance hit wasn’t as bad as I had anticipated, probably helped by a mostly cache-friendly underlying data structure.

A few other experiments

For both allocators I implemented the ability to split the atlas into a fixed number of columns. Adding columns means more (smaller) shelves in the atlas, which further reduces vertical fragmentation issues, at the cost of wasting some space at the end of the shelves. Good results were obtained on 2048×2048 atlases with two columns. You can see in the previous two images that the shelf allocator was configured to use two columns.

The shelf allocators support arranging items in vertical shelves instead of horizontal ones. It can have an impact depending on the type of workload, for example if there is more variation in width than height for the requested allocations. As far as my testing went, it did not make a significant difference with workloads recorded in Firefox so I kept the default horizontal shelves.

The allocators also support enforcing specific alignments in x and y (effectively, rounding up the size of allocated items to a multiple of the x and y alignment). This introduces a bit of wasted space but avoids leaving tiny holes in some cases. Some platforms also require a certain alignment for various texture transfer operations so it is useful to have this knob to tweak at our disposal. In the Firefox integration, we use different alignments for each type of atlas, favoring small alignments for atlases that mostly contain small items to keep the relative wasted space small.

Conclusion

Figure: Various visualizations generated while I was working on this. It’s been really fun to be able to “look” at the algorithms at each step of the process.

The guillotine allocator is the best at keeping track of all available space and provides the best packing of the algorithms I tried. The shelf allocators waste a bit of space by simplifying the arrangement into shelves, and the slab allocator wastes a lot of space for the sake of simplicity. On the other hand, the guillotine allocator is the slowest when dealing with multiple thousands of items and can suffer from fragmentation in some of the workloads I recorded. Overall the best compromise was the simple shelf allocator, which I ended up integrating in Firefox for both glyph and image textures in the cache (in both cases configured to have two columns per texture). The bucketed allocator is still a very reasonable option for glyphs, and we could switch to it in the future if we feel we should trade some packing efficiency for allocation/deallocation performance. In other parts of WebRender, for short-lived atlases (a single frame), the guillotine allocation algorithm is used.

These observations are mostly workload-dependent, though. Workloads are rarely completely random so results may vary.

There are other algorithms I could have explored (and maybe will someday, who knows), but I had found a satisfying compromise between simplicity, packing efficiency, and performance. I wasn’t aiming for state of the art packing efficiency. Simplicity was a very important parameter and whatever solutions I came up with would have to be simple enough to ship it in a web browser without risks.

To recap, my goals were to:

  • allow packing more texture cache items into fewer textures,
  • reduce the amount of texture allocation/deallocation churn,
  • avoid increasing GPU memory usage, and if possible reduce it.

This was achieved by improving atlas packing to the point that we only rarely have to allocate multiple textures for each item type. The results look pretty good so far. Before the changes in Firefox, glyphs would often be spread over a number of textures after having visited a couple of websites. Currently the cache eviction is set so that we rarely need more than one or two textures with the new allocator, and I am planning to crank it up so we only use a single texture. For images, the shelf allocator is a pretty big win as well: what used to fit into five textures now fits into two or three. Today this translates into fewer draw calls and less CPU-to-GPU transfer, which has a noticeable impact on performance on low-end Intel GPUs, in addition to reducing GPU memory usage.

The slab allocator improvements landed in bug 1674443 and shipped in Firefox 85, while the shelf allocator integration work went in bug 1679751 and will hit the release channel in Firefox 86. The interesting parts of this work are packaged in a couple of Rust crates, guillotiere and etagere, under the permissive MIT OR Apache-2.0 license.


Armen Zambrano: Making Big Sur and pyenv play nicely

Mozilla planet - Wed, 03/02/2021 - 23:09

Soon after Big Sur came out, I received my new work laptop and decided to upgrade to it. Unfortunately, I quickly discovered that the Python setup needed for Sentry required some changes. Since it took me a bit of time to figure it out, I decided to document it for anyone trying to solve the same problem.

If you are curious about all that I went through and want to see references to upstream issues, you can visit this issue. It’s a bit raw; the most important notes are in the first comment.

On Big Sur, if you try to install older versions of Python, you will need to tell pyenv to patch the code. For instance, you can install Python 3.8.7 the typical way (pyenv install 3.8.7); however, if you try to install 3.8.0 or earlier, you will have to patch the code before building Python.

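# Patch the CPython sources with the upstream Big Sur build fix, then build: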
pyenv install --patch 3.6.10 < \
<(curl -sSL https://github.com/python/cpython/commit/8ea6353.patch\?full_index\=1)

If your pyenv version is older than 1.2.22, you will also need to specify LDFLAGS. You can read more about it here.

LDFLAGS="-L$(xcrun --show-sdk-path)/usr/lib ${LDFLAGS}" \
pyenv install --patch 3.6.10 < \
<(curl -sSL https://github.com/python/cpython/commit/8ea6353.patch\?full_index\=1)

It seems very simple, however, it took me a lot of work to figure it out. I hope I saved you some time!


Martin Giger: Sunsetting Stream Notifier

Mozilla planet - Wed, 03/02/2021 - 17:43

I have decided to halt any plans to maintain the extension and focus on other spare time open source projects instead. I should have probably made this decision about seven months ago, when Twitch integration broke, however this extension means a lot to me. It was my first browser extension that still exists and went …

The post Sunsetting Stream Notifier appeared first on Humanoids beLog.


Will Kahn-Greene: Socorro: This Period in Crash Stats: Volume 2021.1

Mozilla planet - Wed, 03/02/2021 - 16:00
New features and changes in Crash Stats

Crash Stats crash report view pages show Breadcrumbs information

In 2020q3, Roger and I worked out a new Breadcrumbs crash annotation for crash reports generated by the android-components crash reporter. It's a JSON-encoded field with a structure just like the one that the sentry-sdk sends. Incoming crash reports that have this annotation will show the data on the crash report view page to people who have protected data access.
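
For reference, a minimal sketch of what a single sentry-style breadcrumb entry looks like (field names follow sentry-sdk conventions; the exact set in crash annotations may differ):

# Shape of one breadcrumb entry (assumed from the sentry-sdk docs).
breadcrumb = {
    "timestamp": 1612345678.0,   # when it happened
    "type": "default",
    "category": "ui.click",      # e.g. a UI interaction
    "level": "info",
    "message": "user tapped reload",
    "data": {},                  # arbitrary extra key/value pairs
}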


Figure 1: Screenshot of Breadcrumbs data in Details tab of crash report view on Crash Stats.

I implemented it based on what we get in the structure and what's in the Sentry interface.

Breadcrumbs information is not searchable with Supersearch. Currently, it's only shown in the crash report view in the Details tab.

Does this help your work? Are there ways to improve this? If so, let me know!

This work was done in [bug 1671276].

Crash Stats crash report view pages show Java exceptions

For the longest of long times, crash reports from a Java process included a JavaStackTrace annotation which was a big unstructured string of problematic parseability and I couldn't do much with it.

In 2020q4, Roger and I worked out a new JavaException crash annotation which was a JSON-encoded structured string containing the exception information. Not only does it have the final exception, but it also has cascading exceptions if there are any! With a structured form of the exception, we can suddenly do a lot of interesting things.

As a first step, I added display of the Java exception information to the crash report view page in the Details tab. It's in the same place that you would see the crashing thread stack if this were a C++/Rust crash.

Just like JavaStackTrace, the JavaException annotation has some data in it that can have PII in it. Because of that, the Socorro processor generates two versions of the data: one that's sanitized (no java exception message values) and one that's raw. If you have protected data access, you can see the raw one.

The interface is pretty wide and exceeds the screenshot. Sorry about that.


Figure 2: Screenshot of Java exception data in Details tab of crash report view in Crash Stats.

My next step is to use the structured exception information to improve Java crash report signatures. I'm doing that work in [bug 1541120] and hoping to land that in 2021q1. More on that later.

Does this help your work? Are there ways to improve this? If so, let me know!

This work was done in [bug 1675560].

Changes to crash report view

One of the things I don't like about the crash report view is that it's impossible to intuit where the data you're looking at is from. Further, some of the tabs were unclear about what bits of data were protected data and what bits weren't. I've been improving that over time.

The most recent step involved the following changes:

  1. The "Metadata" tab was renamed to "Crash Annotations". This tab holds the crash annotation data from the raw crash before processing as well as a few fields that the collector adds when accepting a crash report from the client. Most of the fields are defined in the CrashAnnotations.yaml file in mozilla-central. The ones that aren't, yet, should get added. I have that on my list of things to get to.

  2. The "Crash Annotations" tab is now split into public and protected data sections. I hope this makes it a little clearer which is which.

  3. I removed some unneeded fields that the collector adds at ingestion.

Does this help your work? Are there ways to improve this? If so, let me know!

What's in the queue

In addition to the day-to-day stuff, I'm working on the following big projects in 2021q1.

Remove the Email field

Last year, I looked into who's using the Email field and for what, whether the data was any good, and in what circumstances do we even get Email data. That work was done in [bug 1626277].

The consensus is that since not all of the crash reporter clients let a user enter in an email address, it doesn't seem like we use the data, and it's pretty toxic data to have, we should remove it.

The first step of that is to delete the field from the crash report at ingestion. I'll be doing that work in [bug 1688905].

The second step is to remove it from the webapp. I'll be doing that work in [bug 1688907].

Once that's done, I'll write up some bugs to remove it from the crash reporter clients and wherever else it is in products.

Does this affect you? If so, let me know!

Redo signature generation for Java crashes

Currently, signature generation for Java crashes is pretty basic and it's not flexible in the ways we need it. Now we can fix that.

I need some Java crash expertise to bounce ideas off of and to help me verify "goodness" of signatures. If you're interested in helping in any capacity or if you have opinions on how it should work or what you need out of it, please let me know.

I'm hoping to do this work in 2021q1.

The tracker bug is [bug 1541120].

Closing

Thank you to Roger Yang who implemented Breadcrumbs and JavaException reporting and Gabriele Svelto who advised on new annotations and how things should work! Thank you to everyone who submits signature generation changes--I really appreciate your efforts!


Daniel Stenberg: curl 7.75.0 is smaller

Mozilla planet - Wed, 03/02/2021 - 08:23

There’s been another 56-day release cycle and here’s another curl release to chew on!

Release presentation

Numbers

the 197th release
6 changes
56 days (total: 8,357)

113 bug fixes (total: 6,682)
268 commits (total: 26,752)
0 new public libcurl functions (total: 85)
1 new curl_easy_setopt() option (total: 285)

2 new curl command line options (total: 237)
58 contributors, 30 new (total: 2,322)
31 authors, 17 new (total: 860)
0 security fixes (total: 98)
0 USD paid in Bug Bounties (total: 4,400 USD)

Security

No new security advisories this time!

Changes

We added --create-file-mode to the command line tool. To be used for the protocols where curl needs to tell the remote server what “mode” to use for the file when created. libcurl already supported this, but now we expose the functionality to the tool users as well.

The --write-out option got five new “variables” to use. Detailed in this separate blog post.

The CURLOPT_RESOLVE option got an extended format that now allows entries to be inserted to get timed-out after the standard DNS cache expiry time-out.

gophers:// – the Gopher protocol done over TLS – is now supported by curl.

As a new experimentally supported HTTP backend, you can now build curl to use Hyper. It is not quite up to 100% parity in features just yet.

AWS HTTP v4 Signature support. This is an authentication method for HTTP used by AWS and some other services. See CURLOPT_AWS_SIGV4 for libcurl and --aws-sigv4 for the curl tool.

Bug-fixes

Some of the notable things we’ve fixed this time around…

Reduced struct sizes

In my ongoing struggles to remove “dead weight” and ensure that curl can run on as small devices as possible, I’ve trimmed down the size of several key structs in curl. The memory footprint of libcurl is now smaller than it has been for a very long time.

Reduced conn->data references

While not exactly a bug-fix itself, this is a step in a larger refactor of libcurl where we work on removing all references back from connections to the transfer. The grand idea is that transfers can point to connections, but since a connection can be owned and used by many transfers, we should remove all code that references back to a transfer from the connection, to simplify the internals. We’re not quite there yet.

Silly writeout time units bug

Many users found out that when asking the curl tool to output timing information with -w, I accidentally made it show microseconds instead of seconds in 7.74.0! This is fixed and we’re back to the way it always was now…

CURLOPT_REQUEST_TARGET works with HTTP proxy

The option that lets the user set the “request target” of an HTTP request to something custom (like for example “*” when you want to issue a request using the OPTIONS method) didn’t work over proxy!

CURLOPT_FAILONERROR fails after all headers

Often used with the tool’s --fail flag, this is a feature that makes libcurl stop and return an error if the HTTP response code is 400 or larger. Starting in this version, curl will still read and parse all the response headers before it stops and exits. This allows curl to act on and store contents from the other headers, which can be used for features in combination with --fail.
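
On the libcurl side this is CURLOPT_FAILONERROR; a minimal sketch through the pycurl bindings (assuming pycurl; any binding exposing the option behaves the same):

import pycurl
from io import BytesIO

buf = BytesIO()
c = pycurl.Curl()
c.setopt(pycurl.URL, "https://example.com/does-not-exist")
c.setopt(pycurl.FAILONERROR, True)  # fail on HTTP responses >= 400
c.setopt(pycurl.WRITEDATA, buf)
try:
    c.perform()
except pycurl.error as exc:
    # As of 7.75.0, the response headers have still been read and
    # parsed by the time the transfer fails.
    print("transfer failed:", exc)
finally:
    c.close()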

Proxy-Connection duplicated headers

In some circumstances, providing a custom “Proxy-Connection:” header for an HTTP request would still get curl’s own added header in the request as well, making the request get sent with a duplicate set!

CONNECT chunked encoding race condition

There was a bug in the code that handles proxy responses, when the body of the CONNECT responses was using chunked-encoding. curl could end up thinking the response had ended before it actually had…

proper User-Agent header setup

Back in 7.71.0 we fixed a problem with the user-agent header and made it get stored correctly in the transfer struct, from previously having been stored in the connection struct.

That caused a regression that we have fixed now. The previous code had a mistake that caused the user-agent header to not get used when a connection was re-used or multiplexed, which from an outside perspective made it appear to go missing in a random fashion…

add support for %if [feature] conditions in tests

Thanks to the new preprocessor we added for test cases some releases ago, we could now take the next step and offer conditionals in the test cases so that we can now better allow tests to run and behave differently depending on features and parameters. Previously, we have resorted to only make tests require a certain feature set to be able to run and otherwise skip the tests completely if the feature set could be satisfied, but with this new ability we can make tests more flexible and thus work with a wider variety of features.

if IDNA conversion fails, fallback to Transitional

A user reported that curl failed to get the data when given a funny URL, while it worked fine in browsers (and wget).

The host name consists of a heart and a fox emoji in the .ws top-level domain. This is yet another URLs-are-weird issue, and how to do International Domain Names with them is indeed a complicated matter. Starting now, curl falls back and tries a more conservative approach if the first take fails and voilà, now curl too can get the heart-fox URL just fine… Regular readers of my blog might remember the smiley URLs of 2015, which were similar.

urldata: make magic first struct field

We provide types for the most commonly used handles in the libcurl API as typedef’ed void pointers. The handles are typically declared like this:

CURL *easy;
CURLM *multi;
CURLSH *shared;

… but since they’re typedefed void-pointers, the compiler cannot helpfully point out if a user passes in the wrong handle to the wrong libcurl function and havoc can ensue.

Starting now, all three of these handles have a “magic” struct field in the same relative place within their structs, so that libcurl can much better detect when the wrong kind of handle is passed to a function. Instead of going bananas or even crashing, libcurl can properly and politely return an error code saying the input was wrong.

Since you’d ask: using void-pointers like that was a mistake and we regret it. There are better ways to accomplish the same thing, but the train has left. When we’ve tried to correct this situation there have been cracks in the universe and complaints have been shouted through the ether.
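For the curious, the pattern itself is straightforward. A minimal sketch with entirely hypothetical names and values – not curl’s actual internals:

#define EASY_MAGIC  0xc0ffee01U
#define MULTI_MAGIC 0xc0ffee02U

/* every handle type keeps its magic at the same relative place */
struct easy_handle  { unsigned int magic; /* ... */ };
struct multi_handle { unsigned int magic; /* ... */ };

/* a function that expects an easy handle can cheaply sanity-check it */
static int looks_like_easy(void *h)
{
  return h && ((struct easy_handle *)h)->magic == EASY_MAGIC;
}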

SECURE_RENEGOTIATION support for wolfSSL

Turned out we didn’t support this before and it wasn’t hard to add…

openssl: lowercase the host name before using it for SNI

The SNI (Server Name Indication) field is data set in TLS that tells the remote server which host name we want to connect to, so that it can present the client with the correct certificate etc since the server might serve multiple host names.

The spec clearly says that this field should contain the DNS name and that it is case insensitive – like DNS names are. Turns out it wasn’t even hard to find multiple servers that misbehave and return the wrong cert if the given SNI name is not sent in lowercase! The popular browsers always send the SNI name in lowercase… In curl we keep the host name internally exactly as it was given to us in the URL.

With a silent protest that nobody cares about, we went ahead and made curl also lowercase the host name in the SNI field when using OpenSSL.
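A sketch of the idea, assuming OpenSSL and a host name already extracted from the URL:

#include <ctype.h>
#include <string.h>
#include <openssl/ssl.h>

/* lowercase a copy of the host name before using it for SNI */
static int set_sni_lowercase(SSL *ssl, const char *host)
{
  char lowered[256];
  size_t i, len = strlen(host);
  if(len >= sizeof(lowered))
    return 0; /* too long for this sketch */
  for(i = 0; i < len; i++)
    lowered[i] = (char)tolower((unsigned char)host[i]);
  lowered[len] = '\0';
  return SSL_set_tlsext_host_name(ssl, lowered) == 1;
}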

I did not research how all the other TLS libraries curl can use behave when setting SNI. This same change is probably going to have to be done in more places, or perhaps we need to surrender and do the lowercasing once and for all at a central place… Time will tell!

Future!

We have several pull requests in the queue suggesting changes, which means the next release is likely to be named 7.76.0 and the plan is to ship that on March 31st, 2021.

Send us your bug reports and pull requests!

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 376

Mozilla planet - wo, 03/02/2021 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No official blog posts this week.

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous
Crate of the Week

This week's crate is fancy-regex, a regex implementation that uses the regex crate for speed and backtracking for fancy features.

Thanks to Benjamin Minixhofer for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

323 pull requests were merged in the last week

Rust Compiler Performance Triage

Another week dominated by rollups, most of which had relatively small changes with unclear causes embedded. Overall no major changes in performance this week.

Triage done by @simulacrum. Revision range: 1483e67addd37d9bd20ba3b4613b678ee9ad4d68..f6cb45ad01a4518f615926f39801996622f46179

Link

2 Regressions, 1 Improvement, 1 Mixed

3 of them in rollups

See the full report for more.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs
New RFCs
Upcoming Events
Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

This time we had two very good quotes and I could not decide, so here are both:

What I have been learning ... was not Rust in particular, but how to write sound software in general, and that in my opinion is the largest asset that the Rust community taught me, through the language and tools that you developed.

Under this prism, it was really easy for me to justify the steep learning curve that Rust offers: I wanted to learn how to write sound software, writing sound software is really hard, and the Rust compiler is a really good teacher.

[...]

This ability to identify unsound code transcends Rust's language, and in my opinion is heavily under-represented in most cost-benefit analysis over learning Rust or not.

Jorge Leitao on rust-users

and

Having a fast language is not enough (ASM), and having a language with strong type guarantees neither (Haskell), and having a language with ease of use and portability also neither (Python/Java). Combine all of them together, and you get the best of all these worlds.

Rust is not the best option for any coding philosophy, it’s the option that is currently the best at combining all these philosophies.

/u/CalligrapherMinute77 on /r/rust

Thanks to 2e71828 and Rusty Shell for their respective suggestions.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Categorieën: Mozilla-nl planet

The Talospace Project: Followup on Firefox 85 for POWER: new low-level fix

Mozilla planet - di, 02/02/2021 - 03:43
Shortly after posting my usual update on Firefox on POWER, I started to notice odd occasional tab crashes in Fx85 that weren't happening in Firefox 84. Dan Horák independently E-mailed me to report the same thing. After some digging, it turned out that our fix way back when for Firefox 70 was incomplete: although it renovated the glue that allows scripts to call native functions and fixed a lot of problems, it had an undiagnosed edge case where if we had a whole lot of float arguments we would spill parameters to the wrong place in the stack frame. Guess what type of function was now getting newly called?

This fix is now in the tree as bug 1690152; read that bug for the dirty details. You will need to apply it to Firefox 85 and rebuild, though I plan to ask to land this on beta 86 once it sticks and it will definitely be in Firefox 87. It should also be applied to ESR 78, though that older version doesn't exhibit the crashes to the frequency Fx85 does. This bug also only trips in optimized builds.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: addons.mozilla.org API v3 Deprecation

Mozilla planet - ma, 01/02/2021 - 17:05

The addons.mozilla.org (AMO) external API can be used by users and developers to get information about add-ons available on AMO, and to submit new add-on versions for signing. It’s also used by Firefox for recommendations, by the web-ext tool, and internally within the addons.mozilla.org website, among other things.

We plan to shut down Version 3 (v3) of the AMO API on December 31, 2021. If you have any personal scripts that rely on v3 of the API, or if you interact with the API through other means, we recommend that you switch to the stable v4. You don’t need to take any action if you don’t use the AMO API directly. The AMO API v3 is entirely unconnected to manifest v3 for the WebExtensions API, which is the umbrella project for major changes to the extensions platform itself.

Roughly five years ago, we introduced v3 of the AMO API for add-on signing. Since then, we have continued developing additional versions of the API to fulfill new requirements, but have maintained v3 to preserve backwards compatibility. However, having to maintain multiple different versions has become a burden. This is why we’re planning to update dependent projects to use v4 of the API soon and shut down v3 at the end of the year.

You can find more information about v3 and v4 on our API documentation site. When updating your scripts, we suggest just making the change from “/v3/” to “/v4/” and seeing if everything still works – in most cases it will.

Feel free to contact us if you have any difficulties.

The post addons.mozilla.org API v3 Deprecation appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Tiger Oakes: Turning junk phones into an art display

Mozilla planet - ma, 01/02/2021 - 01:00

Phones attached to wood panel with black wire wall art stemming from it

What do you do with your old phone when you get a new one? It probably goes to a pile in the back of the closet, or to the dump. I help my family with tech support and end up with any of their old devices, so my pile of junk phones got bigger and bigger.

I didn’t want to leave the phones lying around and collecting dust. One day I had the idea to stick them on the wall like digital photo frames. After some time with Velcro, paint, and programming, I had all the phones up and running.

What it does

These phones can do anything a normal phone does, but I’ve tweaked the use cases since they’re meant to be viewed and not touched.

  • Upload images to each individual cell phone or an image that spans multiple phones.
  • Show a drink list or other text across all phones.
  • Indicate at a glance if I or my partner is in a meeting.
Example of CellWall running with different screens

Parts and assembly

Each of these phones has a story of its own. Some have cracked screens, some can’t connect to the internet, and one I found in the woods. To build CellWall, I needed a physical board, software for the phones, and a way to mount the phones on the board. You might have some of this lying around already!

Wood plank base (the “Wall”)

First off: there needs to be a panel for the phones to sit on top of. You could choose to stick phones directly on your wall, but I live in an apartment and I wanted to make something I could remove. I previously tried using a foam board but decided to “upgrade” to a wood panel with paint.

I started off arranging the phones on the floor and figuring out how much space was needed. I took some measurements and estimated that the board needed to be 2 feet by 1 1/2 feet to comfortably fit all the phones and wires.

Once I had some rough measurements, I took a trip to Home Depot. Home Depot sells precut 2 feet by 2 feet wood panels, so I found a sturdy light piece. You can get wood cut for free inside the store by an employee or at a DIY station, so I took out a saw and cut off the extra 6-inch piece.

The edges can be a little sharp afterwards. Use a block of sandpaper to smooth them out.

I wanted the wood board to blend in with my wall and not look like…wood. At a craft store, I picked up a small bottle of white paint and a paintbrush. At home, on top of some trash bags, I started painting a few coats of white.

Mounting and connecting the phones (the “Cells”)

To keep the phones from falling off, I use Velcro. It’s perfect for securely attaching the phones to the board while allowing them to be removed if needed.

Before sticking them on, I also double-checked that the phones turn on at all. Most do, and the ones that are busted make a nice extra decoration.

If the phone does turn on, enable developer mode. Open settings, open the System section, and go to “About phone”. Developer mode is hidden here - by tapping on “Build number” many times, you eventually get a prompt indicating you are now a true Android developer.

The wires are laid out with a bunch of tiny wire clips. $7 will get you 100 of these clips in a bag, and I’ve laid them out so each clip only contains 1 or 2 wires. The wires themselves are all standard phone USB cables you probably have lying around for charging. You can also buy extra cables for less than a dollar each at Monoprice.

All the wires feed into a USB hub. This hub lets me connect all the phones to a computer just using a single wire. I had one lying around, but similar hubs are on Amazon for $20. The hub needs a second cable that plugs directly into an outlet and provides extra power, since it needs to charge so many phones.

Software

With all the phones hooked up to the USB hub, I can connect them all to a single computer server. All of these phones are running Android, and I’ll use this computer to send commands to them.

How to talk to Android phones from a computer

Usually, phones communicate to a server through the internet over WiFi. But, some of the phones don’t have working WiFi, so I need to connect over the USB cable instead. The computer communicates with the phones using a program from Google called the Android Debug Bridge. This program, called ADB for short, lets you control an Android phone by sending commands, such as installing a new app, simulating a button, or starting an app.

You can check if ADB can connect to your devices by running the command adb devices. The first time this runs, each phone gets a prompt to check if you trust this computer. Check the “remember” box and hit OK.

Android uses a system called “intents” to open an app. The simplest example is tapping an icon on the home screen, which sends a “launch” intent. However, you can also send intents with additional data, such as an email subject and body when opening an email app, or the address of a website when opening a web browser. Using this system, I can send some data to a custom Android app over ADB that tells it which screen to display.

# Command to send an intent using ADB
# -a: the intent action type, such as viewing a website
# -d: data URI to pass in the intent
adb shell am start -a android.intent.action.VIEW -d https://example.com

The Android client

Each phone is running a custom Android app that interprets intents and then displays one of three screens.

  • The text screen shows large text on a coloured background.
  • The image screen shows one full-screen image loaded over the internet.
  • The website screen loads a website, which is rendered with GeckoView from Mozilla.

This doesn’t sound like a lot, but when all the devices are connected together to a single source, you can achieve complicated functionality.

The Node.js server

The core logic doesn’t run on the phones but instead runs on the computer all the phones are connected to. Any computer with a USB port can work as the server that the phones connect to, but the Raspberry Pi is nice and small and uses less power.

This computer runs server software that acts as the manager for all the connected devices, sending each of them different data. A grocery list can be shown by spreading lines of text across multiple phones, and larger images can be displayed by cutting them up on the server and sending a cropped piece to each cell.

The server software is written in TypeScript and creates an HTTP server to expose functionality through different web addresses. This allows other programs to communicate with the server and lets me make a bridge with a Google Home or smart home software.

The remote control

To control CellWall, I wrote a small JavaScript app served by the Node server. It includes a few buttons to turn each display on, controls for specific screens, and presets to display. These input elements all send HTTP requests to the server, which then converts them into ADB commands sent to the cells.

Remote control app with power buttons, device selection, and manual display controls

Diagram of the request flow: the remote sends an HTTP request to show a URL, the CellWall server tells ADB to send a VIEW intent, and ADB sends the intent to each phone.

As a nice final touch, I put some black masking tape to resemble wires coming out of the board. While this is optional, it makes a nice Zoom background for meetings. My partner’s desk is across the room, and I frequently hear her coworkers comment on the display behind her.

I hope you’re inspired to try something similar yourself. All of my project code is available on GitHub. Let me know how yours turns out! I’m happy to answer any questions on Twitter @Not_Woods.

NotWoods/cell-wall: a multi-device display for showing interactive data, such as photos, weather information, calendar appointments, and more.
Categorieën: Mozilla-nl planet

Cameron Kaiser: Floodgap.com down due to domain squatter attack on Network Solutions

Mozilla planet - do, 28/01/2021 - 21:14
Floodgap sites are down because someone did a mass attack on NetSol (this also attacked Perl.com and others). I'm working to resolve this. More shortly.

Update: Looks like it was a social engineering attack. I spoke with a very helpful person in their security department (Beth) and she walked me through it. On the 26th someone initiated a webchat with their account representatives and presented official-looking but fraudulent identity documents (a photo ID, a business license and a utility bill), then got control of the account and logged in and changed everything. NetSol is in the process of reversing the damage and restoring the DNS entries. They will be following up with me for a post-mortem. I do want to say I appreciate how quickly and seriously they are taking this whole issue.

If you are on Network Solutions, check your domains this morning, please. I'm just a "little" site, and I bet a lot of them were attacked in a similar fashion.

Update the second: Domains should be back up, but it may take a while for them to propagate. The servers themselves were unaffected, and I don't store any user data anyway.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Four ways to protect your data privacy and still be online

Mozilla planet - do, 28/01/2021 - 09:01

Today is Data Privacy Day, which is a good reminder that data privacy is a thing, and you’re in charge of it. The simple truth: your personal data is very …

The post Four ways to protect your data privacy and still be online appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Daniel Stenberg: What if GitHub is the devil?

Mozilla planet - do, 28/01/2021 - 08:28

Some critics think the curl project shouldn’t use GitHub. The reasons for being against GitHub hosting tend to be one or more of:

  1. it is an evil proprietary platform
  2. it is run by Microsoft and they are evil
  3. GitHub is American thus evil

Some have insisted on craziness like “we let GitHub hold our source code hostage”.

Why GitHub?

The curl project switched to GitHub (from Sourceforge) almost eleven years ago and it’s been a smooth ride ever since.

We’re on GitHub not only because it provides a myriad of practical features and is a stable and snappy service for hosting and managing source code. GitHub is also a developer hub for millions of developers who already have accounts and are familiar with the GitHub style of developing, the terms and the tools. By being on GitHub, we reduce friction from the contribution process and we maximize the ability for others to join in and help. We lower the bar. This is good for us.

I like GitHub.

Self-hosting is not better

Providing even close to the same uptime and snappy response times with a self-hosted service is a challenge, and it would take someone time and energy to volunteer that work – time and energy we can instead spend on developing the project. As a small independent open source project, we don’t have any “infrastructure department” that would do it for us. And trust me: we already have enough infrastructure management to deal with without having to add to that pile.

… and by running our own hosted version, we would lose the “network effect” and convenience for people that already are on and know the platform. We would also lose the easy integration with cool services like the many different CI and code analyzer jobs we run.

Proprietary is not the issue

While git is open source, GitHub is a proprietary system. But the thing is that even if we would go with a competitor and get our code hosting done elsewhere, our code would still be stored on a machine somewhere in a remote server park we cannot physically access – ever. It doesn’t matter if that hosting company uses open source or proprietary code. If they decide to switch off the servers one day, or even just selectively block our project, there’s nothing we can do to get our stuff back out from there.

We have to work so that we minimize the risk for it and the effects from it if it still happens.

A proprietary software platform holds our code just as much hostage as any free or open source software platform would, simply by the fact that we let someone else host it. They run the servers our code is stored on.

If GitHub takes the ball and goes home

No matter which service we use, there’s always a risk that they will turn off the light one day and not come back – or just change the rules or licensing terms that would prevent us from staying there. We cannot avoid that risk. But we can make sure that we’re smart about it, have a contingency plan or at least an idea of what to do when that day comes.

If GitHub shuts down immediately and we get zero warning to rescue anything at all from them, what would be the result for the curl project?

Code. We would still have the entire git repository with all code, all source history and all existing branches up until that point. We’re hundreds of developers who pull that repository frequently, and many automatically, so there’s a very distributed backup all over the world.

CI. Most of our CI setup is done with yaml config files in the source repo. If we transition to another hosting platform, we could reuse them.

Issues. Bug reports and pull requests are stored on GitHub and a sudden exit would definitely make us lose some of them. We do daily “extractions” of all issues and pull-requests so a lot of meta-data could still be saved and preserved. I don’t think this would be a terribly hard blow either: we move long-standing bugs and ideas over to documents in the repository, so the currently open ones could likely be resubmitted within the near future.

There’s no doubt that it would be a significant speed bump for the project, but it would not be worse than that. We could bounce back on a new platform and development would go on within days.

Low risk

It’s a rare thing that a service would suddenly, with no warning and no heads-up, just go black and leave projects completely stranded. In most cases, we get alerts, notifications and a chance to transition cleanly and orderly.

There are alternatives

Sure there are alternatives. Both pure GitHub alternatives that look similar and provide similar services, and projects that would allow us to run similar things ourselves and host locally. There are many options.

I’m not looking for alternatives. I’m not planning to switch hosting anytime soon! As mentioned above, I think GitHub is a net positive for the curl project.

Nothing lasts forever

We’ve switched services several times before and I’m expecting that we will change again in the future, for all sorts of hosting and related offerings that we provide to the project and to the developers and participants within it. Nothing lasts forever.

When a service we use goes down or just turns sour, we will figure out the best possible replacement and take the jump. Then we patch up all the cracks the jump may have caused and continue the race into the future. Onward and upward. The way we know and the way we’ve done for over twenty years already.

Credits

Image by Elias Sch. from Pixabay

Updates

After this blog post went live, some users remarked that I’m “disingenuous” in the list of reasons at the top that people have presented to me, because I don’t mention the moral issues staying on GitHub presents – like for example previously reported workplace conflicts and their association with hideous American immigration authorities.

This is rather the opposite of disingenuous. This is the truth. Not a single person has ever asked me to leave GitHub for those reasons – not me personally, and nobody has raised it with the wider project either.

These are good reasons to discuss and consider if a service should be used. Have there been violations of “decency” significant enough to make us leave? Have we crossed that line in the sand? I’m leaning towards “no” for now, but I’m always listening to what curl users and developers say. Where do you think the line is drawn?

Categorieën: Mozilla-nl planet

The Talospace Project: Firefox 85 on POWER

Mozilla planet - do, 28/01/2021 - 04:32
Firefox 85 declares war on supercookies, enables link preloading and adds improved developer tools (just in time, since Google's playing games with Chromium users again). This version builds and runs so far uneventfully on this Talos II. If you want a full PGO-LTO build, which I strongly recommend if you're going to bother building it yourself, grab the shell script from Firefox 82 if you haven't already and use this updated diff to enable PGO for gcc. Either way, the optimized and debug .mozconfigs I use are also unchanged from Fx82. At some point I'll get around to writing an upstreamable patch and then we won't have to keep carrying the diff around.
Categorieën: Mozilla-nl planet
