Firefox Tooling Announcements: MozPhab 2.8.2 Released
Bugs resolved in Moz-Phab 2.8.2:
- bug 1999231 moz-phab patch -a here with jj is too verbose (prints status after every operation)
Discuss these changes in #engineering-workflow on Slack or #Conduit Matrix.
Firefox hacking in Fedora: Firefox & Linux in 2025
HDR YouTube video clip I used for testing
Last year brought a wealth of new features and fixes to Firefox on Linux. Besides numerous improvements and bug fixes, I want to highlight some major achievements: HDR video playback support, reworked rendering for fractionally scaled displays, and asynchronous rendering implementation. All this progress was enabled by advances in the Wayland compositor ecosystem, with new features implemented by Mutter and KWin.
The most significant news on the Wayland scene is HDR support, tracked by Bug 1642854. It’s disabled by default but can be enabled in recent Wayland compositors using the gfx.wayland.hdr preference at about:config (or by gfx.wayland.hdr.force-enabled if you don’t have an HDR display).
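If you prefer to manage these via a user.js file instead of flipping them in about:config, the equivalent entries would look like this (a sketch based on the pref names above, not official documentation):

```js
// Enable HDR video playback on Wayland (requires a recent compositor)
user_pref("gfx.wayland.hdr", true);
// Optional: force-enable HDR even without an HDR display (testing only)
user_pref("gfx.wayland.hdr.force-enabled", true);
```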
HDR mode uses a completely different rendering path, similar to the rendering used on Windows and macOS. It’s called native rendering or composited rendering, and it places specific application layers directly into the Wayland compositor as subsurfaces.
The first implementation was done by Robert Mader (presented at FOSDEM), and I unified the implementation for HDR and non-HDR rendering paths as a new WaylandSurface object.
The Firefox application window is actually composited from multiple subsurfaces layered together. This design allows HDR content like video frames to be sent directly to the screen while the rest of the application (controls and HTML page) remains in SDR mode. It also enables power-efficient rendering when video frames are decoded on the graphics card and sent directly to the screen (zero-copy playback). In fullscreen mode, this rendering is similar to mpv or mplayer playback and uses minimal power resources.
I also received valuable feedback from AMD engineers who suggested various improvements to HDR playback. We removed unnecessary texture creation over decoded video frames (they’re now displayed directly as wl_buffers without any GL operations) and implemented wl_buffer recycling as mpv does.
For HDR itself (since composited rendering is available for any video playback), Firefox on Wayland uses the color-management-v1 protocol to display HDR content on screen, along with the BT.2020 color space and the PQ transfer function. It uses 10-bit color values, so you need VP9 profile 2 to decode it in hardware. Firefox also implements software decoding and direct upload to dmabuf frames as a fallback.
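For readers curious what the PQ transfer function actually involves, here is a minimal Python sketch of the standard SMPTE ST 2084 encode (inverse EOTF) and decode (EOTF) formulas. This is the textbook math, not Firefox’s actual implementation:

```python
# SMPTE ST 2084 (PQ) transfer function constants, as defined by the spec.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    """Map linear luminance (0..10000 cd/m^2) to a PQ signal in [0, 1]."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

def pq_decode(signal: float) -> float:
    """Map a PQ signal in [0, 1] back to linear luminance in nits."""
    p = signal ** (1 / M2)
    return (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1) * 10000.0
```

A 10-bit pipeline then quantizes the encoded signal to 1024 code values, which is why 10-bit surfaces and profile-2 hardware decoding come into play.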
The basic HDR rendering implementation is complete, and we’re now in the testing and bug-fixing phase. Layered rendering is quite tricky as it involves rapid wl_surface mapping/unmapping and quick wl_buffer switches, which are difficult to handle properly. HDR rendering of scaled surfaces is still missing—we need fractional-scale-v2 for this (see below), which allows positioning scaled subsurfaces directly in device pixels. We also need to test composited/layered rendering for regular web page rendering to ensure it doesn’t drain your battery. You’re very welcome to test it and report any bugs you find.
Fractional scale
The next major work was done for fractional scale rendering, which shipped in Firefox 147.0. We updated the rendering pipeline and widget sizing to support fractionally scaled displays (scales like 125%, etc.). This required reworking the widget size code to strictly upscale window/surface sizes and coordinates and never downscale them, as downscaling introduces rounding errors.
Another step was identifying the correct rounding algorithm for Wayland subsurfaces and implementing it. Wayland doesn’t define rounding for subsurfaces, only for toplevel windows, so we’re in a gray area here. Michel Daenzer pointed me to stable rounding. It’s used by Mutter and Sway, so Firefox implements it for those two compositors while using a different implementation for KWin. This may be updated to use the fractional-scale-v2 protocol when it becomes available.
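The post doesn’t spell out the algorithm, but one common formulation (an assumption on my part, not necessarily the exact scheme the compositors use) rounds each logical edge coordinate to the nearest device pixel and derives sizes from the rounded edges, so adjacent subsurfaces always tile without gaps or overlaps:

```python
import math

def to_device(logical: float, scale: float) -> int:
    """Round a logical coordinate to the nearest device pixel (half-up)."""
    return math.floor(logical * scale + 0.5)

def device_rect(x, y, w, h, scale):
    """Convert a logical rect to device pixels by rounding its edges
    rather than its size, so neighbouring rects stay gap- and
    overlap-free even at fractional scales."""
    x0, y0 = to_device(x, scale), to_device(y, scale)
    x1, y1 = to_device(x + w, scale), to_device(y + h, scale)
    return x0, y0, x1 - x0, y1 - y0
```

For example, at 125% three adjacent 10-pixel-wide rects get device widths 13, 12, and 13, which sum exactly to the rounded total width of 38; rounding each size independently would instead give 13 + 13 + 13 and a one-pixel overlap somewhere.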
Fractional scaling is enabled by default, and you should see crisp and clear output regardless of your desktop environment or screen scale.
Asynchronous rendering
Historically, Firefox disabled and re-enabled the rendering pipeline for scale changes, window create/destroy events, and hide/show sequences. This stems from Wayland’s architecture, where a Wayland surface is deleted when a window becomes invisible or is submitted to the compositor with mismatched size/scale (e.g., 111 pixels wide at 200% scale).
Such rendering disruptions cause issues with multi-threaded rendering—they need to be synchronized among threads, and we must ensure surfaces with the wrong scale aren’t sent to the screen, as this leads to application crashes due to protocol errors.
Firefox 149.0 (recent nightly) has a reworked Wayland painting pipeline (Bug 1739232) for both EGL and software rendering. Scale management was moved from wl_buffer fixed scale to wp_viewport, which doesn’t cause protocol errors when size/scale doesn’t match (producing only blurred output instead of crashes).
We also use a clever technique: the rendering wl_surface / wl_buffer / EGLWindow is created right after window creation and before it’s shown, allowing us to paint to it offscreen. When a window becomes visible, we only attach the wl_surface as a subsurface (making it visible) and remove the attachment when it’s hidden. This allows us to keep painting and updating the backbuffer regardless of the actual window status, and the synchronized calls can be removed.
This brings speed improvements when windows are opened and closed, and Linux rendering is now synchronized with the Windows and macOS implementations.
… and more
Other improvements include a screen lock update for audio playback, which allows the screen to dim but prevents sleep when audio is playing. We also added asynchronous Wayland object management to ensure we cleanly remove Wayland objects without pending callbacks, along with various stability fixes.
And there are even more challenges waiting for us Firefox Linux hackers:
- Wayland session restore (session-restore-v1) to restore Firefox windows to the correct workspace and position.
- Implement drag and drop for the Firefox main window, and possibly add a custom Wayland drag and drop handler to avoid Gtk3 limitations and race conditions.
- Utilize the fractional-scale-v2 protocol when it becomes available.
- Investigate using xdg-positioner directly instead of Gtk3 widget positioning to better handle popups.
- Vulkan video support via the ffmpeg decoder to enable hardware decoding on NVIDIA hardware.
And of course, we should plan properly before we even start. Ready, Scrum, Go!
Firefox Add-on Reviews: Firefox extensions for creatives
From designers to writers, multi-media producers and more — if you perform creative work on a computer there’s a good chance you can find a browser extension to improve your process. Here’s a mix of practical Firefox extensions for a wide spectrum of creative cases…
Extensions for visual artists, animators & designers
Awesome Screenshot & Screen Recorder
There are a lot of screenshot and recording tools out there, but few offer the sweet combination of intuitive control and a deep feature set like Awesome Screenshot & Screen Recorder.
An ideal tool if you do a lot of screen recording for things like tutorials, the extension also integrates with your computer’s microphone should you need a voice component.
The easily accessible pop-up menu puts you in control of everything, including the screenshot feature (full page, selected area, or just the visible part). You can also annotate screenshots with text and graphics, blur unwanted images, highlight sections, and more.
Save and share everything with just a couple quick mouse clicks.
Image Max URL
Find a great image online, but it’s too small or the resolution is poor? No problem. Image Max URL can help you find a better version or even the original.
Scouring a database of more than 10,000 websites (including most social media sites, news outlets, WordPress sites, and various image hosting services), Image Max URL will search for any image’s original version and, short of that, look for high-res alternatives.
Font Finder
Every designer has seen a beautiful font in the wild and thought — I need that font for my next project! But how to track it down? Try Font Finder.
Investigating your latest favorite font doesn’t require a major research project anymore. Font Finder gives you quick point-and-click access to:
- Typography analysis. Font Finder reveals all relevant typographical characteristics like color, spacing, alignment, and of course font name.
- Copy information. Any portion of the font analysis can be copied to a clipboard for convenient pasting anywhere.
- Inline editing. All font characteristics (e.g. color, size, type) on an active element can be changed directly on the page.
Search by Image
If you’re a designer who scours the web looking for images to use in your work, but gets bogged down researching aspects like intellectual property ownership or subject matter context, you might consider an image search extension like Search by Image.
If you’re unfamiliar with the concept of image search, it works like text-based search, except your search starts with an image instead of a word or phrase. The Search by Image extension leverages the power of 30+ image search engines like Tineye, Google, Bing, Yandex, Getty Images, Pinterest, and others. This tool can be an incredible time saver when you can’t leave any guesswork to images you want to repurpose.
Search by Image makes it simple to find the origins of almost any image you encounter on the web.
Extended Color Management
Built in partnership between Mozilla and Industrial Light & Magic, this niche extension performs an invaluable function for animation teams working remotely. Extended Color Management calibrates colors on Firefox so animators working from different home computer systems (which might display colors differently based on their operating systems) can trust the whole team is looking at the same exact shades of color through Firefox.
Like other browsers, Firefox by default utilizes color management (i.e. the optimization of color and brightness) from the operating system of the computer it runs on. The problem for professional animators working remotely is that they’re likely collaborating from different operating systems — and seeing slight but critically different variations in color rendering. Extended Color Management simply disables the default color management tools so animators on different operating systems are guaranteed to see the same versions of all colors, as rendered by Firefox.
Measure-it
What a handy tool for designers and developers — Measure-it lets you draw a ruler across any web page to get precise dimensions in pixels.
Access the ruler from a toolbar icon or keyboard shortcut. Other customization features include setting overlay colors, background opacity, and pop-up characteristics.
Extensions for writers
LanguageTool
It’s like having a copy editor with you wherever you write on the web. LanguageTool – Grammar and Spell Checker will make you a better writer in 25+ languages.
More than just a spell checker, LanguageTool also…
- Recognizes common misuses of similar sounding words (e.g. there/their, your/you’re)
- Works on social media sites and email
- Offers alternate phrasing and style suggestions for brevity and clarity
Please note LanguageTool’s full feature set is free during a 14-day trial period, then payment is required.
Dark Background and Light Text
If you spend all day (and maybe many nights) staring at a screen while you write, Dark Background and Light Text may ease the strain on your eyes.
By default the extension flips the colors of every web page you visit, so typical light backgrounds turn dark while the text becomes light. But all color combinations are customizable, freeing you to adjust everything to taste. You can also set exceptions for certain websites that have a native look you prefer.
Dictionary Anywhere
It’s annoying when you have to navigate away from a page just to check a word definition elsewhere. Dictionary Anywhere fixes that by giving you instant access to word definitions without leaving the page you’re on.
Just double-click any word to get a pop-up definition right there on the page. Available in English, French, German, and Spanish. You can even save and download word definitions for later offline reference.
Dictionary Anywhere — no more navigating away from a page just to get a word check.
LeechBlock NG
Concentration is key for productive writing. Block time-wasting websites with LeechBlock NG.
This self-discipline aid lets you select websites that Firefox will restrict during time parameters you define — hours of the day, days of the week, or general time limits for specific sites. Even cooler, LeechBlock NG lets you block just portions of websites (for instance, you can allow yourself to see YouTube video pages but block YouTube’s homepage, which sucks you down a new rabbit hole every time!).
Gyazo
If your writing involves a fair amount of research and cataloging content, consider Gyazo for a better way to organize all the stuff you clip and save on the web.
Clip entire web pages or just certain elements, save images, take screenshots, mark them up with notes, and much more. Everything you clip is automatically saved to your Gyazo account, making it accessible across devices and collaborative teams.
With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.
We hope one of these extensions improves your creative output on Firefox! Explore more great media extensions on addons.mozilla.org.
The Servo Blog: December in Servo: multiple windows, proxy support, better caching, and more!
Servo 0.0.4 and our December nightly builds now support multiple windows (@mrobinson, @mukilan, #40927, #41235, #41144)! This builds on features that landed in Servo’s embedding API last month. We’ve also landed support for several web platform features, both old and new:
- ‘contrast-color()’ in CSS color values (@webbeef, #41542)
- partial support for <meta charset> (@simonwuelker, #41376)
- partial support for encoding sniffing (@simonwuelker, #41435)
- ‘background’ and ‘bgcolor’ attributes on <table>, <thead>, <tbody>, <tfoot>, <tr>, <td>, <th> (@simonwuelker, #41272)
- tee() on readable byte streams (@Taym95, #35991)
Note: due to a known issue, servoshell on macOS may not be able to directly open new windows, depending on your system settings.
For better compatibility with older web content, we now support vendor-prefixed CSS properties like ‘-moz-transform’ (@mrobinson, #41350), as well as window.clientInformation (@Taym95, #41111).
We’ve continued shipping the SubtleCrypto API, with full support for ChaCha20-Poly1305, RSA-OAEP, RSA-PSS, and RSASSA-PKCS1-v1_5 (see below), plus importKey() for ML-KEM (@kkoyung, #41585) and several other improvements (@kkoyung, @PaulTreitel, @danilopedraza, #41180, #41395, #41428, #41442, #41472, #41544, #41563, #41587, #41039, #41292):
Algorithms with full support:
- ChaCha20-Poly1305 (@kkoyung, #40978, #41003, #41030)
- RSA-OAEP (@kkoyung, @TimvdLippe, @jdm, #41225, #41217, #41240, #41316)
- RSA-PSS (@kkoyung, @jdm, #41157, #41225, #41240, #41287)
- RSASSA-PKCS1-v1_5 (@kkoyung, @jdm, #41172, #41225, #41240, #41267)

When using servoshell on Windows, you can now see --help and log output, as long as servoshell was started in a console (@jschwe, #40961).
Servo diagnostics options are now accessible in servoshell via the SERVO_DIAGNOSTICS environment variable (@atbrakhi, #41013), in addition to the usual -Z / --debug= arguments.
Servo’s devtools now partially support the Network > Security tab (@jiang1997, #40567), allowing you to inspect some of the TLS details of your requests. We’ve also made it compatible with Firefox 145 (@eerii, #41087), and use fewer IPC resources (@mrobinson, #41161).
We’ve fixed rendering bugs related to ‘float’, ‘order’, ‘max-width’, ‘max-height’, ‘:link’ selectors, <audio> layout, and getClientRects(), affecting intrinsic sizing (@Loirooriol, #41513), anonymous blocks (@Loirooriol, #41510), incremental layout (@Loirooriol, #40994), flex item sizing (@Loirooriol, #41291), selector matching (@andreubotella, #41478), replaced element layout (@Loirooriol, #41262), and empty fragments (@Loirooriol, #41477).
Servo now fires ‘toggle’ events on <dialog> (@lukewarlow, #40412). We’ve also improved the conformance of ‘wheel’ events (@mrobinson, #41182), ‘hashchange’ events (@Taym95, #41325), ‘dblclick’ events on <input> (@Taym95, #41319), ‘resize’ events on <video> (@tharkum, #40940), ‘seeked’ events on <video> and <audio> (@tharkum, #40981), and the ‘transform’ property in getComputedStyle() (@mrobinson, #41187).
Embedding API
Servo now has basic support for HTTP proxies (@Narfinger, #40941). You can set the proxy URL in the http_proxy (@Narfinger, #41209) or HTTP_PROXY (@treeshateorcs, @yezhizhen, #41268) environment variables, or via --pref network_http_proxy_uri.
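For example, the invocations might look like the following (illustrative only: the proxy URL is a placeholder, and the exact --pref value syntax may vary between Servo versions):

```shell
# Route servoshell traffic through a local proxy via environment variable
HTTP_PROXY=http://127.0.0.1:8080 ./servoshell https://example.org

# Or set it as a preference on the command line
./servoshell --pref network_http_proxy_uri=http://127.0.0.1:8080 https://example.org
```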
We now use the system root certificates by default (@Narfinger, @mrobinson, #40935, #41179), on most platforms. If you don’t want to trust the system root certificates, you can instead continue to use Mozilla’s root certificates with --pref network_use_webpki_roots. As always, you can also add your own root certificates via Opts::certificate_path (--certificate-path=).
We have a new SiteDataManager API for managing localStorage, sessionStorage, and cookies (@janvarga, #41236, #41255, #41378, #41523, #41528), and a new NetworkManager API for managing the cache (@janvarga, @mrobinson, #41255, #41474, #41386). To clear the cache, call NetworkManager::clear_cache, and to list cache entries, call NetworkManager::cache_entries.
Simple dialogs – that is alert(), confirm(), and prompt() – are now exposed to embedders via a new SimpleDialog type in EmbedderControl (@mrobinson, @mukilan, #40982). This new interface is harder to misuse, and no longer requires boilerplate for embedders that wish to ignore simple dialogs.
Web console messages, including messages from the Console API, are now accessible via ServoDelegate::show_console_message and WebViewDelegate::show_console_message (@atbrakhi, #41351).
Servo, the main handle for controlling Servo, is now cloneable for sharing within the same thread (@mukilan, @mrobinson, #41010). To shut down Servo, simply drop the last Servo handle or let it go out of scope. Servo::start_shutting_down and Servo::deinit have been removed (@mukilan, @mrobinson, #41012).
Several interfaces have also been renamed:
- Servo::clear_cookies is now SiteDataManager::clear_cookies (@janvarga, #41236, #41255)
- DebugOpts::disable_share_style_cache is now Preferences::layout_style_sharing_cache_enabled (@atbrakhi, #40959)
- The rest of DebugOpts has been moved to DiagnosticsLogging, and the options have been renamed (@atbrakhi, #40960)
We can now evict entries from our HTTP cache (@Narfinger, @gterzian, @Taym95, #40613), rather than having it grow forever (or get cleared by an embedder). about:memory now tracks SVG-related memory usage (@d-kraus, #41481), and we’ve fixed memory leaks in <video> and <audio> (@tharkum, #41131).
Servo now does less work when matching selectors (@webbeef, #41368), when focus changes (@mrobinson, @Loirooriol, #40984), and when reflowing boxes whose size did not change (@Loirooriol, @mrobinson, #41160).
To allow for smaller binaries, gamepad support is now optional at build time (@WaterWhisperer, #41451).
We’ve fixed some undefined behaviour around garbage collection (@sagudev, @jdm, @gmorenz, #41546, mozjs#688, mozjs#689, mozjs#692). To better avoid other garbage-collection-related bugs (@sagudev, mozjs#647, mozjs#638), we’ve continued our work on defining (and migrating to) safer interfaces between Servo and the SpiderMonkey GC (@sagudev, #41519, #41536, #41537, #41520, #41564).
We’ve fixed a crash that occurs when <link rel=“shortcut icon”> has an empty ‘href’ attribute, which affected chiptune.com (@webbeef, #41056), and we’ve also fixed crashes in:
- ‘background-repeat’ (@mrobinson, #41158)
- <audio> layout (@Loirooriol, #41262)
- custom elements (@mrobinson, #40743)
- AudioBuffer (@WaterWhisperer, #41253)
- AudioNode (@Taym95, #40954)
- ReportingObserver (@Taym95, #41261)
- Uint8Array (@jdm, #41228)
- the fonts system, on FreeType platforms (@simonwuelker, #40945)
- IME usage, on OpenHarmony (@jschwe, #41570)
Thanks again for your generous support! We are now receiving 7110 USD/month (+10.5% over November) in recurring donations. This helps us cover the cost of our speedy CI and benchmarking servers, one of our latest Outreachy interns, and funding maintainer work that helps more people contribute to Servo.
Servo is also on thanks.dev, and already 30 GitHub users (+2 over November) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.
We now have sponsorship tiers that allow you or your organisation to donate to the Servo project with public acknowledgement of your support. A big thanks from Servo to our newest Bronze Sponsors: Anthropy, Niclas Overby, and RxDB! If you’re interested in this kind of sponsorship, please contact us at join@servo.org.
We are currently at 7110 USD/month of our 10000 USD/month goal. Use of donations is decided transparently via the Technical Steering Committee’s public funding request process, and active proposals are tracked in servo/project#187. For more details, head to our Sponsorship page.
Conference talks and blogs
We’ve recently published one talk and one blog post:
- Web engine CI on a shoestring budget (slides; transcript) – Delan Azabani (@delan) spoke about the CI system that keeps our builds and tryjobs moving fast, running nearly two million tests in under half an hour.
- Servo 2025 Stats – Manuel Rego (@mrego) wrote about the growth of the Servo project, and how our many new contributors have enabled that.
We also have two upcoming talks at FOSDEM 2026 in Brussels later this month:
- The Servo project and its impact on the web platform ecosystem – Manuel Rego (@mrego) is speaking on Saturday 31 January at 14:00 local time (13:00 UTC), about Servo’s impact on spec issues, interop bugs, test cases, and the broader web platform ecosystem.
- Implementing Streams Spec in Servo web engine – Taym Haddadi (@Taym95) is speaking on Saturday 31 January at 17:45 local time (16:45 UTC), about our experiences writing a new implementation of the Streams API that is independent of the one in SpiderMonkey.
Servo developers Martin Robinson (@mrobinson) and Delan Azabani (@delan) will also be attending FOSDEM 2026, so it would be a great time to come along and chat about Servo!
Firefox Add-on Reviews: YouTube your way — browser extensions put you in charge of your video experience
YouTube wants you to experience YouTube in prescribed ways. But with the right browser extension, you’re free to alter YouTube to taste. Change the way the site looks, behaves, and delivers your favorite videos.
Return YouTube Dislike
Do you like the Dislike? YouTube removed the display that reveals the number of thumbs-down Dislikes a video has, but with Return YouTube Dislike you can bring back the brutal truth.
“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”
Firefox user OFG
“i have never smashed 5 stars faster.”
Firefox user 12918016
YouTube High Definition
Though its primary function is to automatically play all YouTube videos in their highest possible resolution, YouTube High Definition has a few other fine features to offer.
In addition to automatic HD, YouTube High Definition can…
- Customize the video player size
- Get HD support for clips embedded on external sites
- Specify your ideal resolution (4K – 144p)
- Set a preferred volume level
- Automatically play the highest quality audio
YouTube NonStop
So simple. So awesome. YouTube NonStop remedies the headache of interrupting your music with that awful “Video paused. Continue watching?” message.
Works on YouTube and YouTube Music. Now you’re free to navigate away from the YouTube tab for as long as you like and never worry about music interruption again.
YouTube Screenshot Button
If you take a lot of screenshots on YouTube, then the aptly titled YouTube Screenshot Button is worth your time.
You’ll find a “Screenshot” button conveniently located on the control panel of videos, or at the top of the screen on Shorts (or you can use custom keystrokes), so it’s always easy to snap a quick shot. Set preferences to automatically download screenshots as JPEG or PNG files.
Unhook: Remove YouTube Recommended Videos & Comments
Instant serenity for YouTube! Unhook strips away unwanted distractions like the promotional sidebar, end-screen suggestions, trending tab, and much more.
More than two dozen customization options make this an essential extension for anyone seeking escape from YouTube rabbit holes. You can even hide notifications and live chat boxes.
“This is the best extension to control YouTube usage, and not let YouTube control you.”
Firefox user Shubham Mandiya
PocketTube
If you subscribe to a lot of YouTube channels, PocketTube is a fantastic way to organize all your subscriptions by themed collections.
Group your channel collections by subject, like “Sports,” “Cooking,” “Cat videos,” etc. Other key features include…
- Add custom icons to easily identify channel collections
- Customize your feed so you just see videos you haven’t watched yet and prioritize videos from certain channels
- Integrates seamlessly with YouTube homepage
- Sync collections across Firefox/Android/iOS using Google Drive and Chrome Profiler
PocketTube keeps your channel collections neatly tucked away to the side.
AdBlocker for YouTube
It’s not just you who’s noticed a lot more ads lately. Regain control with AdBlocker for YouTube.
The extension very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster, more focused YouTube.
SponsorBlock
It’s a terrible experience when you’re enjoying a video or music on YouTube and you’re suddenly interrupted by a blaring ad. SponsorBlock solves this problem in a highly effective and original way.
Leveraging crowdsourced reports to locate precisely where interruptive sponsored segments appear in videos, SponsorBlock automatically skips those segments using its ever-growing database. You can also participate in the project by reporting sponsored segments whenever you encounter them (it’s easy to report right there on the video page with the extension).
SponsorBlock can also learn to skip non-music portions of music videos and intros/outros. If you’d like a deeper dive into SponsorBlock, we profiled its developer and open source project on Mozilla Distilled.
We hope one of these extensions enhances the way you enjoy YouTube. Feel free to explore more great media extensions on addons.mozilla.org.
Mozilla Localization (L10N): L10n Report: January Edition 2026
Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.
Happy New Year!
What’s new or coming up in Firefox desktop
Preferences updates for 148
A new set of strings intended for the preferences page of Firefox 148 landed in Pontoon on January 16. These strings, focused on controls for AI features, landed ahead of the UX and functionality implementation, so they are not currently testable. They should become testable within the coming week in Nightly and Beta.
Split view coming in 149
A new feature called “split view” is coming to Firefox 149. The feature and its related strings started landing at the end of 2025. You can test it now in Nightly: just right-click a tab and select “Add Split View”. (If the option isn’t showing in your Nightly, open about:config and ensure “browser.tabs.splitView.enabled” is set to true.)
What’s new or coming up in mobile
Android onboarding testing updates
It is now possible to test the onboarding experience in Firefox for Android without using a simulator or wiping your existing data. We are currently waiting for engineers to update the default configuration to align with the onboarding experience in Firefox 148 and newer. We hope this update will land in time for the release of 148, and we will communicate the change via Pontoon as soon as that’s available.
In the meantime, please review the updated testing documentation to see how to trigger the onboarding flow. Note that some UI elements will display string identifiers instead of translations until the configuration is updated.
Firefox for iOS localization screenshots
We heard your feedback about the screenshot process for Firefox for iOS. Thanks to everyone who answered the survey at the end of last year.
Screenshots are now available as a gallery for each locale. There is no longer a need to download and decompress a local zip file. You can browse the current screenshots for your locale, and use the links at the top to review the full history or compare changes between runs (generated roughly every two weeks).
A reminder that links to testing environments and instructions are always available from the project header in Pontoon.
What’s new or coming up in web projects
Firefox.com
We’re planning some changes to how content is managed on firefox.com, and these updates will have an impact on our existing localization workflows. Once the details are finalized, we’ll share more information and notify you directly in Pontoon.
What’s new or coming up in Pontoon
Pontoon infrastructure update
Behind the scenes, Pontoon has recently completed a major migration from Heroku to Google Cloud Platform. While this change should be largely invisible to localizers in day-to-day use, it brings noticeable improvements in performance, reliability, and scalability, helping ensure a smoother experience as contributor activity continues to grow. Huge thanks go to our Cloud Engineering partners for supporting this effort over the past months and helping make this important milestone possible.
Friends of the Lion
Image by Elio Qoshi
Since relaunching the contributor spotlight blog series, we’ve published two more stories highlighting the people behind our localization work.
We featured Robb, a professional translator from Romania, whose love for words and desire to help her mom keep up with modern technology have grown into a day-to-day commitment to making products and technology accessible in language that everyday people can understand.
We also spotlighted Andika from Indonesia, a long-time open source contributor who joined the localization community to ensure Firefox and other products feel natural and accessible for Indonesian-speaking users. His steady, long-term commitment to quality speaks volumes about the impact of thoughtful localization.
We’ll be continuing this series and are always looking for contributors to feature. You can help us find the next localizer to spotlight by nominating one of your fellow community members. We’d love to hear from you!
Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!
Useful Links
- #l10n-community channel on Element (chat.mozilla.org)
- Localization category on Discourse
- Fosstodon
- L10n blog
If you want to get involved, or have any questions about l10n, reach out to:
- Francesco Lodolo (flod) – Engineering Manager
- Bryan – L10n Project Manager
- Delphine – L10n Project Manager for mobile
- Peiying (CocoMo) – L10n Project Manager for mozilla.org, marketing, and legal
- Francis – L10n Project Manager for Common Voice, Mozilla Foundation
- Théo Chevalier – L10n Project Manager for Mozilla Foundation
- Kiki – L10n Project Manager for SUMO
- Matjaž (mathjazz) – Pontoon dev
- Eemeli – Pontoon, Fluent dev
The Mozilla Blog: How Mozilla builds now
Mozilla has always believed that technology should empower people.
That belief shaped the early web, when browsers were still new and the idea of an open internet felt fragile. Today, the technology is more powerful, more complex, and more opaque, but the responsibility is the same. The question isn’t whether technology can do more. It’s whether it helps people feel capable, informed, and in control.
As we build new products at Mozilla today, that question is where we start.
I joined Mozilla to lead New Products almost one year ago this week because this is one of the few places still willing to take that responsibility seriously. Not just in what we ship, but in how we decide what’s worth building in the first place — especially at a moment when AI, platforms, and business models are all shifting at once.
Our mission — and mine — is to find the next set of opportunities for Mozilla and help shape the internet that all of us want to see.
Writing up to users
One of Mozilla’s longest-held principles is respect for the people who use our products. We assume users are thoughtful. We accept skepticism as a given (it forces product development rigor — more on that later). And we design accordingly.
That respect shows up not just in how we communicate, but in the kinds of systems we choose to build and the role we expect people to play in shaping them.
You can see this in the way we’re approaching New Products work across Mozilla today: Our current portfolio includes tools like Solo, which makes it easy for anyone to own their presence on the web; Tabstack, which helps developers enable agentic experiences; 0DIN, which pools the collective expertise of over 1400 researchers from around the globe to help identify and surface AI vulnerabilities; and an enterprise version of Firefox that treats the browser as critical infrastructure for modern work, not a data collection surface.
None of this is about making technology simpler than it is. It’s about making it legible. When people understand the systems they’re using, they can decide whether those systems are actually serving them.
Experimentation that respects people’s time
Mozilla experiments. A lot. But we try to do it without treating talent and attention as an unlimited resource. Building products that users love isn’t easy and requires us to embrace the uncertainty and ambiguity that comes with zero-to-one exploration.
Every experiment should answer a real question. It should be bounded. And it should be clear to the people interacting with it what’s being tested and why. That discipline matters, especially now. When everything can be prototyped quickly, restraint becomes part of the craft.
Fewer bets, made deliberately. A willingness to stop when something isn’t working. And an understanding that experimentation doesn’t have to feel chaotic to be effective.
Creating space for more kinds of builders
Mozilla has always believed that who builds is just as important as what gets built. But let’s be honest: The current tech landscape often excludes a lot of brilliant people, simply because the system is focused on only rewarding certain kinds of outcomes.
We want to unlock those meaningful ideas by making experimentation more practical for people with real-world perspectives. We’re focused on lowering the barriers to building — because we believe that making tech more inclusive isn’t just a nice-to-have, it’s how you build better products.
A practical expression of this approach
One expression of this philosophy is a new initiative we’ll be sharing more about soon: Mozilla Pioneers.
Pioneers isn’t an accelerator, and it isn’t a traditional residency. It’s a structured, time-limited way for experienced builders to work with Mozilla on early ideas without requiring them to put the rest of their lives on hold.
The structure is intentional. Pioneers is paid. It’s flexible. It’s hands-on. And it’s bounded. Participants work closely with Mozilla engineers, designers, and product leaders to explore ideas that could become real Mozilla products — or could simply clarify what shouldn’t be built.
Some of that work will move forward. Some won’t. Both outcomes are valuable. Pioneers exists because we believe that good ideas don’t only come from founders or full-time employees, and that meaningful contribution deserves real support.
Applications open Jan. 26. For anyone interested (and I hope that’s a lot of you) please follow us, share and apply. In the meantime, know that what’s ahead is just one more example of how we’re trying to build with intention.
Looking ahead
Mozilla doesn’t pretend to have all the answers. But we’re clear about our commitments.
As we build new products, programs, and systems, we’re choosing clarity over speed, boundaries over ambiguity, and trust that compounds over time instead of short-term gains.
The future of the internet won’t be shaped only by what technology can do — but by what its builders choose to prioritize. Mozilla intends to keep choosing people.
The post How Mozilla builds now appeared first on The Mozilla Blog.
The Rust Programming Language Blog: Announcing Rust 1.93.0
The Rust team is happy to announce a new version of Rust, 1.93.0. Rust is a programming language empowering everyone to build reliable and efficient software.
If you have a previous version of Rust installed via rustup, you can get 1.93.0 with:
$ rustup update stable
If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.93.0.
If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!
What's in 1.93.0 stable
Update bundled musl to 1.2.5
The various *-linux-musl targets now all ship with musl 1.2.5. This primarily affects static musl builds for x86_64, aarch64, and powerpc64le, which bundled musl 1.2.3. This update comes with several fixes and improvements, and a breaking change that affects the Rust ecosystem.
For the Rust ecosystem, the primary motivation for this update is to receive major improvements to musl's DNS resolver which shipped in 1.2.4 and received bug fixes in 1.2.5. When using musl targets for static linking, this should make portable Linux binaries that do networking more reliable, particularly in the face of large DNS records and recursive nameservers.
However, 1.2.4 also comes with a breaking change: the removal of several legacy compatibility symbols that the Rust libc crate was using. A fix for this was shipped in libc 0.2.146 in June 2023 (2.5 years ago), and we believe it has propagated sufficiently widely that we're ready to make the change in Rust targets.
See our previous announcement for more details.
Allow the global allocator to use thread-local storage
Rust 1.93 adjusts the internals of the standard library to permit global allocators written in Rust to use std's thread_local! and std::thread::current without re-entrancy concerns, by using the system allocator internally instead.
See docs for details.
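To make the change concrete, here is a minimal sketch (not taken from the release notes; the `CountingAlloc` name and the counting logic are invented for illustration) of a global allocator that keeps a per-thread allocation tally in thread-local storage:

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::cell::Cell;

// Per-thread allocation counter kept in thread-local storage.
thread_local! {
    static ALLOCS: Cell<usize> = const { Cell::new(0) };
}

// A toy allocator that counts allocations made on the current thread
// and defers the actual memory management to the system allocator.
struct CountingAlloc;

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        // try_with avoids panicking if TLS is already being torn down.
        let _ = ALLOCS.try_with(|c| c.set(c.get() + 1));
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        unsafe { System.dealloc(ptr, layout) }
    }
}

#[global_allocator]
static GLOBAL: CountingAlloc = CountingAlloc;

fn main() {
    let v = vec![1u8, 2, 3]; // forces at least one heap allocation
    assert!(ALLOCS.with(|c| c.get()) >= 1);
    drop(v);
    println!("ok");
}
```

Before this release, touching `thread_local!` from inside the allocator risked re-entrancy if the TLS machinery itself needed to allocate; the 1.93 change routes those internal allocations to the system allocator.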
cfg attributes on asm! lines
Previously, if individual parts of a section of inline assembly needed to be cfg'd, the full asm! block would need to be repeated with and without that section. In 1.93, cfg can now be applied to individual statements within the asm! block.
asm!( // or global_asm! or naked_asm!
    "nop",
    #[cfg(target_feature = "sse2")]
    "nop",
    // ...
    #[cfg(target_feature = "sse2")]
    a = const 123, // only used on sse2
);
Stabilized APIs
- <[MaybeUninit<T>]>::assume_init_drop
- <[MaybeUninit<T>]>::assume_init_ref
- <[MaybeUninit<T>]>::assume_init_mut
- <[MaybeUninit<T>]>::write_copy_of_slice
- <[MaybeUninit<T>]>::write_clone_of_slice
- String::into_raw_parts
- Vec::into_raw_parts
- <iN>::unchecked_neg
- <iN>::unchecked_shl
- <iN>::unchecked_shr
- <uN>::unchecked_shl
- <uN>::unchecked_shr
- <[T]>::as_array
- <[T]>::as_mut_array
- <*const [T]>::as_array
- <*mut [T]>::as_mut_array
- VecDeque::pop_front_if
- VecDeque::pop_back_if
- Duration::from_nanos_u128
- char::MAX_LEN_UTF8
- char::MAX_LEN_UTF16
- std::fmt::from_fn
- std::fmt::FromFn
Check out everything that changed in Rust, Cargo, and Clippy.
Contributors to 1.93.0
Many people came together to create Rust 1.93.0. We couldn't have done it without all of you. Thanks!
The Rust Programming Language Blog: crates.io: development update
Time flies! Six months have passed since our last crates.io development update, so it's time for another one. Here's a summary of the most notable changes and improvements made to crates.io over the past six months.
Security Tab
Crate pages now have a new "Security" tab that displays security advisories from the RustSec database. This allows you to quickly see if a crate has known vulnerabilities before adding it as a dependency.

The tab shows known vulnerabilities for the crate along with the affected version ranges.
This feature is still a work in progress, and we plan to add more functionality in the future. We would like to thank the OpenSSF (Open Source Security Foundation) for funding this work and Dirkjan Ochtman for implementing it.
Trusted Publishing Enhancements
In our July 2025 update, we announced Trusted Publishing support for GitHub Actions. Since then, we have made several enhancements to this feature.
GitLab CI/CD Support
Trusted Publishing now supports GitLab CI/CD in addition to GitHub Actions. This allows GitLab users to publish crates without managing API tokens, using the same OIDC-based authentication flow.
Note that this currently only works with GitLab.com. Self-hosted GitLab instances are not supported yet. The crates.io implementation has been refactored to support multiple CI providers, so adding support for other platforms like Codeberg/Forgejo in the future should be straightforward. Contributions are welcome!
Trusted Publishing Only Mode
Crate owners can now enforce Trusted Publishing for their crates. When enabled in the crate settings, traditional API token-based publishing is disabled, and only Trusted Publishing can be used to publish new versions. This reduces the risk of unauthorized publishes from leaked API tokens.
Blocked Triggers
The pull_request_target and workflow_run GitHub Actions triggers are now blocked from Trusted Publishing. These triggers have been responsible for multiple security incidents in the GitHub Actions ecosystem and are not worth the risk.
Source Lines of Code
Crate pages now display source lines of code (SLOC) metrics, giving you insight into the size of a crate before adding it as a dependency. This metric is calculated in a background job after publishing using the tokei crate. It is also shown on OpenGraph images:

Thanks to XAMPPRocky for maintaining the tokei crate!
Publication Time in Index
A new pubtime field has been added to crate index entries, recording when each version was published. This enables several use cases:
- Cargo can implement cooldown periods for new versions in the future
- Cargo can replay dependency resolution as if it were a past date, though yanked versions remain yanked
- Services like Renovate can determine release dates without additional API requests
Thanks to Rene Leonhardt for the suggestion and Ed Page for driving this forward on the Cargo side.
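A consumer of the sparse index could read the new field along these lines (a minimal sketch; the index line below is abbreviated and the timestamp is invented — real entries carry many more fields, and the exact representation may differ):

```rust
// Minimal sketch: extract the `pubtime` value from an index line
// without pulling in a JSON library.
fn pubtime(entry: &str) -> Option<&str> {
    let key = "\"pubtime\":\"";
    let start = entry.find(key)? + key.len();
    let end = entry[start..].find('"')? + start;
    Some(&entry[start..end])
}

fn main() {
    // Abbreviated, hypothetical index entry.
    let line = r#"{"name":"example","vers":"1.0.0","pubtime":"2025-06-01T12:00:00Z"}"#;
    assert_eq!(pubtime(line), Some("2025-06-01T12:00:00Z"));
    println!("ok");
}
```

This is exactly the kind of lookup that previously required a separate crates.io API request per version.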
Svelte Frontend Migration
At the end of 2025, the crates.io team evaluated several options for modernizing our frontend and decided to experiment with porting the website to Svelte. The goal is to create a one-to-one port of the existing functionality before adding new features.
This migration is still considered experimental and is a work in progress. Using a more mainstream framework should make it easier for new contributors to work on the frontend. The new Svelte frontend uses TypeScript and generates type-safe API client code from our OpenAPI description, so types flow from the Rust backend to the TypeScript frontend automatically.
Thanks to eth3lbert for the helpful reviews and guidance on Svelte best practices. We'll share more details in a future update.
Miscellaneous
These were some of the more visible changes to crates.io over the past six months, but a lot has happened "under the hood" as well.
- Cargo user agent filtering: We noticed that download graphs were showing a constant background level of downloads even for unpopular crates due to bots, scrapers, and mirrors. Download counts are now filtered to only include requests from Cargo, providing more accurate statistics.
- HTML emails: Emails from crates.io now support HTML formatting.
- Encrypted GitHub tokens: OAuth access tokens from GitHub are now encrypted at rest in the database. While we have no evidence of any abuse, we decided to improve our security posture. The tokens were never included in the daily database dump, and the old unencrypted column has been removed.
- Source link: Crate pages now display a "Browse source" link in the sidebar that points to the corresponding docs.rs page. Thanks to Carol Nichols for implementing this feature.
- Fastly CDN: The sparse index at index.crates.io is now served primarily via Fastly to conserve our AWS credits for other use cases. In the past month, static.crates.io served approximately 1.6 PB across 11 billion requests, while index.crates.io served approximately 740 TB across 19 billion requests. A big thank you to Fastly for providing free CDN services through their Fast Forward program!
- OpenGraph image improvements: We fixed emoji and CJK character rendering in OpenGraph images, which was caused by missing fonts on our server.
- Background worker performance: Database indexes were optimized to improve background job processing performance.
- CloudFront invalidation improvements: Invalidation requests are now batched to avoid hitting AWS rate limits when publishing large workspaces.
We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!
Data@Mozilla: This Week in Data: There’s No Such Thing as a Normal Month
(“This Week in Data” is a series of blog posts that the Data Team at Mozilla is using to communicate about our work. Posts in this series could be release notes, documentation, hopes, dreams, or whatever: so long as it’s about data.)
At the risk of reminding you of a Nickelback song, look at this graph:
I’ve erased the y-axis because the absolute values don’t actually matter for this discussion, but this is basically a sparkline plot of active users of Firefox Desktop for 2025. The line starts and ends basically at the same height but wow does it have a lot of ups and downs between.
I went looking at this shape recently while trying to estimate the costs of continuing to collect Legacy Telemetry in Firefox Desktop. We’re at the point in our migration to Glean where you really ought to start removing your Legacy Telemetry probes unless you have some ongoing analyses that depend on them. I was working out a way to get a back-of-the-envelope dollar figure to scare teams into prioritizing such removals to be conducted sooner rather than later.
Our ingestion metadata (how many bytes were processed by which pieces of the pipeline) only goes back sixty days, and I was worried that basing my cost estimate on numbers from December 2025 would make them unusually low compared to “a normal month”.
But what’s “normal”? Which of these months could be considered “normal” by any measure? I mean:
- January: Beginning-of-year holiday slump
- February: Only twenty-eight days long
- March: Easter (sometimes), DST begins
- April: Easter (sometimes), something that really starts suppressing activity
- May: What’s with that big rebound in the second half?
- June: Last day of school
- July: School’s out, Northern Hemisphere Summer means less time on the ‘net and more time touching grass
- August: Typical month for vacations in Europe
- September: Back-to-school
- October: Maybe “normal”?
- November: US Thanksgiving
- December: End-of-year holiday slump
October and maybe May are perhaps the closest things we have to “normal” months, and by being the only “normal”-ish months that makes them rather abnormal, don’t you think?
Now, I’ve been lying to you with data visualization here. If you’re exceedingly clever you’ll notice that, in the sparkline plot above, not only did I take the y-axis labels off, I didn’t start the y-axis at 0 (we had far more than zero active users of Firefox Desktop at the end of August, after all). I chose this to be illustrative of the differences from month to month, exaggerating them for effect. But if you look at, say, the Monthly Active Users (now combined Mobile + Desktop) on data.firefox.com it paints a rather more sedate picture, doesn’t it:
This isn’t a 100% fair comparison as data.firefox.com goes back years, and I stretched 2025 to be the same width, above… but you see what data visualization choices can do to help or hinder the story you’re hoping to tell.
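To put numbers on the trick (with invented values, just to show the mechanics): a dip that is small in absolute terms fills most of the plot height once the axis starts just below the trough.

```rust
fn main() {
    // Invented monthly figures (arbitrary units): a peak of 100 and a
    // trough of 95, i.e. a 5% dip in absolute terms.
    let peak = 100.0_f64;
    let trough = 95.0_f64;

    // Axis starting at zero: the dip spans 5% of the plot height.
    let full_axis = (peak - trough) / peak;

    // Axis truncated to start at 94: the same dip spans ~83% of the height.
    let axis_min = 94.0;
    let truncated = (peak - trough) / (peak - axis_min);

    println!("full: {full_axis:.2}, truncated: {truncated:.2}");
    assert!(truncated > 10.0 * full_axis);
}
```

The same data, a more than tenfold difference in visual impact.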
At any rate, I hope you found it as interesting as I did to learn that December’s abnormality makes it just as “normal” as the rest of the months for my cost estimation purposes.
:chutten
(this is a syndicated copy of the original blog post.)
Firefox Nightly: Introducing Mozilla’s Firefox Nightly .rpm package for RPM-based Linux distributions!
After introducing Debian packages for Firefox Nightly, we’re now excited to extend that to RPM-based distributions.
Just like with the Debian packages, switching to Mozilla’s RPM repository allows Firefox to be installed and updated like any other application, using your favorite package manager. It also provides a number of improvements:
- Better performance thanks to our advanced compiler-based optimizations,
- Updates as fast as possible because the .rpm management is integrated into Firefox’s release process,
- Hardened binaries with all security flags enabled during compilation,
- No need to create your own .desktop file.
To install Firefox Nightly, follow these steps:
If you are on Fedora (41+), or any other distribution using dnf5 as the package manager:
sudo dnf config-manager addrepo --id=mozilla --set=baseurl=https://packages.mozilla.org/rpm/firefox --set=gpgcheck=0 --set=repo_gpgcheck=0
sudo dnf makecache --refresh
sudo dnf install firefox-nightly
If your distribution uses zypper as the package manager:
sudo zypper ar -G https://packages.mozilla.org/rpm/firefox mozilla
sudo zypper refresh
sudo zypper install firefox-nightly
Otherwise, you can add the repository definition manually:
sudo tee /etc/yum.repos.d/mozilla.repo > /dev/null << EOF
[mozilla]
name=Mozilla Packages
baseurl=https://packages.mozilla.org/rpm/firefox
enabled=1
repo_gpgcheck=0
gpgcheck=0
EOF
# For dnf users
sudo dnf makecache --refresh
sudo dnf install firefox-nightly
# For zypper users
sudo zypper refresh
sudo zypper install firefox-nightly
Note: gpgcheck is currently disabled until Bug 2009927 is addressed.
It is worth noting that the firefox-nightly package will not conflict with your distribution’s Firefox package if you have it installed; you can have both at the same time!
Adding language packs
If your distribution language is set to a supported language, language packs for it should automatically be installed. You can also install them manually with the following command (replace fr with the language code of your choice):
sudo dnf install firefox-nightly-l10n-fr
You can list the available languages with the following command:
dnf search firefox-nightly-l10n
Don’t hesitate to report any problem you encounter to help us make your experience better.
Mozilla GFX: Experimental High Dynamic Range video playback on Windows in Firefox Nightly 148
Modern computer displays have gained more colorful capabilities in recent years with High Dynamic Range (HDR) being a headline feature. These displays can show vibrant shades of red, purple and green that were outside the capability of past displays, as well as higher brightness for portions of the displayed videos.
We are happy to announce that Firefox is gaining support for HDR video on Windows, now enabled in Firefox Nightly 148. This is experimental for the time being, as we want to gather feedback on what works and what does not across varied hardware in the wild before we deploy it for all Firefox users broadly. HDR video has already been live on macOS for some time now, and is being worked on for Wayland on Linux.
To get the full experience, you will need an HDR display, and the HDR feature needs to be turned on in Windows (Settings -> Display Settings) for that display. This release also changes how HDR video looks on non-HDR displays in some cases: this used to look very washed out, but it should be improved now. Feedback on whether this is a genuine improvement is also welcome. Popular streaming websites may be checking for this HDR capability, so they may now offer HDR video content to you, but only if HDR is enabled on the display.
We are actively working on HDR support for other web functionality such as WebGL, WebGPU, Canvas2D and static images, but have no current estimates on when those features will be ready: this is a lot of work, and relevant web standards are still in flux.
Note for site authors: Websites can use the CSS video-dynamic-range functionality to make separate HDR and SDR videos available for the same video element. This functionality detects if the user has the display set to HDR, not necessarily whether the display is capable of HDR mode. Displaying an HDR video on an SDR display is expected to work reasonably but requires more testing – we invite feedback on that.
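For example, a site can gate the HDR encode behind the media query via the media attribute on a source element (a sketch; the file names are invented):

```html
<!-- Serve the HDR encode only when the display is in HDR mode;
     otherwise the browser falls back to the SDR source. -->
<video controls>
  <source src="clip-hdr.mp4" media="(video-dynamic-range: high)">
  <source src="clip-sdr.mp4">
</video>
```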
Notes and limitations:
- Some streaming sites offer HDR video only if the page is on an HDR-enabled display at the time the page is loaded. Refreshing the page will update that status if you have enabled/disabled HDR mode on the display or moved the window to another display with different capabilities. On the other hand, you can use this behavior to make side-by-side comparisons of HDR and non-HDR versions of a video on these streaming sites if that interests you.
- Some streaming sites do not seem to offer HDR video to Firefox users at this time. This is not necessarily a problem with the HDR video functionality in Firefox; they may simply use codecs we do not currently support.
- Viewing videos in HEVC format on Windows may require obtaining ‘HEVC Video Extensions’ format support from the Microsoft Store. This is a matter of codec support and not directly related to HDR, but some websites may use this codec for HDR content.
- If you wish to not be offered HDR video by websites, you can set the pref ‘layout.css.video-dynamic-range.allows-high’ to false in about:config; we may decide to add this pref to the general browser settings if there is interest. Local files and websites that only offer HDR videos will still be HDR if the encoding is HDR.
- If you wish to experiment with the previous ‘washed out’ look for HDR video, you can set the pref ‘gfx.color_management.hdr_video’ to false. This is unlikely to be useful, but if you find you need to use it for some reason we would like to know (file a bug on Bugzilla).
- No attempt has been made to read and use HDR metadata in video streams at this time. Windows seems to do something smart with tonemapping for this in our testing, but we will want to implement full support as in other browsers.
- On the technical side: we’re defining HDR video as video using the BT2020 colorspace with the Perceptual Quantizer (PQ) transfer function defined in BT2100. In our observations, all HDR video on the web uses this exact combination of colorspace and transfer function, so we assume all BT2020 video is PQ as a matter of convenience. We’ve been making this assumption for a few years on macOS already. The ‘washed out’ HDR video look arose from using the stock BT2020 transfer function rather than PQ, as well as the use of a BGRA8 overlay. Now we use the RGB10A2 format if the colorspace is BT2020, as HDR requires at least 10 bits to match the quality of SDR video. Videos are assumed to be opaque (alpha channel not supported): we’re not aware of any use of transparency in videos in the wild. It would be interesting to know if that feature is used anywhere.
Spidermonkey Development Blog: Flipping Responsibility for Jobs in SpiderMonkey
This blog post is written both as a heads-up to embedders of SpiderMonkey and as an explanation of why the changes are coming.
As an embedder of SpiderMonkey one of the decisions you have to make is whether or not to provide your own implementation of the job queue.
The responsibility of the job queue is to hold pending jobs for Promises, which in the HTML spec are called ‘microtasks’. For embedders, the status quo of 2025 was two options:
- Call JS::UseInternalJobQueues, and then at the appropriate point for your embedding, call JS::RunJobs. This uses an internal job queue and drain function.
- Subclass and implement the JS::JobQueue type, storing and invoking your own jobs. An embedding might want to do this if they wanted to add their own jobs, or had particular needs for the shape of jobs and data carried alongside them.
The goal of this blog post is to indicate that SpiderMonkey’s handling of Promise jobs is changing over the next little while, and explain a bit of why.
If you’ve chosen to use the internal job queue, almost nothing should change for your embedding. If you’ve provided your own job queue, read on:
What’s Changing
- The actual type of a job from the JS engine is changing to be opaque.
- The responsibility for actually storing the Promise jobs is moving from the embedding into the engine, even in the case of an embedding-provided JobQueue.
- As a result of (1), the interface to run a job from the queue is also changing.
I’ll cover this in a bit more detail, but a good chunk of the interface discussed is in MicroTask.h (this link is to a specific revision because I expect the header to move).
For most embeddings the changes turn out to be very mechanical. If you have specific challenges with your embedding please reach out.
Job Type
The type of a JS Promise job has been a JSFunction, and jobs were thus invoked with JS::Call. The job type is changing to an opaque type. The external interface to this type will be JS::Value (typedef’d as JS::GenericMicroTask).
This means that if you’re an embedder who had been storing your own tasks in the same queue as JS tasks you’ll still be able to, but you’ll need to use the queue access APIs in MicroTask.h. A queue entry is simply a JS::Value and so an arbitrary C address can be stored in it as a JS::PrivateValue.
Jobs now are split into two types: JSMicroTasks (enqueued by the JS engine) and GenericMicroTasks (possibly JS engine provided, possibly embedding provided).
Storage Responsibility
It used to be that if an embedding provided its own JobQueue, we’d expect it to store the jobs and trace the queue. Now that the queue lives inside the engine, the model is changing to one where the embedding must ask the JS engine to store jobs it produces outside of promises if it would like to share the job queue.
Running Micro Tasks
The basic loop of microtask execution now looks like this:
JS::Rooted<JSObject*> executionGlobal(cx);
JS::Rooted<JS::GenericMicroTask> genericTask(cx);
JS::Rooted<JS::JSMicroTask> jsTask(cx);
while (JS::HasAnyMicroTasks(cx)) {
  genericTask = JS::DequeueNextMicroTask(cx);
  if (JS::IsJSMicroTask(genericTask)) {
    jsTask = JS::ToMaybeWrappedJSMicroTask(genericTask);
    executionGlobal = JS::GetExecutionGlobalFromJSMicroTask(jsTask);
    {
      AutoRealm ar(cx, executionGlobal);
      if (!JS::RunJSMicroTask(cx, jsTask)) {
        // Handle job execution failure in the same way
        // JS::Call failure would have been handled.
      }
    }
    continue;
  }
  // Handle embedding jobs as appropriate.
}
The abstract separation of the execution global is required to handle cases with many compartments and complicated realm semantics (aka a web browser).
An example
In order to see roughly what the changes would look like, I attempted to patch GJS, the GNOME JS embedding which uses SpiderMonkey.
The patch is here. It doesn’t build due to other incompatibilities I found, but this is the rough shape of a patch for an embedding. As you can see, it’s fairly self-contained, with not too much work to be done.
Why Change?
In a word, performance. The previous form of Promise job management is very heavyweight with lots of overhead, causing performance to suffer.
The changes made here allow us to make SpiderMonkey quite a bit faster for dealing with Promises, and unlock the potential to get even faster.
How do the changes help?
Perhaps the most important change here is making the job representation opaque. This allows us to use pre-existing objects as stand-ins for the jobs: rather than having to allocate a new object for every job (which is costly), we can sometimes allocate nothing at all, simply enqueueing an existing object with enough information to run the job.
Owning the queue will also allow us to choose the most efficient data structure for JS execution, potentially changing opaquely in the future as we find better choices.
Empirically, changing from the old microtask queue system to the new in Firefox led to an improvement of up to 45% on Promise heavy microbenchmarks.
Is this it?
I do not think this is the end of the story for changes in this area; I plan further investment. Aspirationally, I would like this all to be stabilized by the next ESR release, Firefox 153, which will ship to beta in June, but only time will tell what we can get done.
Future changes I can predict are things like:
- Renaming JS::JobQueue which is now more of a ‘jobs interface’
- Renaming the MicroTask header to be less HTML specific
However, I can also imagine making more changes in the pursuit of performance.
What’s the bug for this work?
You can find most of the work related to this under Bug 1983153 (sm-µ-task).
An Apology
My apologies to those embedders who will have to do some work during this transition period. Thank you for sticking with SpiderMonkey!
The Mozilla Blog: How founders are meeting the moment: Lessons from Mozilla Ventures’ 2025 portfolio convening
At Mozilla, we’ve long believed that technology can be built differently — not only more openly, but more responsibly, more inclusively, and more in service of the people who rely on it. As AI reshapes nearly every layer of the internet, those values are being tested in real time.
Our 2025 Mozilla Ventures Portfolio Convening Report captures how a new generation of founders is meeting that moment.
At the Mozilla Festival 2025 in Barcelona, from Nov. 7–9, we brought together 50 founders from 30 companies across our portfolio to grapple with some of the most pressing questions in technology today: How do we build AI that is trustworthy and governable? How do we protect privacy at scale? What does “better social” look like after the age of the global feed? And how do we ensure that the future of technology is shaped by people and communities far beyond today’s centers of power?
Over three days of panels, talks, and hands-on sessions, founders shared not just what they’re building, but what they’re learning as they push into new terrain. What emerged is a vivid snapshot of where the industry is heading — and the hard choices required to get there.
Open source as strategy, not slogan
A major theme emerging across conversations with our founders was that open source is no longer a “nice to have.” It’s the backbone of trust, adoption, and long‑term resilience in AI, and a critical pillar for the startup ecosystem. But these founders aren’t naïve about the challenges. Training frontier‑scale models costs staggering sums, and the gravitational pull of a few dominant labs is real. Yet companies like Union.ai, Jozu, and Oumi show that openness can still be a moat — if it’s treated as a design choice, not a marketing flourish.
Their message is clear: open‑washing won’t cut it. True openness means clarity about what’s shared —weights, data, governance, standards — and why. It means building communities that outlast any single company. And it means choosing investors who understand that open‑source flywheels take time to spin up.
Community as the real competitive edge
Across November’s sessions, founders returned to a simple truth: community is the moat. Flyte’s growth into a Linux Foundation project, Jozu’s push for open packaging standards, and Lelapa’s community‑governed language datasets all demonstrate that the most durable advantage isn’t proprietary code — it’s shared infrastructure that people trust.
Communities harden technology, surface edge cases, and create the kind of inertia that keeps systems in place long after competitors appear. But they also require care: documentation, governance, contributor experience, and transparency. As one founder put it, “You can’t build community overnight. It’s years of nurturing.”
Ethics as infrastructure
One of the most powerful threads came from Lelapa AI, which reframes data not as raw material to be mined but as cultural property. Their licensing model, inspired by Māori data sovereignty, ensures that African languages — and the communities behind them — benefit from the value they create. This is openness with accountability, a model that challenges extractive norms and points toward a more equitable AI ecosystem.
It’s a reminder that ethical design isn’t a layer on top of technology — it’s part of the architecture.
The real competitor: fear
Founders spoke candidly about the biggest barrier to adoption: fear. Enterprises default to hyperscalers because no one gets fired for choosing the biggest vendor. Overcoming that inertia requires more than values. It requires reliability, security features, SSO, RBAC, audit logs — the “boring” but essential capabilities that make open systems viable in real organizations.
In other words, trust is built not only through ideals but through operational excellence.
A blueprint for builders
Across all 16 essays, a blueprint started to emerge for founders and startups committed to building responsible technology and open source AI:
- Design openness as a strategic asset, not a giveaway.
- Invest in community early, even before revenue.
- Treat data ethics as non‑negotiable, especially when working with marginalized communities.
- Name inertia as a competitor, and build the tooling that makes adoption feel safe.
- Choose aligned investors, because misaligned capital can quietly erode your mission.
Taken together, the 16 essays in this report point to something larger than any single technology or trend. They show founders wrestling with how AI is governed, how trust is earned, how social systems can be rebuilt at human scale, and how innovation looks different when it starts from Lagos or Johannesburg instead of Silicon Valley.
The future of AI doesn’t have to be centralized, extractive or opaque. The founders in this portfolio are proving that openness, trustworthiness, diversity, and public benefit can reinforce one another — and that competitive companies can be built on all four.
We hope you’ll dig into the report, explore the ideas these founders are surfacing, and join us in backing the people building what comes next.
The post How founders are meeting the moment: Lessons from Mozilla Ventures’ 2025 portfolio convening appeared first on The Mozilla Blog.
Tarek Ziadé: The Economics of AI Coding: A Real-World Analysis
My whole stream in the past months has been about AI coding. From skeptical engineers who say it creates unmaintainable code, to enthusiastic (or scared) engineers who say it will replace us all, the discourse is polarized. But I’ve been more interested in a different question: what does AI coding actually cost, and what does it actually save?
I recently had Claude help me with a substantial refactoring task: splitting a monolithic Rust project into multiple workspace repositories with proper dependency management. The kind of task that’s tedious, error-prone, and requires sustained attention to detail across hundreds of files. When it was done, I asked Claude to analyze the session: how much it cost, how long it took, and how long a human developer would have taken.
The answer surprised me. Not because AI was faster or cheaper (that’s expected), but because of how much faster and cheaper.
The Task: Repository Split and Workspace Setup
The work involved:
- Planning and researching the codebase structure
- Migrating code between three repositories
- Updating thousands of import statements
- Configuring Cargo workspaces and dependencies
- Writing Makefiles and build system configuration
- Setting up CI/CD workflows with GitHub Actions
- Updating five different documentation files
- Running and verifying 2300+ tests
- Creating branches and writing detailed commit messages
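The workspace-configuration step in that list usually comes down to a root manifest along these lines. This is a generic sketch; the member names and dependencies are illustrative, not taken from the actual project described in the post:

```toml
# Hypothetical workspace-root Cargo.toml; member names are illustrative.
[workspace]
resolver = "2"
members = ["core", "bindings", "cli"]

# Shared metadata that member crates can inherit via `version.workspace = true`.
[workspace.package]
version = "0.2.0"
edition = "2021"

# Shared dependency versions; members opt in with `serde = { workspace = true }`.
[workspace.dependencies]
serde = { version = "1", features = ["derive"] }
```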
This is real work. Not a toy problem, not a contrived benchmark. The kind of multi-day slog that every engineer has faced: important but tedious, requiring precision but not creativity.
The Numbers
AI Execution Time
Total: approximately 3.5 hours across two sessions
- First session (2-3 hours): Initial setup, file operations, dependency configuration, build testing, CI/CD setup
- Second session (15-20 minutes): Documentation updates, branch creation, final commits, todo tracking
Total tokens: 72,146 tokens
- Input tokens: ~45,000 (context, file reads, system prompts)
- Output tokens: ~27,000 (tool calls, code generation, documentation)
Estimated marginal cost: approximately $4.95
- Input: ~$0.90 (at ~$3/M tokens for Sonnet 4.5)
- Output: ~$4.05 (at ~$15/M tokens for Sonnet 4.5)
This is the marginal execution cost for this specific task. It doesn’t include my Claude subscription, the time I spent iterating on prompts and reviewing output, or the risk of having to revise or fix AI-generated changes. For a complete accounting, you’d also need to consider those factors, though for this task they were minimal.
Human Developer Time Estimate
Conservative estimate: 2-3 days (16-24 hours)
This is my best guess based on experience with similar tasks, but it comes with uncertainty. A senior engineer deeply familiar with this specific codebase might work faster. Someone encountering similar patterns for the first time might work slower. Some tasks could be partially templated or parallelized across a team.
Breaking down the work:
- Planning and research (2-4 hours): Understanding codebase structure, planning dependency strategy, reading PyO3/Maturin documentation
- Code migration (4-6 hours): Copying files, updating all import statements, fixing compilation errors, resolving workspace conflicts
- Build system setup (2-3 hours): Writing Makefile, configuring Cargo.toml, setting up pyproject.toml, testing builds
- CI/CD configuration (2-4 hours): Writing GitHub Actions workflows, testing syntax, debugging failures, setting up matrix builds
- Documentation updates (2-3 hours): Updating multiple documentation files, ensuring consistency, writing migration guides
- Testing and debugging (3-5 hours): Running test suites, fixing unexpected failures, verifying tests pass, testing on different platforms
- Git operations and cleanup (1-2 hours): Creating branches, writing commit messages, final verification
Even if we’re generous and assume a very experienced developer could complete this in 8 hours of focused work, the time and cost advantages remain substantial. The economics don’t depend on the precise estimate.
The Bottom Line
- AI: ~3.5 hours, ~$5 marginal cost
- Human: ~16-24 hours, ~$800-$2,400 (at $50-100/hr developer rate)
- Savings: approximately 85-90% time reduction, approximately 99% marginal cost reduction
These numbers compare execution time and per-task marginal costs. They don’t capture everything (platform costs, review time, long-term maintenance implications), but they illustrate the scale of the difference for this type of systematic refactoring work.
Why AI Was Faster
The efficiency gains weren’t magic. They came from specific characteristics of how AI approaches systematic work:
No context switching fatigue. Claude maintained focus across three repositories simultaneously without the cognitive load that would exhaust a human developer. No mental overhead from jumping between files, no “where was I?” moments after a break.
Instant file operations. Reading and writing files happens without the delays of IDE loading, navigation, or search. What takes a human seconds per file took Claude milliseconds.
Pattern matching without mistakes. Updating thousands of import statements consistently, without typos, without missing edge cases. No ctrl-H mistakes, no regex errors that you catch three files later.
Parallel mental processing. Tracking multiple files at once without the working memory constraints that force humans to focus narrowly.
Documentation without overhead. Generating comprehensive, well-structured documentation in one pass. No switching to a different mindset, no “I’ll document this later” debt.
Error recovery. When workspace conflicts or dependency issues appeared, Claude fixed them immediately without the frustration spiral that can derail a human’s momentum.
Commit message quality. Detailed, well-structured commit messages generated instantly. No wrestling with how to summarize six hours of work into three bullet points.
What Took Longer
AI wasn’t universally faster. Two areas stood out:
Initial codebase exploration. Claude spent time systematically understanding the structure before implementing. A human developer might have jumped in faster with assumptions (though possibly paying for it later with rework).
User preference clarification. Some back-and-forth on git dependencies versus crates.io, version numbering conventions. A human working alone would just make these decisions implicitly based on their experience.
These delays were minimal compared to the overall time savings, but they’re worth noting. AI coding isn’t instantaneous magic. It’s a different kind of work with different bottlenecks.
The Economics of Coding
Let me restate those numbers because they still feel surreal:
- 85-90% time reduction
- 99% marginal cost reduction
For this type of task, these are order-of-magnitude improvements over solo human execution. And they weren’t achieved through cutting corners or sacrificing immediate quality. The tests passed, the documentation was comprehensive, the commits were well-structured, the code compiled cleanly.
That said, tests passing and documentation existing are necessary but not sufficient signals of quality. Long-term maintainability, latent bugs that only surface later, or future refactoring friction are harder to measure immediately. The code is working, but it’s too soon to know if there are subtle issues that will emerge over time.
This creates strange economics for a specific class of work: systematic, pattern-based refactoring with clear success criteria. For these tasks, the time and cost reductions change how we value engineering effort and prioritize maintenance work.
I used to avoid certain refactorings because the payoff didn’t justify the time investment. Clean up import statements across 50 files? Update documentation after a restructure? Write comprehensive commit messages? These felt like luxuries when there was always more pressing work.
But at $5 marginal cost and 3.5 hours for this type of systematic task, suddenly they’re not trade-offs anymore. They’re obvious wins. The economics shift from “is this worth doing?” to “why haven’t we done this yet?”
What This Doesn’t Mean
Before the “AI will replace developers” crowd gets too excited, let me be clear about what this data doesn’t show:
This was a perfect task for AI. Systematic, pattern-based, well-scoped, with clear success criteria. The kind of work where following existing patterns and executing consistently matters more than creative problem-solving or domain expertise.
AI did not:
- Design the architecture (I did)
- Decide on the repository structure (I did)
- Choose the dependency strategy (we decided together)
- Understand the business context (I provided it)
- Know whether the tests passing meant the code was correct (I validated)
The task was pure execution. Important execution, skilled execution, but execution nonetheless. A human developer would have brought the same capabilities to the table, just slower and at higher cost.
Where This Goes
I keep thinking about that 85-90% time reduction for this specific type of task. Not simple one-liners where AI already shines, but systematic maintenance work with high regularity, strong compiler or test feedback, and clear end states.
Tasks with similar characteristics might include:
- Updating deprecated APIs across a large codebase
- Migrating from one framework to another with clear patterns
- Standardizing code style and patterns
- Refactoring for testability where tests guide correctness
- Adding comprehensive logging and monitoring
- Writing and updating documentation
- Creating detailed migration guides
Many maintenance tasks are messier: ambiguous semantics, partial test coverage, undocumented invariants, organizational constraints. The economics I observed here don’t generalize to all refactoring work. But for the subset that is systematic and well-scoped, the shift is significant.
All the work that we know we should do but often defer because it doesn’t feel like progress. What if the economics shifted enough for these specific tasks that deferring became the irrational choice?
I’m not suggesting AI replaces human judgment. Someone still needs to decide what “good” looks like, validate the results, understand the business context. But if the execution of systematic work becomes 10x cheaper and faster, maybe we stop treating certain categories of technical debt like unavoidable burdens and start treating them like things we can actually manage.
The Real Cost
There’s one cost the analysis didn’t capture: my time. I wasn’t passive during those 3.5 hours. I was reading Claude’s updates, reviewing file changes, answering questions, validating decisions, checking test results.
I don’t know exactly how much time I spent, but it was less than the 3.5 hours Claude was working. Maybe 2 hours of active engagement? The rest was Claude working autonomously while I did other things.
So the real comparison isn’t 3.5 AI hours versus 16-24 human hours. It’s 2 hours of human guidance plus 3.5 hours of AI execution versus 16-24 hours of human solo work. Still a massive win, but different from pure automation.
This feels like the right model: AI as an extremely capable assistant that amplifies human direction rather than replacing human judgment. The economics work because you’re multiplying effectiveness, not substituting one for the other.
Final Thoughts
Five dollars marginal cost. Three and a half hours. For systematic refactoring work that would have taken me days and cost hundreds or thousands of dollars in my time.
These numbers make me think differently about certain kinds of work. About how we prioritize technical debt in the systematic, pattern-based category. About what “too expensive to fix” really means for these specific tasks. About whether we’re approaching some software maintenance decisions with outdated economic assumptions.
I’m still suspicious of broad claims that AI fundamentally changes how we work. But I’m less suspicious than I was. When the economics shift this dramatically for a meaningful class of tasks, some things that felt like pragmatic trade-offs start to look different.
The tests pass. The documentation is up to date. And I paid less than the cost of a fancy coffee drink.
Maybe the skeptics and the enthusiasts are both right. Maybe AI doesn’t replace developers and maybe it does change some things meaningfully. Maybe it just makes certain kinds of systematic work cheap enough that we can finally afford to do them right.
What About Model and Pricing Changes?
One caveat worth noting: these economics depend on Claude Sonnet 4.5 at January 2026 pricing. Model pricing can change, model performance can regress or improve with updates, tool availability can shift, and organizational data governance constraints might limit what models you can use or what tasks you can delegate to them.
For individuals and small teams, this might not matter much in the short term. For larger organizations making long-term planning decisions, these factors matter. The specific numbers here are a snapshot, not a guarantee.
References
- Claude Code - The AI coding assistant used for this project
- rustnn project - The repository that was split
- Token pricing based on Claude API pricing as of January 2026
The Rust Programming Language Blog: What does it take to ship Rust in safety-critical?
This is another post in our series covering what we learned through the Vision Doc process. In our first post, we described the overall approach and what we learned about doing user research. In our second post, we explored what people love about Rust. This post goes deep on one domain: safety-critical software.
When we set out on the Vision Doc work, one area we wanted to explore in depth was safety-critical systems: software where malfunction can result in injury, loss of life, or environmental harm. Think vehicles, airplanes, medical devices, industrial automation. We spoke with engineers at OEMs, integrators, and suppliers across automotive (mostly), industrial, aerospace, and medical contexts.
What we found surprised us a bit. The conversations kept circling back to a single tension: Rust's compiler-enforced guarantees support much of what Functional Safety Engineers and Software Engineers in these spaces spend their time preventing, but once you move beyond prototyping into the higher-criticality parts of a system, the ecosystem support thins out fast. There is no MATLAB/Simulink Rust code generation. There is no OSEK or AUTOSAR Classic-compatible RTOS written in Rust or with first-class Rust support. The tooling for qualification and certification is still maturing.
Quick context: what makes software “safety-critical”
If you’ve never worked in these spaces, here’s the short version. Each safety-critical domain has standards that define a ladder of integrity levels: ISO 26262 in automotive, IEC 61508 in industrial, IEC 62304 in medical devices, DO-178C in aerospace. The details differ, but the shape is similar: as you climb the ladder toward higher criticality, the demands on your development process, verification, and evidence all increase, and so do the costs.1
This creates a strong incentive for decomposition: isolate the highest-criticality logic into the smallest surface area you can, and keep everything else at lower levels where costs are more manageable and you can move faster.
We'll use automotive terminology in this post (QM through ASIL D) since that's where most of our interviews came from, but the patterns generalize. These terms represent increasing levels of safety-criticality, with QM being the lowest and ASIL D being the highest. The story at low criticality looks very different from the story at high criticality, regardless of domain.
Rust is already in production for safety-critical systems
Before diving into the challenges, it is worth noting that Rust is not just being evaluated in these domains. It is deployed and running in production.
We spoke with a principal firmware engineer working on mobile robotics systems certified to IEC 61508 SIL 2:
"We had a new project coming up that involved a safety system. And in the past, we'd always done these projects in C using third party stack analysis and unit testing tools that were just generally never very good, but you had to do them as part of the safety rating standards. Rust presented an opportunity where 90% of what the stack analysis stuff had to check for is just done by the compiler. That combined with the fact that now we had a safety qualified compiler to point to was kind of a breakthrough." -- Principal Firmware Engineer (mobile robotics)
We also spoke with an engineer at a medical device company deploying IEC 62304 Class B software to intensive care units:
"All of the product code that we deploy to end users and customers is currently in Rust. We do EEG analysis with our software and that's being deployed to ICUs, intensive care units, and patient monitors." -- Rust developer at a medical device company
"We changed from this Python component to a Rust component and I think that gave us a 100-fold speed increase." -- Rust developer at a medical device company
These are not proofs of concept. They are shipping systems in regulated environments, going through audits and certification processes. The path is there. The question is how to make it easier for the next teams coming through.
Rust adoption is easiest at QM, and the constraints sharpen fast
At low criticality, teams described a pragmatic approach: use Rust and the crates ecosystem to move quickly, then harden what you ship. One architect at an automotive OEM told us:
"We can use any crate [from crates.io] [..] we have to take care to prepare the software components for production usage." -- Architect at Automotive OEM
But at higher levels, third-party dependencies become difficult to justify. Teams either rewrite, internalize, or strictly constrain what they use. An embedded systems engineer put it bluntly:
"We tend not to use 3rd party dependencies or nursery crates [..] solutions become kludgier as you get lower in the stack." -- Firmware Engineer
Some teams described building escape hatches, abstraction layers designed for future replacement:
"We create an interface that we'd eventually like to have to simplify replacement later on [..] sometimes rewrite, but even if re-using an existing crate we often change APIs, write more tests." -- Team Lead at Automotive Supplier (ASIL D target)
Even teams that do use crates from crates.io described treating that as a temporary accelerator, something to track carefully and remove from critical paths before shipping:
"We use crates mainly for things in the beginning where we need to set up things fast, proof of concept, but we try to track those dependencies very explicitly and for the critical parts of the software try to get rid of them in the long run." -- Team lead at an automotive software company developing middleware in Rust
In aerospace, the "control the whole stack" instinct is even stronger:
"In aerospace there's a notion of we must own all the code ourselves. We must have control of every single line of code." -- Engineering lead in aerospace
This is the first big takeaway: a lot of "Rust in safety-critical" is not just about whether Rust compiles for a target. It is about whether teams can assemble an evidence-friendly software stack and keep it stable over long product lifetimes.
The compiler is doing work teams used to do elsewhere
Many interviewees framed Rust’s value in terms of work shifted earlier and made more repeatable by the compiler. This is not just “nice”; it changes how much manual review you can realistically afford. Much of what was historically process-based enforcement through coding standards like MISRA C and CERT C becomes a language-level concern in Rust, checked by the compiler rather than by external static analysis or manual review.
"Roughly 90% of what we used to check with external tools is built into Rust's compiler." -- Principal Firmware Engineer (mobile robotics)
We heard variations of this from teams dealing with large codebases and varied skill levels:
"We cannot control the skill of developers from end to end. We have to check the code quality. Rust by checking at compile time, or Clippy tools, is very useful for our domain." -- Engineer at a major automaker
Even on smaller teams, the review load matters:
"I usually tend to work on teams between five and eight. Even so, it's too much code. I feel confident moving faster, a certain class of flaws that you aren't worrying about." -- Embedded systems engineer (mobile robotics)
Closely related: people repeatedly highlighted Rust's consistency around error handling:
"Having a single accepted way of handling errors used throughout the ecosystem is something that Rust did completely right." -- Automotive Technical Lead
For teams building products with 15-to-20-year lifetimes and "teams of teams," compiler-enforced invariants scale better than "we will just review harder."
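The “single accepted way of handling errors” that the quote above refers to is the Result type combined with the ? operator. Here is a minimal sketch of the idiom; the sensor types, names, and thresholds are invented purely for illustration:

```rust
// Illustrative only: a fallible pipeline where every step returns Result
// and `?` propagates failures in one uniform way.

#[derive(Debug, PartialEq)]
enum SensorError {
    OutOfRange,
}

// Validate a raw 12-bit ADC reading; no exceptions, no sentinel values.
fn read_raw(raw: i32) -> Result<i32, SensorError> {
    if (0..=4095).contains(&raw) {
        Ok(raw)
    } else {
        Err(SensorError::OutOfRange)
    }
}

// `?` forwards any error to the caller; the happy path stays linear.
fn to_millivolts(raw: i32) -> Result<i32, SensorError> {
    let validated = read_raw(raw)?;
    Ok(validated * 3300 / 4096)
}

fn main() {
    assert_eq!(to_millivolts(2048), Ok(1650));
    assert_eq!(to_millivolts(5000), Err(SensorError::OutOfRange));
}
```

Because the same convention is used in core, std, and most of the ecosystem, reviewers only have to learn one error-propagation idiom, which is the consistency the quote credits Rust with.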
Teams want newer compilers, but also stability they can explain
A common pattern in safety-critical environments is conservative toolchain selection. But engineers pointed out a tension: older toolchains carry their own defect history.
"[..] traditional wisdom is that after something's been around and gone through motions / testing then considered more stable and safer [..] older compilers used tend to have more bugs [and they become] hard to justify" -- Software Engineer at an Automotive supplier
Rust's edition system was described as a real advantage here, especially for incremental migration strategies that are common in automotive programs:
"[The edition system is] golden for automotive, where incremental migration is essential." -- Software Engineer at major Automaker
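The edition mechanism the quote praises is per-crate: two crates in the same build can sit on different editions and still link together, which is what makes incremental migration possible. A sketch, with crate names invented for illustration:

```toml
# legacy-component/Cargo.toml -- still on an older edition
[package]
name = "legacy-component"
version = "0.1.0"
edition = "2018"
```

```toml
# new-component/Cargo.toml -- already migrated; both crates build in one workspace
[package]
name = "new-component"
version = "0.1.0"
edition = "2021"
```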
In practice, "stability" is also about managing the mismatch between what the platform supports and what the ecosystem expects. Teams described pinning Rust versions, then fighting dependency drift:
"We can pin the Rust toolchain, but because almost all crates are implemented for the latest versions, we have to downgrade. It's very time-consuming." -- Engineer at a major automaker
For safety-critical adoption, "stability" is operational. Teams need to answer questions like: What does a Rust upgrade change, and what does it not change? What are the bounds on migration work? How do we demonstrate we have managed upgrade risk?
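The pinning described above is conventionally done with a rust-toolchain.toml file committed at the repository root, which rustup picks up automatically. The version number here is just an example:

```toml
# rust-toolchain.toml: every developer and CI job gets the same toolchain.
[toolchain]
channel = "1.85.0"
components = ["clippy", "rustfmt"]
profile = "minimal"
```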
Target support matters in practical ways
Safety-critical software often runs on long-lived platforms and RTOSs. Even when “support exists,” there can be caveats. Teams described friction around targets like QNX, where upstream Rust support exists but with limitations (for example, QNX 8.0 support is currently no_std only).2
This connects to Rust's target tier policy: the policy itself is clear, but regulated teams still need to map "tier" to "what can I responsibly bet on for this platform and this product lifetime."
"I had experiences where all of a sudden I was upgrading the compiler and my toolchain and dependencies didn't work anymore for the Tier 3 target we're using. That's simply not acceptable. If you want to invest in some technology, you want to have a certain reliability." -- Senior software engineer at a major automaker
core is the spine, and it sets expectations
In no_std environments, core becomes the spine of Rust. Teams described it as both rich enough to build real products and small enough to audit.
A lot of Rust's safety leverage lives there: Option and Result, slices, iterators, Cell and RefCell, atomics, MaybeUninit, Pin. But we also heard a consistent shape of gaps: many embedded and safety-critical projects want no_std-friendly building blocks (fixed-size collections, queues) and predictable math primitives, but do not want to rely on "just any" third-party crate at higher integrity levels.
"Most of the math library stuff is not in core, it's in std. Sin, cosine... the workaround for now has been the libm crate. It'd be nice if it was in core." -- Principal Firmware Engineer (mobile robotics)
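As a concrete illustration of the kind of building block people are asking for, here is a minimal sketch of a fixed-size, heap-free vector built only from core items. This illustrates the gap rather than filling it: it is not production code (a real implementation would at least need a Drop impl to run element destructors), and FixedVec is a hypothetical name:

```rust
use core::mem::MaybeUninit;

// A stack-allocated vector with capacity fixed at compile time:
// no heap, no allocator, usable on no_std targets.
struct FixedVec<T, const N: usize> {
    buf: [MaybeUninit<T>; N], // storage lives inline
    len: usize,
}

impl<T, const N: usize> FixedVec<T, N> {
    fn new() -> Self {
        FixedVec {
            // Sound: an array of MaybeUninit requires no initialization.
            buf: unsafe { MaybeUninit::uninit().assume_init() },
            len: 0,
        }
    }

    // Fails instead of allocating when full, which is usually the
    // behavior wanted at higher integrity levels.
    fn push(&mut self, value: T) -> Result<(), T> {
        if self.len == N {
            return Err(value);
        }
        self.buf[self.len] = MaybeUninit::new(value);
        self.len += 1;
        Ok(())
    }

    fn get(&self, i: usize) -> Option<&T> {
        if i < self.len {
            // Sound: every slot below len was initialized by push.
            Some(unsafe { self.buf[i].assume_init_ref() })
        } else {
            None
        }
    }
}

fn main() {
    let mut v: FixedVec<u32, 2> = FixedVec::new();
    assert!(v.push(1).is_ok());
    assert!(v.push(2).is_ok());
    assert!(v.push(3).is_err()); // full: rejected, nothing allocated
    assert_eq!(v.get(0), Some(&1));
    assert_eq!(v.get(2), None);
}
```

Crates like heapless provide exactly this kind of primitive today; the interviewees’ point is that at higher integrity levels they would rather see such building blocks in core than depend on a third-party crate.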
Async is appealing, but the long-run story is not settled
Some safety-critical-adjacent systems are already heavily asynchronous: daemons, middleware frameworks, event-driven architectures. That makes Rust’s async story interesting.
But people also expressed uncertainty about ecosystem lock-in and what it would take to use async in higher-criticality components. One team lead developing middleware told us:
"We're not sure how async will work out in the long-run [in Rust for safety-critical]. [..] A lot of our software is highly asynchronous and a lot of our daemons in the AUTOSAR Adaptive Platform world are basically following a reactor pattern. [..] [C++14] doesn't really support these concepts, so some of this is lack of familiarity." -- Team lead at an automotive software company developing middleware in Rust
And when teams look at async through an ISO 26262 lens, the runtime question shows up immediately:
"If we want to make use of async Rust, of course you need some runtime which is providing this with all the quality artifacts and process artifacts for ISO 26262." -- Team lead at an automotive software company developing middleware in Rust
Async is not "just a language feature" in safety-critical contexts. It pulls in runtime choices, scheduling assumptions, and, at higher integrity levels, the question of what it would mean to certify or qualify the relevant parts of the stack.
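To make the runtime point concrete: the language and core define the Future trait, but deliberately ship no executor, so every async deployment implicitly chooses one. A minimal busy-polling executor written only against core shows how small that contract is; this is an illustrative sketch, emphatically not a qualified or production runtime:

```rust
use core::future::Future;
use core::pin::Pin;
use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing: sufficient for a busy-polling loop that
// re-polls unconditionally instead of waiting to be woken.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    fn no_op(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(core::ptr::null(), &VTABLE)
}

/// Drive a future to completion by polling it in a loop. Every scheduling
/// decision a real runtime makes (fairness, timers, wakeup latency) is
/// absent here -- which is exactly what a safety case would have to cover.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    // SAFETY: the vtable functions are all no-ops, and `fut` is never
    // moved again after being pinned here.
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // An async block is just a Future; without an executor it does nothing.
    assert_eq!(block_on(async { 40 + 2 }), 42);
}
```

Everything a standard like ISO 26262 would ask about, such as scheduling assumptions, wakeup semantics, and worst-case behavior, lives in `block_on` and its real-world equivalents, not in the language itself.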
Recommendations
Find ways to help the safety-critical community support their own needs. Open source helps those who help themselves. The Ferrocene Language Specification (FLS) shows this working well: it started as an industry effort to create a specification suitable for safety-qualification of the Rust compiler, companies invested in the work, and it now has a sustainable home under the Rust Project with a team actively maintaining it.[3]
Contrast this with MC/DC coverage support in rustc. Earlier efforts stalled due to lack of sustained engagement from safety-critical companies.[4] The technical work was there, but without industry involvement to help define requirements, validate the implementation, and commit to maintaining it, the effort lost momentum. A major concern was that the MC/DC code added maintenance burden to the rest of the coverage infrastructure without a clear owner. Now, in 2026, there is renewed interest in doing this the right way: companies are working through the Safety-Critical Rust Consortium to create a Rust Project Goal in 2026 to collaborate with the Rust Project on MC/DC support. The model is shared ownership of requirements, with primary implementation and maintenance done by companies with a vested interest in safety-critical software, in a way that does not impede maintenance of the rest of the coverage code.
The remaining recommendations follow this pattern: the Safety-Critical Rust Consortium can help the community organize requirements and drive work, with the Rust Project providing the deep technical knowledge of Rust Project artifacts needed for successful collaboration. The path works when both sides show up.
Establish ecosystem-wide MSRV (minimum supported Rust version) conventions. The dependency drift problem is real: teams pin their Rust toolchain for stability, but crates targeting the latest compiler make this difficult to sustain. An LTS release scheme, combined with encouraging libraries to maintain MSRV compatibility with LTS releases, could reduce this friction. This would require coordination between the Rust Project (potentially the release team) and the broader ecosystem, with the Safety-Critical Rust Consortium helping to articulate requirements and adoption patterns.
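As a concrete illustration of what an MSRV convention looks like at the crate level today, Cargo's rust-version field lets a library declare its MSRV so consumers on a pinned toolchain get a clear error instead of a mysterious build failure. The crate name and version number below are purely illustrative:

```toml
[package]
name = "example-lib"   # illustrative crate name
version = "0.1.0"
edition = "2021"
# Declares the minimum supported Rust version. Cargo refuses to build the
# crate on older toolchains with an explicit error, and dependency
# resolution can take this field into account when selecting versions.
rust-version = "1.75"  # illustrative MSRV, e.g. tracking a hypothetical LTS
```

An ecosystem-wide convention would essentially standardize what value goes in that field and how long it stays supported.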
Turn "target tier policy" into a safety-critical onramp. The friction we heard is not about the policy being unclear, it is about translating "tier" into practical decisions. A short, target-focused readiness checklist would help: Which targets exist? Which ones are no_std only? What is the last known tested OS version? What are the top blockers? The raw ingredients exist in rustc docs, release notes, and issue trackers, but pulling them together in one place would lower the barrier. Clearer, consolidated information also makes it easier for teams who depend on specific targets to contribute to maintaining them. The Safety-Critical Rust Consortium could lead this effort, working with compiler team members and platform maintainers to keep the information accurate.
Document "dependency lifecycle" patterns teams are already using. The QM story is often: use crates early, track carefully, shrink dependencies for higher-criticality parts. The ASIL B+ story is often: avoid third-party crates entirely, or use abstraction layers and plan to replace later. Turning those patterns into a reusable playbook would help new teams make the same moves with less trial and error. This seems like a natural fit for the Safety-Critical Rust Consortium's liaison work.
Define requirements for a safety-case friendly async runtime. Teams adopting async in safety-critical contexts need runtimes with appropriate quality and process artifacts for standards like ISO 26262. Work is already happening in this space.[5] The Safety-Critical Rust Consortium could lead the effort to define what "safety-case friendly" means in concrete terms, working with the async working group and libs team on technical feasibility and design.
Treat interop as part of the safety story. Many teams are not going to rewrite their world in Rust. They are going to integrate Rust into existing C and C++ systems and carry that boundary for years. Guidance and tooling to keep interfaces correct, auditable, and in sync would help. The compiler team and lang team could consider how FFI boundaries are surfaced and checked, informed by requirements gathered through the Safety-Critical Rust Consortium.
"We rely very heavily on FFI compatibility between C, C++, and Rust. In a safety-critical space, that's where the difficulty ends up being, generating bindings, finding out what the problem was." -- Embedded systems engineer (mobile robotics)
Conclusion
To sum up the main points in this post:
- Rust is already deployed in production for safety-critical systems, including mobile robotics (IEC 61508 SIL 2) and medical devices (IEC 62304 Class B). The path exists.
- Rust's defaults (memory safety, thread safety, strong typing) map directly to much of what Functional Safety Engineers spend their time preventing. But ecosystem support thins out as you move toward higher-criticality software.
- At low criticality (QM), teams use crates freely and harden later. At higher levels (ASIL B+), third-party dependencies become difficult to justify, and teams rewrite, internalize, or build abstraction layers for future replacement.
- The compiler is doing work that used to require external tools and manual review. Much of what was historically process-based enforcement through standards like MISRA C and CERT C becomes a language-level concern, checked by the compiler. That can scale better than "review harder" for long-lived products with large teams and supports engineers in these domains feeling more secure in the systems they ship.
- Stability is operational: teams need to explain what upgrades change, manage dependency drift, and map target tier policies to their platform reality.
- Async is appealing for middleware and event-driven systems, but the runtime and qualification story is not settled for higher-criticality use.
We make six recommendations: find ways to help the safety-critical community support their own needs, establish ecosystem-wide MSRV conventions, create target-focused readiness checklists, document dependency lifecycle patterns, define requirements for safety-case friendly async runtimes, and treat C/C++ interop as part of the safety story.
Get involved
If you're working in safety-critical Rust, or you want to help make it easier, check out the Rust Foundation's Safety-Critical Rust Consortium and the in-progress Safety-Critical Rust coding guidelines.
Hearing concrete constraints, examples of assessor feedback, and what "evidence" actually looks like in practice is incredibly helpful. The goal is to make Rust's strengths more accessible in environments where correctness and safety are not optional.
1. If you're curious about how rigor scales with cost in ISO 26262, this Feabhas guide gives a good high-level overview. ↩
2. See the QNX target documentation for current status. ↩
3. The FLS team was created under the Rust Project in 2025. The team is now actively maintaining the specification, reviewing changes and keeping the FLS in sync with language evolution. ↩
4. See the MC/DC tracking issue for context. The initial implementation was removed due to maintenance concerns. ↩
5. Eclipse SDV's Eclipse S-CORE project includes an Orchestrator written in Rust for their async runtime, aimed at safety-critical automotive software. ↩
Firefox Nightly: Phasing Out the Older Version of Firefox Sidebar in 2026
Over a year ago, we introduced an updated version of the sidebar that offers easy access to multiple tools – bookmarks, history, tabs from other devices, and a selection of chatbots – all in one place. As the new version has gained popularity and we plan our future work, we have decided to retire the older version in 2026.
Old sidebar version
Updated sidebar version
We know that changes like this can be disruptive – especially when they affect established workflows you rely on every day. While use of the older version has been declining, it remains a familiar and convenient tool for many – especially long-time Firefox users who have built workflows around it.
Unfortunately, supporting two versions means dividing the time and attention of a very small team. By focusing on a single updated version, we can fix issues more quickly, incorporate feedback more efficiently, and deliver new features more consistently for everyone. For these reasons, in 2026, we will focus on improving the updated sidebar to provide many of the conveniences of the older version, then transition everyone to the updated version.
Here’s what to expect:
- Starting with Firefox Nightly 148, we have turned on the new sidebar by default for Nightly users. The new default will remain Nightly-only for a few releases, allowing us to implement planned improvements, address existing community requests, and collect additional feedback.
- In Q2 2026, all users of the older version on release will be migrated to the updated sidebar. After the switch, we will keep the option to return to the older version for a period of time, to support folks affected by any bugs we fail to discover during Nightly testing. During this period, you will still be able to temporarily switch back to the old sidebar by going to Firefox Settings > General > Browser Layout and unchecking the Show sidebar option.
- In Q3 2026, we will fully retire the original sidebar and remove the associated pref as we complete the transition.
Our goal is to make our transition plans transparent and implement suggested improvements that are feasible within the new interaction model, while preserving the speed and flexibility that long-time sidebar users value. Several implemented and planned improvements to the updated sidebar were informed by your feedback, and we expect that to continue throughout the transition:
- Hide sidebar launcher on panel close if it was hidden prior to panel open (planned)
- Full screen mode doesn’t hide sidebar tools (planned)
If you’d like to share what functionality you’ve been missing in the new sidebar and what challenges you’ve experienced when you tried to adopt it, please share your thoughts in this Mozilla Connect thread or file a bug in Bugzilla’s Sidebar component, so your feedback can continue shaping Firefox.
Firefox Developer Experience: Firefox WebDriver Newsletter 147
WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).
This newsletter gives an overview of the work we’ve done as part of the Firefox 147 release cycle.
Contributions
Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues and bugs or submitted patches.
In Firefox 147, two WebDriver bugs were fixed by contributors:
- Sajid Anwar fixed an issue with browsingContext.navigate which could return a payload with an incorrect URL.
- Khalid AlHaddad added new WebDriver classic tests for the “Get All Cookies” command.
WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help getting started!
General
- Fixed the new session response to include the required setWindowRect property.
- Implemented the input.fileDialogOpened event, which is emitted whenever a file picker is triggered by the content page, for instance after clicking on an input with type="file".
- Implemented the emulation.setScreenSettingsOverride command to allow clients to emulate the screen dimensions for a list of browsing contexts or user contexts.
- Fixed an issue where browsingContext.navigate with wait=none didn’t always contain the real target URL.
- Updated script.evaluate and script.callFunction to bypass Content Security Policy (CSP).
- Fixed missing script.realmCreated event for new browsing contexts created via window.open.
- Updated emulation.setLocaleOverride to override the Accept-Language header.
- Updated emulation.setLocaleOverride to throw an error when called with the locale argument equal to undefined.
Firefox Tooling Announcements: Engineering Effectiveness Newsletter (Q4 2025 Edition)
- AGENTS.md and CLAUDE.md were added to the Firefox repository.
- An AI coding policy was published in the Firefox source docs.
- Suhaib Mujahid built an MCP server to facilitate the integration of AI assistants with Firefox development tooling. It enables AI assistants to search using Searchfox, read Bugzilla bugs and Phabricator revisions, access Firefox source documentation, and streamline patch review workflows.
- Suhaib Mujahid extended the test selection system to work with local changes, enabling AI assistants to leverage our ML-based test selection to automatically identify relevant tests and iterate faster during development.
- Suhaib Mujahid implemented improvements to the Review Helper tool to improve the accuracy of suggested review comments.
- Thanks to Kohei, when a user enters a comment on the show-bug page, the page now updates instantly without a reload (see Bug 1993761).
- Thanks to external contributor Logan Rosen for updating Bugzilla to use a newer version of libcmark-gfm, which resolves some issues with rendering of Markdown in comments (see Bug 1802047).
- The dependency on Makefile.in has been reduced. The path is still long, but it's getting a bit closer (see Bug 847009).
- Faster configure step thanks to faster warning-flag checks (see Bug 1985940).
- Alex Hochheiden upgraded the JavaScript minifier from jsmin to Terser and enabled minification for pdf.js to improve loading performance.
- Alex Hochheiden optimized glean-gradle-plugin and NimbusGradlePlugin configuration, gaining a ~10s configuration-time speedup and saving ~200MB of disk space.
- Your CI tasks are going to start faster! After many changes of various sizes, the entire Release Engineering team is proud to announce that the decision task is as fast as the best record from 2019, and faster than ever on autoland. We intend to beat the record on try with a few more patches close to landing.
- Windows tests now start twice as fast! Thanks to improvements in how we provision Windows machines in the cloud, Yaraslau Kurmyza and RelOps cut startup delays dramatically. Since December 9th, getting a Windows worker ready takes a third of the time it used to, which has cut Windows test wait times in half.
- Ever wondered if your try push scheduled the right tasks? Treeherder now shows unscheduled jobs too. Hit s to toggle their visibility and cut down on CI guesswork!
- Abhishek Madan made various performance improvements to the decision tasks, totalling around a 25% improvement.
- Abhishek Madan switched Decision tasks to a faster worker type.
- Andrew Halberstadt kicked off the CI migration from hg.mozilla.org to GitHub, implementing:
  - Shallow clone support in run-task
  - A dedicated Decision task that responds to GitHub events
- Ben Hearsum added support for outputting the relationships between taskgraph kinds as Mermaid diagrams, making task relationships easier to visualize.
- Matt Boris added the finishing touches on D2G (the Docker Worker to Generic Worker translation layer), enabling Julien Cristau to begin rolling changes out to L3 pools.
- New include linter via mach lint -l includes. Unused MFBT and standard C++ headers are reported.
- Alex Hochheiden fixed many lint warnings and upgraded them to errors.
- Alex Hochheiden replaced black with ruff-format.
- Calixte implemented most of the backend functionality to support reorganizing pages and splitting and merging PDFs.
- Calixte added support for tagged math in PDFs, making math content accessible.
- Tim van der Meij helped with maintenance and improvements to pdf.js CI, such as using OIDC trusted publishing.
- Aditi made it so we serialize pattern data into an ArrayBuffer, paving the way for moving pdf.js rendering to worker threads.
- Arthur Silber improved text rendering performance by skipping unnecessary pattern calculations, leading to up to an 84% reduction in pdfpaint time for some PDFs.
- Calixte added support for the pdfium jbig2 decoder compiled to wasm, in order to replace the pure-JS version.
- (Bug 1975487, 1994794, 1995403) Erik Nordin shipped significant improvements to the Translations experience when translating web pages between left-to-right and right-to-left languages.
- (Bug 1967758) Erik Nordin improved the algorithm for page-language detection, centralizing the behavior in the parent process instead of creating a separate language detector instance per content process.
- Evgeny Pavlov trained Chinese Traditional
- Sergio Ortiz Rojas trained English to Vietnamese
- Evgeny Pavlov created new evaluation dashboards with expanded metrics, datasets, and LLM explanations.
- Evgeny Pavlov migrated the model registry from GitHub to Google Cloud Storage with the updated UI (new models JSON).
- Zeid and Olivier implemented various changes in Lando to support the GitHub pull request pilot project.
- Zeid added support for short hashes when querying git2hg commit maps in Lando.
- Connor Sheehan implemented uplift requests as background jobs, providing many improvements to the uplift request workflow in Lando:
  - Merge conflict detection at job completion time, instead of at landing time.
  - Uplifting to multiple trains at once, with failure notification emails that provide step-by-step commands to resolve the conflict and re-submit.
  - An uplift assessment form linking workflow, to avoid re-submitting the same form when manually resolving merge conflicts for an uplift.
- Connor Sheehan made it possible to select individual commits in the stack for uplift, instead of always uplifting the parent commits for a given revision.
- Connor Sheehan added a new uplift assessment linking view and hooked it into moz-phab uplift, removing a few steps between submitting an uplift request and opening the form for submission or linking to the new request.
- moz-phab had several new releases:
  - Mathew Hodson restored the --upstream argument to moz-phab submit.
  - Jujutsu support saw improvements to moz-phab patch, better handling of working copy changes, and a minimum jj version bump to 0.33.
  - moz-phab uplift saw a few changes to enable better integration with the Lando-side changes.
  - See the release notes:
    - https://discourse.mozilla.org/t/mozphab-2-6-0-released/146283
    - https://discourse.mozilla.org/t/mozphab-2-7-0-released/146293
    - https://discourse.mozilla.org/t/mozphab-2-7-1-released/146295
    - https://discourse.mozilla.org/t/mozphab-2-7-2-released/146339
    - https://discourse.mozilla.org/t/mozphab-2-8-0-released/146434
    - https://discourse.mozilla.org/t/mozphab-2-8-1-released/146774
- Connor Sheehan added clonebundle buckets in the us-east1 GCP region to improve clone times in CI.
- Julien Cristau added the new tags Mercurial branches to mozilla-unified.
- Julien Cristau and Olivier Mehani took steps to reduce OOM issues on the hg push server.
- Julien Cristau resolved a Kafka issue by pruning try heads and fixing try-heads alerting, and Greg Cox increased the storage in Kafka in support of the mitigation.
- Greg Cox implemented staggered auto-updating with reboots on the load balancers in front of hg.mozilla.org.
Thanks for reading and see you next month!