Mozilla Nederland: The Dutch Mozilla community

Mozilla Addons Blog: Understanding Extension Permission Requests

Mozilla planet - Thu, 01/02/2018 - 20:10

An extension is software developed by a third party that modifies how you experience the web in Firefox. Since they work by tapping into the inner workings of Firefox, but are not built by Mozilla, it’s good practice to understand the permissions they ask for and how to make decisions about what to install. While rare, a malicious extension can do things like steal your data or track your browsing across the web without you realizing it.

We have been taking steps to reduce the risk of extensions, the most significant of which was moving to a WebExtensions architecture with the release of Firefox 57 last fall. The new APIs limit an extension’s ability to access certain parts of the browser and the information they process. We also have a variety of security measures in place, such as a review process that is designed to make it difficult for malicious developers to publish extensions. Nevertheless, these systems cannot guarantee that extensions will be 100% safe.

Here’s where you come in

We want to make it easier for you to make informed decisions about the extensions you install, by providing transparency about what individual extensions can do. Since transitioning to the WebExtensions API, we have been displaying a permissions message corresponding to the extension you are installing.

Extensions have always had access to this type of information, but by showing you what they are (and telling you what they mean), we hope to help you become more savvy about choosing safe extensions.

How about the scary-sounding one?

There is one permission in particular, “Access your data for all websites”, that we’ve gotten many questions about since the feature launched. It’s worded this way because a web page can contain virtually anything, and some extensions need to read everything on it in order to perform an action based on what the page contains.

For example, an ad blocker needs to read all web page content to identify and remove ad code. A password manager needs to detect and write to username and password fields. A shopping extension might need to read details of the products you’re searching for.

Since neither these extensions nor Firefox can know whether a particular web page contains the content an extension needs until the page is loaded, the extension needs access to everything on the page so it can find and modify the appropriate parts. This means that in theory, while rare, a malicious developer could tell you their extension does one thing while it actually does something else.
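For context, this message is driven by the host permissions an extension requests in its manifest. Below is a minimal sketch of a WebExtensions manifest.json that would trigger the “Access your data for all websites” prompt; the extension name and script file are made up for illustration:

{
  "manifest_version": 2,
  "name": "hypothetical-ad-blocker",
  "version": "1.0",
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["filter.js"]
  }]
}

The <all_urls> match pattern asks for access to every page, which Firefox summarizes in the install prompt as access to your data for all websites.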

How do I stay safe?

While there is an element of risk to installing any third-party software, there are a few simple best practices you can follow to reduce it. Is the extension made by a reputable developer? Are the user ratings high? Are the permission requests consistent with the features of the extension?

We’ve compiled a short checklist of questions to consider in our support forum. These best practices can help you evaluate any potential software you install, and feel safer and better informed wherever you are on the web.

The post Understanding Extension Permission Requests appeared first on Mozilla Add-ons Blog.

Categories: Mozilla-nl planet

Air Mozilla: Reps Weekly Meeting, 01 Feb 2018

Mozilla planet - Thu, 01/02/2018 - 17:00

Reps Weekly Meeting This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Categories: Mozilla-nl planet

Mozilla Release Management Team: Janitor project - Newsletter 10

Mozilla planet - Thu, 01/02/2018 - 14:00

Jan Keromnes is a Senior Software Engineer working for the Release Management team on tools and automation and is the lead developer for Janitor.

Janitor offers developer environments as a service for Firefox, Servo and other open source projects. It uses Cloud9 IDE (front-end), Docker servers (back-end), and is 100% web-based so you can jump straight into fresh on-demand environments that are pre-configured and ready for work, without wasting time setting up yet another local checkout or VM.

This newsletter was initially published on the Janitor Discourse forum.

Happy 2018 everyone!

We hope you’ve had a smooth start to the year, and wish you all the best in your life and projects. This is your recurrent burst of good news about Janitor.

First Survey

We have big plans for 2018, and about 500 people now use Janitor to contribute to open source software. We’d love to understand what you’re getting out of Janitor, and what we could improve to make your life easier.

2018 Janitor Survey (should take < 3 minutes)

Please help us do our best work this year. In return, we’ll publicly share the stats and insights via our blog.

Towards Windows Support

Last month at Mozilla’s All Hands in Austin, we announced Windows environments in Janitor for mid-2018. You can watch the lightning talk and browse the slides online.

Since then, we’ve iterated on a prototype Windows image for Firefox (based on a Windows 10 VM in Azure) and we’re now looking into using Azure’s REST API to allow Janitor users to spawn and automatically configure new VMs based on our Firefox Windows image. This is similar to spawning and auto-configuring new Docker containers based on our Linux images today.

It’s still early days, but if you’re excited about Windows support, you can track our progress with the new Janitor Windows roadmap.

Announcing Janitor 0.0.10

We’ve improved, upgraded and extended Janitor in many cool ways. So much so that the next release should hopefully take us from Alpha to Beta, bringing even more exciting features, supported open source projects, users, speed, stability and scalability.

Here is what we did since 0.0.9 was released 4 months ago:

  • Quick preview URLs in the IDE (notriddle)
  • Improved Run scripts for most projects (janx)
  • Enabled collaborative editing in the IDE (janx)
  • New website design for Janitor to be released soon (ntim, arshad, notriddle)
  • New containers page with a cool SSH one-liner (ntim)
  • New blog page populated directly from our Discourse (notriddle)
  • New OVH1 Docker server, our most powerful yet (16 CPU, 64GB RAM, 2TB SSD)
  • Added the PeerTube project (janx, bnjbvr, Chocobozzz)
  • Added the Yuzu Emulator project (etiennewan)
  • Refactored most of our Node.js modules to async/await
  • Tested Janitor on an iPad and it works! (Flaki)
  • Supported UTF-8 in all recent containers
  • Supported multiple email addresses per user, allowing imports from GitHub
  • Supported validation functions and ‘*’ URL parameters in our self-testing API system
  • Latest LLVM toolchain (clang 6.0, lld 6.0, lldb 6.0)
  • Latest Rust toolchain (stable 1.23.0, nightly 1.25.0)
  • Latest Git (2.16.1)
  • Latest Mercurial (4.4.1)
  • Latest Node.js (node 8.9.4, npm 5.6.0, nvm 0.33.8)
  • Latest fd (6.2.0)
  • Latest rg (0.7.1)
  • Latest rr (5.1.0)
  • Latest Vim 8 + latest Neovim
  • Latest Cloud9 SDK and noVNC
  • … plus many more upgrades, bug fixes, stability and performance improvements

And that’s a wrap! As always, please feel free to stop by our IRC channel and Discourse forum to learn more about this project. We’d love to meet you.

Thanks for your time!

Team Janitor

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: MDN browser compatibility data: Taking the guesswork out of web compatibility

Mozilla planet - Thu, 01/02/2018 - 10:53
Building the web is hard

The most powerful aspect of the web is also what makes it so challenging to build for: its universality. When you create a website, you’re writing code that needs to be understood by a plethora of browsers on different devices and operating systems. It’s difficult.

To make the web evolve in a sane and sustainable way for both users and developers, browser vendors work together to standardize new features, whether it’s a new HTML element, CSS property, or JavaScript API. But different vendors have different priorities, resources, and release cycles — so it’s very unlikely that a new feature will land on all the major browsers at once. As a web developer, this is something you must consider if you’re relying on a feature to build your site.

Even more challenging than waiting for browsers to implement a feature you need is catering for those that never will. Building for the web means supporting older – but sometimes still widely used – versions of browsers, with quirks and feature sets locked in time forever. For all this, choosing to use one particular feature over another on a website effectively means making a decision about which browsers it will optimally support.

Enter the Browser Compatibility Project

To help developers make these decisions consciously rather than accidentally, MDN Web Docs provides browser compatibility tables in its documentation pages, so that when looking up a feature you’re considering for your project, you know exactly which browsers will support it. Whilst unquestionably helpful, this resource alone doesn’t eliminate all challenges. First of all, it requires people to actively search for compatibility data for each feature they use, which may not be trivial for less experienced developers. In addition, there is the question of what happens when you inherit a project that already has hundreds or thousands of lines of code. Inspecting each line individually to look for compatibility issues doesn’t seem like a reasonable option.

To allow browser compatibility data to be accessed programmatically rather than requiring developers to manually search for it, the MDN community is working on migrating the compatibility information currently stored on thousands of wiki pages to a machine-readable JSON format in a GitHub repository. This opens the door to a myriad of use cases, such as displaying compatibility information in a text editor as the developer writes the code, or an audit tool that scans an entire codebase and flags compatibility issues that the developer might not be aware of.

The data is available as an npm package and you can get compatibility information for a given web feature like this:

const compat = require('mdn-browser-compat-data');
let textJustify = compat.css.properties['text-justify']; // returns a compat data object

It will return the data stored in the repository, in this case from the text-justify.json file. You can query the data set, for example to find out which Firefox version added support for text-justify:

textJustify.support.firefox; // returns "55"

The compat data structure is described by a JSON schema; as well as the version numbers, it also contains information about prefixes or flags that might need to be set for a feature to work.
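As a small illustration of what programmatic access makes possible, here is a Node.js sketch that scans the data set for CSS properties lacking Firefox support information. It assumes the simple object shape shown above (a support map keyed by browser name); the package’s exact schema may differ in detail:

const compat = require('mdn-browser-compat-data');

for (const [name, data] of Object.entries(compat.css.properties)) {
  // skip entries that don't follow the simple shape assumed here
  if (data.support && !data.support.firefox) {
    console.log(name + ': no Firefox support data');
  }
}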

The data is kept up-to-date and maintained by the MDN community and the browser vendors who, working together, formed the MDN Product Advisory Board last year. Board members agreed to drive updates to compatibility data for Edge, Chrome, and Samsung Internet after each stable release and to help review compatibility data PRs that are flagged for their browser.

Finding compatibility issues from DevTools

An interesting example of the possibilities created by this data is a project called compat-report, an extension made by developer Eduardo Boucas as a side project that adds a Compatibility panel to the browser developer tools. Inside, a table shows how any given site is expected to perform on each of the main browsers, showing each version where no compatibility issues are found in green, those with a few issues in yellow, and those with several problems in red. This gives developers an initial overview of how well their site stacks up against the different browsers on the market.

Screenshot of the compat-report extension, showing the Compatibility panel added to Firefox DevTools.

Clicking on a browser version will launch a more detailed report, highlighting lines with compatibility issues for each style sheet on the page. Where applicable, the extension will also suggest ways to address the problem, such as adding a vendor prefix, as shown in the image below.

This project is still in its early stages, but the scope is immense and the roadmap is ambitious. Currently it only analyzes CSS, but in the future it could also flag compatibility problems with HTML and JavaScript. It could even leverage the vast number of polyfills on MDN to not only flag compat issues but also offer ways to address them.

The extension itself is built using just HTML, CSS and JavaScript and its source is available on GitHub. If you like the idea and would like to help us take it further, we’d love for you to contribute.

Categories: Mozilla-nl planet

Daniel Glazman: LibreOffice and EPUB

Mozilla planet - Thu, 01/02/2018 - 10:31

LibreOffice 6.0 is now available. And it's through the inevitable Korben that I discovered this morning that it has a built-in EPUB export. So let's take a closer look at that new beast and evaluate how it deals with that painful task. Conformant EPUB? And which version of EPUB? Reusable XHTML and CSS? We'll see.

After installation (on a Mac), I created a new trivial text document; it contains a paragraph, a level 1 header, an image, a table, and an unordered list of three items. I did not touch fonts, styles, margins, etc. at all.

Trivial text document in LibreOffice 6.0

Then I discovered LibreOffice now has two new menu items: File > Export As... > Export directly as EPUB and File > Export As... > Export as EPUB... .

Export directly as EPUB

It directly opens a filepicker to select a destination *.epub file. Let's unzip the saved package and take a look at its guts:

  • the mimetype file is correctly placed as the first file in the package and it's correctly stored without compression (both rules can be verified with the sketch after this list)
  • other files are correctly stored using Deflate
  • the META-INF/container.xml is stored in last position in the zip, which is probably a mistake
  • the OPF file says it's an EPUB 3.0 package and its metadata are clean; AFAICT, the OPF file is conformant to the spec
  • XML and XHTML files in the package are serialized without carriage returns (except for one after the XML prolog) or indentation...
  • an NCX is present
  • the Navigation Document (called toc.xhtml) and the NCX live side by side in an OEBPS folder (sigh)
  • there is an empty OEBPS/styles/stylesheet.css file
  • the content files are in an OEBPS/sections folder
  • that folder contains 2 files (!) section001.xhtml and section002.xhtml
  • looking at these files, LibreOffice seems to have split the original document at section breaks, hence the two sections found in the EPUB package
  • there is no title element in these files
  • there is clearly a problem with exported CSS styles: the body of each generated document has no margins or padding, and since there is no CSS reset either...
  • the set of LibreOffice styles (the leftmost dropdown in the toolbar) is not exported to CSS; the whole export relies on inline styles (style attributes) rather than on classes
  • the original document uses the "Liberation Serif" font, which is not registered under that name in the OS X font book (an old issue well known in the OOXML world...). The final rendition in a browser is therefore buggy, font-wise. The font-family declarations in the document don't fall back to serif.
  • there is a very weird font-effect: outline property serialized on all paragraphs in table cells
  • strangely again, all these paragraphs have text-decoration: overline; text-shadow: 1px 1px 1px #666666; while the original text is neither overlined nor shadowed
  • when a paragraph (a p in terms of OOXML) contains one single run of text (a r in OOXML), the output could be optimized by getting rid of a span and adding its inline styles to the parent paragraph. The output is too verbose and will trigger issues in HTML editors, Wysiwyg or not.
  • the margin values in the document use a mix of inches and pixels, which is kind of weird
  • the image in the original document is lost in the EPUB package
  • headers are not generated as h1, h2, ... but as p elements with styles.
  • the EPUB version does not correctly deal with the unordered list, and all list items become regular paragraphs. No ol or ul, no bullet, no counter, no list-style-type. Semantics is lost.
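As referenced in the first bullet above, the two packaging rules (mimetype entry first, stored uncompressed) can be checked by reading the first zip local file header. A small Node.js sketch, with offsets taken from the zip specification:

const fs = require('fs');

const buf = fs.readFileSync(process.argv[2]); // path to an .epub file
// In a zip local file header: compression method at offset 8,
// file name length at offset 26, file name starting at offset 30.
const method = buf.readUInt16LE(8);           // 0 means "stored"
const nameLen = buf.readUInt16LE(26);
const name = buf.toString('ascii', 30, 30 + nameLen);
console.log(name === 'mimetype' && method === 0
  ? 'OK: mimetype is the first entry and is uncompressed'
  : 'not a conformant EPUB package');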

Firefox Quantum viewing the resulting section002.xhtml file. You can clearly see where the html+CSS export is buggy:

Firefox Quantum viewing the resulting section002.xhtml file

How iBooks sees that EPUB:

how iBooks sees that EPUB

Export as EPUB...

Aaah, that one is quite different since it first opens the following dialog:

Export as EPUB... dialog

The dialog offers the following choices:

  1. export as EPUB2 or EPUB3 (nice!)
  2. Split at Page Breaks or Headings (very nice feature but why not also a "Don't split" option?)

and validating the dialog goes to the aforementioned *.epub filepicker.

Conclusion

This is an excellent start, really, and splitting the document at headings or page breaks is an excellent idea. Unfortunately, there are too many holes in the XHTML+CSS export at this time to make it really usable unless your document contains only almost-unstyled paragraphs. Some generated styles (overline?!?) are not present in the original document, the export generates only paragraphs and tables, losing heading and list semantics, the LibreOffice styles are not serialized in a CSS stylesheet (bug?), and more. This will help some individuals, but I am not sure it will help EPUB publication chains, at least for now.

Update: Wow. I added an extra test: I compared the result of "Export to XHTML" with the XHTML inside an "Export to EPUB". In the former, styles are correctly exported as a stylesheet, classes are correctly used, h1 and ol/li are correctly used, the image is preserved, and the general rendering is MUCH better. So the Export to EPUB has one of the two following problems: either it reuses the "Export to XHTML" code and the splitting introduced a lot of bugs, or it has its own export-to-XHTML code, which would be a mistake since the existing one does quite a decent job...

Second update: LibreOffice's trunk does a significantly better job: the stylesheet is correctly generated, the XHTML files are correctly CR'd and indented, and images are preserved.

Categories: Mozilla-nl planet

Daniel Stenberg: Reducing 2038-problems in curl

Mozilla planet - Thu, 01/02/2018 - 09:56

tldr: we’ve made curl handle dates beyond 2038 better on systems with 32 bit longs.

libcurl is very portable and is built and used on virtually all current widely used operating systems that run on 32bit or larger architectures (and on a fair amount of not so widely used ones as well).

This offers some challenges. Keeping the code stellar and working on as many platforms as possible at the same time is hard work.

How long is a long?

The C variable type “long” has existed since the dawn of time and was already 32 bits big back in the days when most systems were 32-bit. With the introduction of 64 bit systems in the 1990s, something went wrong: while most operating systems went with 64 bit longs, some took the odd route and stuck with a 32 bit long… The Windows world even chose not to support “long long” for 64 bit types but instead insists on calling them “__int64”!

(Thankfully, ints have at least remained 32 bit!)

Two less clever API decisions

Back in the days when humans still lived in caves, we decided for the libcurl API to use ‘long’ for a whole range of function arguments. In hindsight, that was naive and not too bright. (I say “we” to make it less obvious that it of course is mostly me who’s to blame for this.)

Another less clever design idea was to use vararg functions to set (all) options. This is convenient in that we have one function to set a huge number of different options, but it is also quirky and error-prone, because when you pass a numeric expression in C it typically gets sent as an ‘int’ unless you tell it otherwise. So on systems where ints and longs have different sizes, this is destined to cause mistakes that, thanks to the use of varargs, the compiler can’t really help us detect! (We actually have a gcc-only hack that provides type-checking even for the varargs functions, but it is not portable.)

libcurl has both an option to pass time to libcurl using a long (CURLOPT_TIMEVALUE) and an option to extract a time from libcurl using a long (CURLINFO_FILETIME).

We stick to using our “not too bright” API for stability and compatibility. We deem it to be even more work and trouble for us and our users to change to another API rather than to work and live with the existing downsides.

Time may exist after 2038

There’s also this movement to transition the time_t variable type from 32 to 64 bits, time_t of course being the preferred type for C and C++ programs to store timestamps in. It holds the number of seconds since January 1st, 1970, a point in time sometimes called the unix epoch. Since 2^31 seconds is about 68.1 years, a signed 32 bit time_t can store timestamps with second accuracy from roughly 1901 to 2038. As more and more things start to refer to dates after 2038, this is of course becoming a problem. We need to move to 64 bit time_t all over.

We’re now less than 20 years away from the signed 32bit tip-over point: 03:14:07 UTC, 19 January 2038.

To complicate matters even more, there are odd systems out there with unsigned time_t variables. Such systems then cannot easily refer to dates before 1970, but can instead hold dates up to the year 2106 even with just 32 bits. Oh and there are some systems with 64 bit long that feature a 32 bit time_t, and 32 bit systems with 64 bit time_t!

Most modern systems today have 64 bit time_t – including win64, and 64 bit time_t can handle dates up to about year 292,471,210,647.

int – long – time_t
  1. We cannot move data between ints and longs in the code and assume it doesn’t overflow
  2. We can’t move data losslessly between ints and time_t
  3. We must not move data between long and time_t

Recently we’ve been working on making sure we live up to these three rules in libcurl. One could say it was about time! (pun intended)

In particular number (3) has required us to add new entry points to the API so that even 32 bit long systems can set/read 64 bit time. Starting in libcurl 7.59.0, applications can pass 64 bit times to libcurl with CURLOPT_TIMEVALUE_LARGE and extract 64 bit times with CURLINFO_FILETIME_T. For compatibility reasons, the old versions will of course be kept around but newer applications should really consider the new options.
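Here is a sketch of how an application might use the new entry points; both options take a 64-bit curl_off_t, so this works even where long is 32 bits. The URL is a placeholder and error handling is omitted for brevity:

#include <curl/curl.h>
#include <stdio.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  curl_off_t filetime = 0;

  curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/file.txt");
  /* only fetch if the remote file is newer than 2040-01-01 00:00:00 UTC;
     2208988800 overflows a signed 32 bit long, so the _LARGE option is
     required on such systems */
  curl_easy_setopt(curl, CURLOPT_TIMECONDITION, (long)CURL_TIMECOND_IFMODSINCE);
  curl_easy_setopt(curl, CURLOPT_TIMEVALUE_LARGE, (curl_off_t)2208988800);
  curl_easy_setopt(curl, CURLOPT_FILETIME, 1L); /* ask for the file time */

  if(curl_easy_perform(curl) == CURLE_OK) {
    /* read the remote file time back as a 64 bit value */
    curl_easy_getinfo(curl, CURLINFO_FILETIME_T, &filetime);
    printf("remote file time: %lld\n", (long long)filetime);
  }
  curl_easy_cleanup(curl);
  return 0;
}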

We also recently did an overhaul of our time and date parser (externally accessible as curl_getdate()), which we learned erroneously used a ‘long’ in its calculations, making it misbehave beyond 2038 on systems with 32 bit longs. This fix will also ship in 7.59.0 (planned release date: March 21, 2018).

If you find anything in curl that doesn’t deal with times after 2038 correctly, please file a bug!

Categories: Mozilla-nl planet

Mike Conley: Things I’ve Learned This Week (May 25 – May 29, 2015)

Thunderbird - Mon, 01/06/2015 - 07:49
MozReview will now create individual attachments for child commits

Up until recently, anytime you pushed a patch series to MozReview, a single attachment would be created on the bug associated with the push.

That single attachment would link to the “parent” or “root” review request, which contains the folded diff of all commits.

We noticed a lot of MozReview users were (rightfully) confused about this mapping from Bugzilla to MozReview. It was not at all obvious that Ship It on the parent review request would cause the attachment on Bugzilla to be r+’d. Consequently, reviewers used a number of workarounds, including, but not limited to:

  1. Manually setting the r+ or r- flags in Bugzilla for the MozReview attachments
  2. Marking Ship It on the child review requests, and letting the reviewee take care of setting the reviewer flags in the commit message
  3. Just writing “r+” in a MozReview comment

Anyhow, this model wasn’t great, and caused a lot of confusion.

So it’s changed! Now, when you push to MozReview, there’s one attachment created for every commit in the push. That means that when different reviewers are set for different commits, that’s reflected in the Bugzilla attachments, and when those reviewers mark “Ship It” on a child commit, that’s also reflected in an r+ on the associated Bugzilla attachment!

I think this makes quite a bit more sense. Hopefully you do too!

See gps’s blog post for the nitty gritty details, and some other cool MozReview announcements!

Categories: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-05-26 Calendar builds

Thunderbird - Wed, 27/05/2015 - 10:26

Common (excluding Website bugs)-specific: (23)

  • Fixed: 735253 – JavaScript Error: “TypeError: calendar is null” {file: “chrome://calendar/content/calendar-task-editing.js” line: 102}
  • Fixed: 768207 – Make the cache checkbox default-on in the new calendar dialog
  • Fixed: 1049591 – Fix lots of strict warnings
  • Fixed: 1086573 – Lightning and Thunderbird disagree about timezone support in ics files
  • Fixed: 1099592 – Make JS callers of ios.newChannel call ios.newChannel2 in calendar/
  • Fixed: 1149423 – Add Windows timezone names to list of aliases
  • Fixed: 1151011 – Calendar events show up on wrong day when printing
  • Fixed: 1151440 – Choose a color not responsive when creating a New calendar in Lightning 4.0b1
  • Fixed: 1153327 – Run compare-locales with merging for Lightning
  • Fixed: 1156015 – Email scheduling fails for recipients with URN id
  • Fixed: 1158036 – Support sendMailTo for URN type attendees
  • Fixed: 1159447 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_extract.js
  • Fixed: 1159638 – Getter fails in calender-migration-dialog on first run after installation
  • Fixed: 1159682 – Provide a more appropriate “learn more” page on integrated Lightning firstrun
  • Fixed: 1159698 – Opt-out dialog has a button for “disable”, but actually the addon is removed
  • Fixed: 1160728 – Unbreak Lightning 4.0b4 beta builds
  • Fixed: 1162300 – TEST-UNEXPECTED-FAIL | xpcshell-libical.ini:calendar/test/unit/test_alarm.js | xpcshell return code: 0
  • Fixed: 1163306 – Re-enable libical tests and disable ical.js in nightly builds when binary compatibility is back
  • Fixed: 1165002 – Lightning broken, tries to load libical backend although “calendar.icaljs” defaults to “true”
  • Fixed: 1165315 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug759324.js | xpcshell return code: 1 | ###!!! ASSERTION: Deprecated, use NewChannelFromURI2 providing loadInfo arguments!
  • Fixed: 1165497 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_alarmservice.js | xpcshell return code: -11
  • Fixed: 1165726 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/testBasicFunctionality.js | testBasicFunctionality.js::testSmokeTest
  • Fixed: 1165728 – TEST-UNEXPECTED-FAIL | xpcshell-icaljs.ini:calendar/test/unit/test_bug494140.js | xpcshell return code: -11

Sunbird will no longer be actively developed by the Calendar team.

Windows builds Official Windows

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categories: Mozilla-nl planet

Rumbling Edge - Thunderbird: 2015-05-26 Thunderbird comm-central builds

Thunderbird - Wed, 27/05/2015 - 10:25

Thunderbird-specific: (54)

  • Fixed: 401779 – Integrate Lightning Into Thunderbird by Default and Ship Thunderbird with Lightning Enabled
  • Fixed: 717292 – Spell check language setting for subject and body not synchronized, but temporarily appears so when changing language and depending on focus (confusing ux)
  • Fixed: 914225 – Support hotfix add-on in Thunderbird
  • Fixed: 1025547 – newmailaccount/jquery.tmpl.js, line 123: reference to undefined property def[1]
  • Fixed: 1088975 – Answering mail with sendername containing encoded special chars and comma creates two “To”-entries
  • Fixed: 1101237 – Remove distribution directory during install
  • Fixed: 1109178 – Thunderbird OAuth implementation does not work with Evernote
  • Fixed: 1110166 – Port |Bug 1102219 – Rename String.prototype.contains to String.prototype.includes| to comm-central
  • Fixed: 1113097 – Fix misuse of fixIterator
  • Fixed: 1130854 – Package Lightning with Thunderbird
  • Fixed: 1131997 – Adapt for Debugger Server code for changes in bug 1059308
  • Fixed: 1135291 – Update chat log entries added to Gloda since bug 955292 to use relative paths
  • Fixed: 1135588 – New conversations get indexed twice by gloda, leading to duplicate search results
  • Fixed: 1138154 – Plugins default to “always activate” in Thunderbird
  • Fixed: 1142879 – [meta] track Mozilla-central (Core) issues that we want to have fixed in TB38
  • Fixed: 1146698 – Chat Messages added to logs just before shutdown may not be indexed by gloda
  • Fixed: 1148330 – Font indicator doesn’t update when cursor is placed in text where core returns sans-serif (Windows). Serif and monospace don’t work (Linux).
  • Fixed: 1148512 – TEST-UNEXPECTED-FAIL | mailnews/imap/test/unit/test_dod.js | xpcshell return code: 0||1 | streamMessages – [streamMessages : 94] false == true | application crashed [@ mozalloc_abort(char const * const)]
  • Fixed: 1149059 – splitter in compose window can be resized down to completely obscure composition area
  • Fixed: 1151206 – Using a theme hides minimize, maximize and close button in composer window [Mac]
  • Fixed: 1151475 – Remove use of expression closures in mail/
  • Fixed: 1152299 – [autoconfig] Cosmetic changes for WEB.DE config
  • Fixed: 1152706 – Upgrade to Correspondents column (combined To/From column) too agressive
  • Fixed: 1152796 – chrome://messenger/content/folderDisplay.js, line 697: TypeError: this._savedColumnStates.correspondentCol is undefined
  • Fixed: 1152926 – New mail sound preview doesn’t work for default system sound on Mac OS X
  • Fixed: 1154737 – Permafail: TEST-UNEXPECTED-FAIL | toolkit/components/telemetry/tests/unit/test_TelemetryPing.js | xpcshell return code: 0
  • Fixed: 1154747 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/session-store/test-session-store.js | test-session-store.js::test_message_pane_height_persistence
  • Fixed: 1156669 – Trash folder duplication while using IMAP with localized TB
  • Fixed: 1157236 – In-content dialogs: Port bug 1043612, bug 1148923 and bug 1141031 to TB
  • Fixed: 1157649 – TEST-UNEXPECTED-FAIL | dom/push/test/xpcshell/test_clearAll_successful.js (and most other push tests)
  • Fixed: 1158824 – Port bug 138009 to fix packaging errors | Missing file(s): bin/defaults/autoconfig/platform.js
  • Fixed: 1159448 – Thunderbird ignores proxy settings on POP3S protocol
  • Fixed: 1159627 – resource:///modules/dbViewWrapper.js, line 560: SyntaxError: unreachable code after return statement
  • Fixed: 1159630 – components/glautocomp.js, line 155: SyntaxError: unreachable code after return statement
  • Fixed: 1159676 – mailnews/mime/jsmime/test/test_custom_headers.js | run_next_test 0 – TypeError: _gRunningTest is undefined at /builds/slave/test/build/tests/xpcshell/head.js:1435 (and other jsmime tests)
  • Fixed: 1159688 – After switching/changing the window layout, dragging the splitter between threadpane and messagepane can create gray/grey area/space (misplaced notificationbox)
  • Fixed: 1159815 – Take bug 1154791 “Inline spell checker loses red underlines after a backspace is used – take two” in Thunderbird 38
  • Fixed: 1159817 – Take “Bug 1100966 – Inline spell checker loses red underlines after a backspace is used” in Thunderbird 38
  • Fixed: 1159834 – Consider taking “Bug 756984 – Changing location in editor doesn’t preserve the font when returning to end of text/line” in Thunderbird 38
  • Fixed: 1159923 – Take bug 1140105 “Can’t query for a specific font face when the selection is collapsed” in TB 38
  • Fixed: 1160105 – Fix strict mode warnings in protovis-r2.6-modded.js
  • Fixed: 1160106 – “Searching…” spinner at the bottom of gloda search results never goes away
  • Fixed: 1160114 – Strict mode warnings on faceted search
  • Fixed: 1160805 – Missing Windows and Linux nightly builds, build step set props: previous_buildid fails
  • Fixed: 1161162 – “Join Chat” doesn’t focus the newly joined MUC
  • Fixed: 1162396 – Take bug 1140617 “Pasting an image loses the composition style” in TB38
  • Fixed: 1163086 – Take bug 967494 “changing spellcheck language in one composition window affects all open and new compositions” in TB38
  • Fixed: 1163299 – “TypeError: getBrowser(…) is null” in contentAreaClick with Lightning installed and started in calendar view
  • Fixed: 1163343 – Incorrectly formatted error message “sending failed”
  • Fixed: 1164415 – Error in comment for imapEnterServerPasswordPrompt
  • Fixed: 1164658 – TypeError: Cc[‘@mozilla.org/weave/service;1’] is undefined at resource://gre/modules/FxAccountsWebChannel.jsm:227
  • Fixed: 1164707 – missing toolkit_perfmonitoring.xpt in aurora builds
  • Fixed: 1165152 – Take bug 1154894 in TB 38 branch: Disable test_plugin_default_state.js so Thunderbird can ship with plugins disabled by default
  • Fixed: 1165320 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/tests/mozmill/notification/test-notification.js

MailNews Core-specific: (30)

  • Fixed: 610533 – crash [@ nsMsgDatabase::GetSearchResultsTable(char const*, int, nsIMdbTable**)] with virtual folder
  • Fixed: 745664 – Rename Address book aaa to aaa_test, delete another address book bbb, and renamed address book aaa_test will lose its name and appear deleted after restart (dataloss! involving localized names)
  • Fixed: 777770 – get rid of nsVoidArray from /mailnews
  • Fixed: 786141 – Use nsIFile.exists() instead of stat to check the existence of the file
  • Fixed: 1069790 – Email addresses with parenthesis are not pretty-printed anymore
  • Fixed: 1072611 – Ctrl+P not working from Composition’s Print Preview window
  • Fixed: 1099587 – Make JS callers of ios.newChannel call ios.newChannel2 in mail/ and mailnews/
  • Fixed: 1130248 – |To: “foo@example.com” <foo@example.com>| becomes |”foo@example.comfoo”@example.com| when I compose mail to it
  • Fixed: 1138220 – some headers are not not properly capitalized
  • Fixed: 1141446 – Behaviour of malformed rfc2047 encoded From message header inconsistent
  • Fixed: 1143569 – User-agent error when posting to NNTP due to RFC5536 violation of Tb (user-agent header is folded just after user-agent:, “user-agent:[CRLF][SP]Mozilla…”)
  • Fixed: 1144693 – Disable libnotify usage on Linux by default for new-mail notifications (doesn’t always work after bug 858919)
  • Fixed: 1149320 – fix compile warnings in mailnews/extensions/
  • Fixed: 1150891 – Port package-manifest.in changes from Bug 1115495 – Part 2: PAC generator for browsing and system wide proxy
  • Fixed: 1151782 – Inputting 29th Feb as a birthday in the addressbook contact replaces it with 1st Mar.
  • Fixed: 1152364 – crash in Address Book via nsAbBSDirectory::GetChildNodes nsCOMArrayEnumerator::operator new(unsigned int, nsCOMArray_base const&)
  • Fixed: 1152989 – Account Manager Extensions broken in Thunderbird 37/38
  • Fixed: 1154521 – jsmime fails on long references header and e-mail gets sent and stored in Sent without headers
  • Fixed: 1155491 – Support autoconfig and manual config of gmail IMAP OAuth2 authentication
  • Fixed: 1155952 – Nesting level does not match indentation
  • Fixed: 1156691 – GUI “Edit filters”: Conditions/actions (for specfic accounts) not visible
  • Fixed: 1156777 – nsParseMailbox.cpp:505:55: error: ‘do_QueryObject’ was not declared in this scope
  • Fixed: 1158501 – Port bug 1039866 (metro code removal) and bug 1085557 (addition of socorro symbol upload API)
  • Fixed: 1158751 – Port NO_JS_MANIFEST changes | mozbuild.frontend.reader.SandboxValidationError: calendar/base/backend/icaljs/moz.build
  • Fixed: 1159255 – Build error: MSVC_ENABLE_PGO = True is not permitted to be used in mailnews/intl/moz.build
  • Fixed: 1159626 – chrome://messenger/content/accountUtils.js, line 455: SyntaxError: unreachable code after return statement
  • Fixed: 1160647 – Port |Bug 1159972 – Remove the fallible version of PL_DHashTableInit()| to comm-central
  • Fixed: 1163347 – Don’t require scope in ispdb config for OAuth2
  • Fixed: 1165737 – Fix usage of NS_LITERAL_CSTRING in mailnews, port Bug 1155963 to comm-central
  • Fixed: 1166842 – Re-enable binary extensions for comm-central

Windows builds Official Windows, Official Windows installer

Linux builds Official Linux (i686), Official Linux (x86_64)

Mac builds Official Mac

Categories: Mozilla-nl planet

Andrew Sutherland: Talk Script: Firefox OS Email Performance Strategies

Thunderbird - Thu, 30/04/2015 - 22:11

Last week I gave a talk at the Philly Tech Week 2015 Dev Day organized by the delightful people at technical.ly on some of the tricks/strategies we use in the Firefox OS Gaia Email app.  Note that the credit for implementing most of these techniques goes to the owner of the Email app’s front-end, James Burke.  Also, a special shout-out to Vivien for the initial DOM Worker patches for the email app.

I tried to avoid having slides that I would be reading aloud as the audience read them silently, so instead of slides to share, I have the talk script.  Well, I also have the slides here, but there’s not much to them.  The headings below are the content of the slides, except for the one time I inline some code.  Note that the live presentation must have differed slightly, because I’m sure I’m much more witty and clever in person than this script would make it seem…

Cover Slide: Who!

Hi, my name is Andrew Sutherland.  I work at Mozilla on the Firefox OS Email Application.  I’m here to share some strategies we used to make our HTML5 app Seem faster and sometimes actually Be faster.

What’s A Firefox OS (Screenshot Slide)

But first: What is a Firefox OS?  It’s a multiprocess Firefox gecko engine on an android linux kernel where all the apps including the system UI are implemented using HTML5, CSS, and JavaScript.  All the apps use some combination of standard web APIs and APIs that we hope to standardize in some form.

Firefox OS homescreen, clock app, and email app screenshots.

Here are some screenshots.  We’ve got the default home screen app, the clock app, and of course, the email app.

It’s an entirely client-side offline email application, supporting IMAP4, POP3, and ActiveSync.  The goal, like all Firefox OS apps shipped with the phone, is to give native apps on other platforms a run for their money.

And that begins with starting up fast.

Fast Startup: The Problems

But that’s frequently easier said than done.  Slow-loading websites are still very much a thing.

The good news for the email application is that a slow network isn’t one of its problems.  It’s pre-loaded on the phone.  And even if it wasn’t, because of the security implications of the TCP Web API and the difficulty of explaining this risk to users in a way they won’t just click through, any TCP-using app needs to be a cryptographically signed zip file approved by a marketplace.  So we do load directly from flash.

However, it’s not like flash on cellphones is equivalent to an infinitely fast, zero-latency network connection.  And even if it was, in a naive app you’d still try and load all of your HTML, CSS, and JavaScript at the same time because the HTML file would reference them all.  And that adds up.

It adds up in the form of event loop activity and competition with other threads and processes.  With the exception of Promises which get their own micro-task queue fast-lane, the web execution model is the same as all other UI event loops; events get scheduled and then executed in the same order they are scheduled.  Loading data from an asynchronous API like IndexedDB means that your read result gets in line behind everything else that’s scheduled.  And in the case of the bulk of shipped Firefox OS devices, we only have a single processor core so the thread and process contention do come into play.

So we try not to be naive.

Seeming Fast at Startup: The HTML Cache

If we’re going to optimize startup, it’s good to start with what the user sees.  Once an account exists for the email app, at startup we display the default account’s inbox folder.

What is the least amount of work that we can do to show that?  Cache a screenshot of the Inbox.  The problem with that, of course, is that a static screenshot is indistinguishable from an unresponsive application.

So we did the next best thing, (which is) we cache the actual HTML we display.  At startup we load a minimal HTML file, our concatenated CSS, and just enough Javascript to figure out if we should use the HTML cache and then actually use it if appropriate.  It’s not always appropriate, like if our application is being triggered to display a compose UI or from a new mail notification that wants to show a specific message or a different folder.  But this is a decision we can make synchronously so it doesn’t slow us down.

Local Storage: Okay in small doses

We implement this by storing the HTML in localStorage.

Important Disclaimer!  LocalStorage is a bad API.  It’s a bad API because it’s synchronous.  You can read any value stored in it at any time, without waiting for a callback.  Which means if the data is not in memory the browser needs to block its event loop or spin a nested event loop until the data has been read from disk.  Browsers avoid this now by trying to preload the Entire contents of local storage for your origin into memory as soon as they know your page is being loaded.  And then they keep that information, ALL of it, in memory until your page is gone.

So if you store a megabyte of data in local storage, that’s a megabyte of data that needs to be loaded in its entirety before you can use any of it, and that hangs around in scarce phone memory.

To really make the point: do not use local storage, at least not directly.  Use a library like localForage that will use IndexedDB when available, and then fails over to WebSQLDatabase and local storage in that order.
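As a sketch of that advice: localForage keeps the familiar get/set vocabulary but is asynchronous, backed by IndexedDB when available. This assumes the localforage library has been loaded, and the key is made up:

localforage.getItem('html-cache').then(function (cachedHtml) {
  // runs later, once the read has had its turn on the event loop
  if (cachedHtml) {
    /* use it */
  }
});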

Now, having sufficiently warned you of the terrible evils of local storage, I can say with a sorta-clear conscience… there are upsides in this very specific case.

The synchronous nature of the API means that once we get our turn in the event loop we can act immediately.  There’s no waiting around for an IndexedDB read result to get its turn on the event loop.

This matters because although the concept of loading is simple from a User Experience perspective, there’s no standard to back it up right now.  Firefox OS’s UX desires are very straightforward.  When you tap on an app, we zoom it in.  Until the app is loaded we display the app’s icon in the center of the screen.  Unfortunately the standards are still assuming that the content is right there in the HTML.  This works well for document-based web pages or server-powered web apps where the contents of the page are baked in.  They work less well for client-only web apps where the content lives in a database and has to be dynamically retrieved.

The two events that exist are:

“DOMContentLoaded” fires when the document has been fully parsed and all scripts not tagged as “async” have run.  If there were stylesheets referenced prior to the script tags, the script tags will wait for the stylesheet loads.

“load” fires when the document has been fully loaded; stylesheets, images, everything.

But none of these have anything to do with the content in the page saying it’s actually done.  This matters because these standards also say nothing about IndexedDB reads or the like.  We tried to create a standards consensus around this, but it’s not there yet.  So Firefox OS just uses the “load” event to decide an app or page has finished loading and it can stop showing your app icon.  This largely avoids the dreaded “flash of unstyled content” problem, but it also means that your webpage or app needs to deal with this period of time by displaying a loading UI or just accepting a potentially awkward transient UI state.

(Trivial HTML slide)

<link rel="stylesheet" ...>
<script ...></script>
DOMContentLoaded!

This is the important summary of our index.html.

We reference our stylesheet first.  It includes all of our styles.  We never dynamically load stylesheets because that compels a style recalculation for all nodes and potentially a reflow.  We would have to have an awful lot of style declarations before considering that.

Then we have our single script file.  Because the stylesheet precedes the script, our script will not execute until the stylesheet has been loaded.  Then our script runs and we synchronously insert our HTML from local storage.  Then DOMContentLoaded can fire.  At this point the layout engine has enough information to perform a style recalculation and determine what CSS-referenced image resources need to be loaded for buttons and icons, then those load, and then we’re good to be displayed as the “load” event can fire.

After that, we’re displaying an interactive-ish HTML document.  You can scroll, you can press on buttons and the :active state will apply.  So things seem real.
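To make the mechanism concrete, here is a toy sketch of the cache-restore path described above; the storage key, element id, and “normal launch” check are invented for illustration and are not the actual Gaia email code:

// synchronous read; no event-loop round trip needed
var cachedHtml = localStorage.getItem('html-cache');
// only use the cache for a plain startup, not compose/notification entry
var normalLaunch = !window.location.hash;
if (cachedHtml && normalLaunch) {
  document.getElementById('cards').innerHTML = cachedHtml;
}
// the real logic then loads lazily and replaces this inert markup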

Being Fast: Lazy Loading and Optimized Layers

But now we need to try and get some logic in place as quickly as possible that will actually cash the checks that real-looking HTML UI is writing.  And the key to that is only loading what you need when you need it, and trying to get it to load as quickly as possible.

There are many module loading and build optimizing tools out there, and most frameworks have a preferred or required way of handling this.  We used the RequireJS family of Asynchronous Module Definition loaders, specifically the alameda loader and the r-dot-js optimizer.

One of the niceties of the loader plugin model is that we are able to express resource dependencies as well as code dependencies.

RequireJS Loader Plugins

var fooModule = require('./foo');
var htmlString = require('text!./foo.html');
var localizedDomNode = require('tmpl!./foo.html');

The standard Common JS loader semantics used by node.js and io.js are the first one you see here.  Load the module, return its exports.

But RequireJS loader plugins also allow us to do things like the second line where the exclamation point indicates that the load should occur using a loader plugin, which is itself a module that conforms to the loader plugin contract.  In this case it’s saying load the file foo.html as raw text and return it as a string.

But, wait, there’s more!  loader plugins can do more than that.  The third example uses a loader that loads the HTML file using the ‘text’ plugin under the hood, creates an HTML document fragment, and pre-localizes it using our localization library.  And this works un-optimized in a browser, no compilation step needed, but it can also be optimized.

So when our optimizer runs, it bundles up the core modules we use, plus, the modules for our “message list” card that displays the inbox.  And the message list card loads its HTML snippets using the template loader plugin.  The r-dot-js optimizer then locates these dependencies and the loader plugins also have optimizer logic that results in the HTML strings being inlined in the resulting optimized file.  So there’s just one single javascript file to load with no extra HTML file dependencies or other loads.
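For flavor, a minimal r.js build profile of the sort described, with hypothetical file names:

({
  baseUrl: 'js',
  name: 'cards/message_list',
  out: 'js/message_list-built.js',
  // loader plugins such as text! and tmpl! also run at build time,
  // inlining the referenced HTML snippets into the single output file
  inlineText: true
})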

We then also run the optimizer against our other important cards like the “compose” card and the “message reader” card.  We don’t do this for all cards because it can be hard to carve up the module dependency graph for optimization without starting to run into cases of overlap where many optimized files redundantly include files loaded by other optimized files.

Plus, we have another trick up our sleeve:

Seeming Fast: Preloading

Preloading.  Our cards optionally know the other cards they can load.  So once we display a card, we can kick off a preload of the cards that might potentially be displayed.  For example, the message list card can trigger the compose card and the message reader card, so we can trigger a preload of both of those.

But we don’t go overboard with preloading in the frontend because we still haven’t actually loaded the back-end that actually does all the emaily email stuff.  The back-end is also chopped up into optimized layers along account type lines and online/offline needs, but the main optimized JS file still weighs in at something like 17 thousand lines of code with newlines retained.

So once our UI logic is loaded, it’s time to kick-off loading the back-end.  And in order to avoid impacting the responsiveness of the UI both while it loads and when we’re doing steady-state processing, we run it in a DOM Worker.

Being Responsive: Workers and SharedWorkers

DOM Workers are background JS threads that lack access to the page’s DOM, communicating with their owning page via message passing with postMessage.  Normal workers are owned by a single page.  SharedWorkers can be accessed via multiple pages from the same document origin.
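The mechanics look like this; the worker file name and message format are made up for illustration:

// spawn the back-end on its own thread
var backend = new Worker('js/mail-backend.js');
backend.onmessage = function (evt) {
  console.log('back-end says:', evt.data);
};
// structured-cloneable messages are the only way across
backend.postMessage({ cmd: 'loadFolder', folder: 'INBOX' });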

By doing this, we stay out of the way of the main thread.  This is getting less important as browser engines support Asynchronous Panning & Zooming or “APZ” with hardware-accelerated composition, tile-based rendering, and all that good stuff.  (Some might even call it magic.)

When Firefox OS started, we didn’t have APZ, so any main-thread logic had the serious potential to result in janky scrolling and the impossibility of rendering at 60 frames per second.  It’s a lot easier to get 60 frames-per-second now, but even asynchronous pan and zoom potentially has to wait on dispatching an event to the main thread to figure out if the user’s tap is going to be consumed by app logic and preventDefault called on it.  APZ does this because it needs to know whether it should start scrolling or not.

And speaking of 60 frames-per-second…

Being Fast: Virtual List Widgets

…the heart of a mail application is the message list.  The expected UX is to be able to fling your way through the entire list of what the email app knows about and see the messages there, just like you would on a native app.

This is admittedly one of the areas where native apps have it easier.  There are usually list widgets that explicitly have a contract that says they request data on an as-needed basis.  They potentially even include data bindings so you can just point them at a data-store.

But HTML doesn’t yet have a concept of instantiate-on-demand for the DOM, although it’s being discussed by Firefox layout engine developers.  For app purposes, the DOM is a scene graph.  An extremely capable scene graph that can handle huge documents, but there are footguns and it’s arguably better to err on the side of fewer DOM nodes.

So what the email app does is we create a scroll-region div and explicitly size it based on the number of messages in the mail folder we’re displaying.  We create and render enough message summary nodes to cover the current screen, 3 screens worth of messages in the direction we’re scrolling, and then we also retain up to 3 screens worth in the direction we scrolled from.  We also pre-fetch 2 more screens worth of messages from the database.  These constants were arrived at experimentally on prototype devices.

We listen to “scroll” events and issue database requests and move DOM nodes around and update them as the user scrolls.  For any potentially jarring or expensive transitions such as coordinate space changes from new messages being added above the current scroll position, we wait for scrolling to stop.

Nodes are absolutely positioned within the scroll area using their ‘top’ style but translation transforms also work.  We remove nodes from the DOM, then update their position and their state before re-appending them.  We do this because the browser APZ logic tries to be clever and figure out how to create an efficient series of layers so that it can pre-paint as much of the DOM as possible in graphic buffers, AKA layers, that can be efficiently composited by the GPU.  Its goal is that when the user is scrolling, or something is being animated, that it can just move the layers around the screen or adjust their opacity or other transforms without having to ask the layout engine to re-render portions of the DOM.

When our message elements are added to the DOM with an already-initialized absolute position, the APZ logic lumps them together as something it can paint in a single layer along with the other elements in the scrolling region.  But if we start moving them around while they’re still in the DOM, the layerization logic decides that they might want to independently move around more in the future and so each message item ends up in its own layer.  This slows things down.  But by removing them and re-adding them it sees them as new with static positions and decides that it can lump them all together in a single layer.  Really, we could just create new DOM nodes, but we produce slightly less garbage this way and in the event there’s a bug, it’s nicer to mess up with 30 DOM nodes displayed incorrectly rather than 3 million.
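In code, the recycling dance looks roughly like this sketch; recycleMessageNode and renderMessageSummary are hypothetical names, not the actual Gaia email helpers:

function recycleMessageNode(scrollArea, node, message, topPx) {
  scrollArea.removeChild(node);        // detach first, so APZ treats it as new
  node.style.top = topPx + 'px';       // reposition while out of the DOM
  renderMessageSummary(node, message); // update contents (hypothetical helper)
  scrollArea.appendChild(node);        // re-append; stays in the shared layer
}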

But as neat as the layerization stuff is to know about on its own, I really mention it to underscore 2 suggestions:

1, Use a library when possible.  Getting on and staying on APZ fast-paths is not trivial, especially across browser engines.  So it’s a very good idea to use a library rather than rolling your own.

2, Use developer tools.  APZ is tricky to reason about and even the developers who write the Async pan & zoom logic can be surprised by what happens in complex real-world situations.  And there ARE developer tools available that help you avoid needing to reason about this.  Firefox OS has easy on-device developer tools that can help diagnose what’s going on or at least help tell you whether you’re making things faster or slower:

– it’s got a frames-per-second overlay; you do need to scroll like mad to get the system to want to render 60 frames-per-second, but it makes it clear what the net result is

– it has paint flashing that overlays random colors every time it paints the DOM into a layer.  If the screen is flashing like a discotheque or has a lot of smeared rainbows, you know something’s wrong because the APZ logic is not able to just reuse its layers.

– devtools can enable drawing cool colored borders around the layers APZ has created so you can see if layerization is doing something crazy

There are also fancier and more complicated tools in Firefox and other browsers like Google Chrome that let you see what got painted, what the layer tree looks like, et cetera.

And that’s my spiel.

Links

The source code to Gaia can be found at https://github.com/mozilla-b2g/gaia

The email app in particular can be found at https://github.com/mozilla-b2g/gaia/tree/master/apps/email

(I also asked for questions here.)

Categories: Mozilla-nl planet

Thunderbird Blog: Thunderbird 38 goes to beta!

Thunderbird - Fri, 03/04/2015 - 11:13

The next major release of Thunderbird, version 38, is now in beta and available for testing. You may download Thunderbird 38.0b1 here.

This version of Thunderbird is the first that is mostly managed by volunteer community members rather than by Mozilla staff. We have many new features, including:

  • Message filtering when a message is sent or archived
  • File-per-message local storage available for new accounts (maildir)
  • Contact search over multiple address books
  • Internationalized domain names for RSS feeds
  • Optional expanded columns in the folder pane for folder size and counts

Release notes are available here.

There are still a couple of features missing from this beta that we hope to ship in the final version of Thunderbird 38. Those are:

  • Ship Lightning calendar addon with Thunderbird with an opt-out dialog
  • Use OAUTH authentication with Gmail IMAP accounts

Categories: Mozilla-nl planet

Joshua Cranmer: Breaking news

Thunderbird - Wed, 01/04/2015 - 09:00
It was brought to my attention recently by reputable sources that the recent announcement of increased usage in recent years produced an internal firestorm within Mozilla. Key figures raised alarm that some of the tech press had interpreted the blog post as a sign that Thunderbird was not, in fact, dead. As a result, they asked Thunderbird community members to make corrections to emphasize that Mozilla was trying to kill Thunderbird.

The primary fear, it seems, is that knowledge that the largest open-source email client was still receiving regular updates would impel its userbase to agitate for increased funding and maintenance of the client to help forestall potential threats to the open nature of email as well as to innovate in the space of providing usable and private communication channels. Such funding, however, would be an unaffordable luxury and would only distract Mozilla from its central goal of building developer productivity tooling. Persistent rumors that Mozilla would be willing to fund Thunderbird were it renamed Firefox Email were finally addressed with the comment, "such a renaming would violate our current policy that all projects be named Persona."

Categorieën: Mozilla-nl planet

Thunderbird Blog: Thunderbird Usage Continues to Grow

Thunderbird - vr, 27/02/2015 - 23:44

We’re happy to report that Thunderbird usage continues to expand.

Mozilla measures program usage by Active Daily Installations (ADI), which is the number of pings that Mozilla servers receive as installations do their daily plugin block-list update. This is not the same as the number of active users, since some users don’t access their program each day, and some installations are behind firewalls. An estimate of active monthly users is typically done by multiplying the ADI by a factor of 3.
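(As a worked example using the numbers below: the February 24 total ADI of 9,255,280 would put active monthly users at roughly 9.26 million × 3 ≈ 27.8 million.)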

To plot changes in Thunderbird usage over time, I’ve picked the peak ADI for each month for the last few years. Here’s the result:

Thunderbird Active Daily Installations, peak value per month.

Germany has long been our #1 country for usage, but in the fourth quarter of 2014, Japan exceeded the United States to become our #2 country. Here are the top 10 countries, taken from the ADI count of February 24, 2015:

Rank  Country              ADI 2015-02-24
   1  Germany                   1,711,834
   2  Japan                     1,002,877
   3  United States               927,477
   4  France                      777,478
   5  Italy                       514,771
   6  Russian Federation          494,645
   7  Poland                      480,496
   8  Spain                       282,008
   9  Brazil                      265,820
  10  United Kingdom              254,381
      All Others                2,543,493
      Total                     9,255,280

Country Rankings for Thunderbird Usage, February 24, 2015

The Thunderbird team is now working hard preparing our next major release, which will be Thunderbird 38 in May 2015. We’ll be blogging more about that release in the next few weeks, including reporting on the many new features that we have added.

Categorieën: Mozilla-nl planet

Joshua Cranmer: Why email is hard, part 8: why email security failed

Thunderbird - di, 13/01/2015 - 05:38
This post is part 8 of an intermittent series exploring the difficulties of writing an email client. Part 1 describes a brief history of the infrastructure. Part 2 discusses internationalization. Part 3 discusses MIME. Part 4 discusses email addresses. Part 5 discusses the more general problem of email headers. Part 6 discusses how email security works in practice. Part 7 discusses the problem of trust. This part discusses why email security has largely failed.

At the end of the last part in this series, I posed the question, "Which email security protocol is most popular?" The answer to the question is actually neither S/MIME nor PGP, but a third protocol, DKIM. I haven't brought up DKIM until now because DKIM doesn't try to secure email in the same vein as S/MIME or PGP, but I still consider it relevant to discussing email security.

Unquestionably, DKIM is the only security protocol for email that can be considered successful. There are perhaps 4 billion active email addresses [1]. Of these, about 1-2 billion use DKIM. In contrast, S/MIME can count a few million users, and PGP at best a few hundred thousand. No other security protocols have really caught on past these three. Why did DKIM succeed where the others failed?

DKIM's success stems from its relatively narrow focus. It is nothing more than a cryptographic signature of the message body and a smattering of headers, and is itself stuck in the DKIM-Signature header. It is meant to be applied to messages only on outgoing servers and read and processed at the recipient mail server—it completely bypasses clients. That it bypasses clients allows it to solve the problem of key discovery and key management very easily (public keys are stored in DNS, which is already a key part of mail delivery), and its role in spam filtering is strong motivation to get it implemented quickly (it is 7 years old as of this writing). It's also simple: this one paragraph description is basically all you need to know [2].
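To make that concrete, here is a rough sketch of what a DKIM signature and its DNS-hosted key look like; the domain, selector, and base64 blobs are placeholders rather than real values:

    DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=example.com;
        s=sel2015; h=from:to:subject:date;
        bh=(base64 hash of the message body);
        b=(base64 signature over the listed headers)

    ; the receiving server fetches the public key from DNS, e.g. with
    ;   dig TXT sel2015._domainkey.example.com
    sel2015._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=(base64 public key)"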

The failure of S/MIME and PGP to see large deployment is certainly a frequent topic of discussion on myriad cryptography-enthusiast mailing lists, which often like to propose new end-to-end email encryption paradigms, such as the recent DIME proposal. Quite frankly, all of these solutions suffer broadly from at least the same 5 fundamental weaknesses, and I consider it unlikely that a protocol will come about that fixes these weaknesses well enough to become successful.

The first weakness, and one I've harped on many times already, is UI. Most email security UI is abysmal and generally at best usable only by enthusiasts. At least some of this is endemic to security: while it may seem obvious how to convey what an email signature or an encrypted email signifies, how do you convey the distinctions between sign-and-encrypt, encrypt-and-sign, or an S/MIME triple wrap? The Web of Trust model used by PGP (and many other proposals) is even worse, in that it inherently requires users to take actions out-of-band of email for it to work properly.

Trust is the second weakness. Consider that, for all intents and purposes, the email address is the unique identifier on the Internet. By extension, that implies that a lot of services are ultimately predicated on the notion that the ability to receive and respond to an email is a sufficient means to identify an individual. However, the entire purpose of secure email, or at least of end-to-end encryption, is subtly premised on the notion that other people do in fact have access to your mailbox, thus destroying the most natural ways to build trust models on the Internet. The quest for anonymity or privacy also renders untenable many other plausible ways to establish trust (e.g., phone verification or government-issued ID cards).

Key discovery is another weakness, although it's arguably the easiest one to solve. If you keep discovery independent of trust, the problem of key discovery is merely a matter of picking one protocol for publishing keys and another for finding them. Some of these already exist: PGP key servers, for example, or using DANE to publish S/MIME or PGP keys.
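In practice, that discovery step can be a one-liner; for instance, with an era-appropriate PGP key server (an illustrative command, with a made-up address):

    gpg --keyserver hkp://pool.sks-keyservers.net --search-keys alice@example.com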

Key management, on the other hand, is a more troubling weakness. S/MIME, for example, basically works without issue if you have a certificate, but managing to get an S/MIME certificate is a daunting task (necessitated, in part, by its trust model—see how these issues all intertwine?). This is also where it's easy to say that webmail is an unsolvable problem, but on further reflection, I'm not sure I agree with that statement anymore. One solution is just storing the private key with the webmail provider (you're trusting them as an email client, after all), but it's also not impossible to imagine using phones or flash drives as keystores. Other key management factors are more difficult to solve: people who lose their private keys or key rollover create thorny issues. There is also the difficulty of managing user expectations: if I forget my password to most sites (even my email provider), I can usually get it reset somehow, but when a private key is lost, the user is totally and completely out of luck.

Of course, there is one glaring and almost completely insurmountable problem. Encrypted email fundamentally precludes certain features that we have come to take for granted. The lesser-known of these is server-side search and filtering. While there exist some mechanisms for searching encrypted text, they rely on being able to manipulate the text to change the message, which destroys the integrity feature of secure email. They also tend to be fairly expensive. It's easy to just say "who needs server-side stuff?", but the contingent of people who do email on smartphones would not be happy to pay the transfer rates to download all the messages in a folder just to find one little email, nor the energy costs of doing it on the phone. And those who have really large folders—Fastmail has a design point of 1,000,000 messages in a single folder—would still prefer not to have to transfer all their mail even on desktops.

The better-known feature that would disappear is spam filtering. Consider that 90% of all email is spam, and if you think your spam folder is too slim for that to be true, it's because your spam folder only contains messages that your email provider wasn't sure were spam. The loss of server-side spam filtering would dramatically increase the cost of spam: per my calculations, if 90% of mail is spam, then a filter that lets just 10% more of that spam through roughly doubles the amount of mail a server has to store. Client-side spam filtering is quite literally too slow [3] and too costly (remember smartphones? Imagine your email taking 10 times as much energy and bandwidth) to be a tenable option. And privacy or anonymity tends to be an invitation to abuse (cf. Tor and Wikipedia). Proposed solutions to the spam problem are so common that there is a checklist containing most of the objections.

When you consider all of those weaknesses, it is easy to be pessimistic about the possibility of wide deployment of powerful email security solutions. The strongest future—all email is encrypted, including metadata—is probably impossible or at least woefully impractical. That said, if you weaken some of the assumptions (say, don't desire all or most traffic to be encrypted), then solutions seem possible if difficult.

This concludes my discussion of email security, at least until things change for the better. I don't have a topic for the next part in this series picked out (this part actually concludes the set I knew I wanted to discuss when I started), although OAuth and DMARC are two topics that have been bugging me enough recently to consider writing about. They also have the unfortunate side effect of being things likely to see changes in the near future, unlike most of the topics I've discussed so far. But rest assured that I will find more difficulties in the email infrastructure to write about before long!

[1] All of these numbers are crude estimates and are accurate to only an order of magnitude. To justify my choices: I assume 1 email address per Internet user (this overestimates the developing world and underestimates the developed world). The largest webmail providers have given numbers that claim to be 1 billion active accounts between them, and all of them use DKIM. S/MIME is guessed by assuming that any smartcard deployment supports S/MIME, and noting that the US Department of Defense and Estonia's digital ID project are both heavy users of such smartcards. PGP is estimated from the size of the strong set and old numbers on the reachable set from the core Web of Trust.
[2] Ever since last April, it's become impossible to mention DKIM without referring to DMARC, as a result of Yahoo's controversial DMARC policy. A proper discussion of DMARC (and why what Yahoo did was controversial) requires explaining the mail transmission architecture and spam, however, so I'll defer that to a later post. It's also possible that changes in this space could happen within the next year.
[3] According to a former member of Gmail's spam team, if it takes you as long as three minutes to calculate reputation, the spammer wins.

Categorieën: Mozilla-nl planet
