Back in March, we posted that we had started producing nightly builds from mozilla-central/comm-central, but because the version of CentOS we had been using was too old, we were unable to continue providing Linux nightly builds. That has now changed, and (as of today) we have both 32-bit and 64-bit Linux nightlies! Since this involved installing a new operating system (CentOS 6.2) and tweaking some of the build configuration for Linux, please let us know if you see any issues! Additionally, some more up-to-date features that have been available in Mozilla Firefox for a while should now be available in Instantbird (e.g. D-Bus and PulseAudio support), and even some minor bugs were fixed!
Sorry that this took so long, but go grab your updated copy now!
I am happy to announce that Lightning 3.3, a new major release, is out of the door. Here are a few release highlights:
- Various components have been made asynchronous, allowing for better perceived performance. This means less hanging when Lightning is busy.
- Improved invitation processing, as well as a few new features:
- Restrict sending invitations to newly added attendees
- Send one invitation email per attendee, not disclosing other attendees
- Consider default BCC and CC of configured email identity when sending invitations
- More actions when viewing invitations, e.g. tentative accept, accepting only occurrences.
- When accessing Google Calendar via CalDAV, the authentication dialog doesn’t constantly reappear.
There have also been a lot of changes in the backend that are not visible to the user. This includes better testing framework support, which will help avoid regressions in the future. A total of 103 bugs have been fixed since Lightning 2.6.
When installing or updating to Thunderbird 31, you should automatically receive the upgrade to Lightning 3.3. If something goes wrong, you can get the new versions here:
Should you be using SeaMonkey, you will have to wait for the 2.28 release, which is postponed as per this thread.
If you encounter any major issues, please comment on this blog post. Support issues are handled on support.mozilla.org. Feature requests and bug reports can be made on bugzilla.mozilla.org in the product Calendar. Be sure to search for existing bugs before you file them.

Addons Update:
There are a number of addons that have compatibility issues with Lightning 3.3. The authors have been notified and a few first fixes are available:
- Calendar Tweaks: This addon causes constant flashing and makes Lightning unusable. The author has been notified and has released version 6.0, which should fix all issues. Please notify him in case there is more trouble.
- Thunderbird Conversations: There is an issue with the Lightning invitations plugin. It has been fixed in one of the nightly builds and will be available as a release soon.
- There was an issue with addons.mozilla.org that caused all Lightning downloads to fail. It has since been fixed.
What will it take to keep Thunderbird stable and vibrant? Although there is a dedicated, hard-working team of volunteers trying to keep Thunderbird alive, there has been very little progress on improvements since Mozilla drastically reduced their funding. I’ve been an advocate for some time that Thunderbird needs income to fulfill its potential, and that the best way to generate that income would be to appeal directly to its users for donations.
One internet organization that has done this successfully has been Wikipedia. How much income could Thunderbird generate if they received the same income per user as Wikipedia? Surely our users, who rely on Thunderbird for critical daily communications, are at least as willing to donate as Wikipedia users.
Estimates of income from Wikipedia’s annual fundraising drive are around $20,000,000 per year. Wikipedia recently reported 11,824 million pageviews per month and about 5 pageviews per user; spreading that over a month gives a daily user count of roughly 78 million. Thunderbird, by contrast, has about 6 million daily users (estimated from daily update-check hits), or about 8% of Wikipedia’s daily users.
If Thunderbird were willing to directly engage users asking for donations, at the same rate per user as Wikipedia, there is a potential to raise $1,600,000 per year. That would certainly be enough income to maintain a serious team to move forward.
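The estimate above is easy to check with a back-of-envelope calculation. This sketch uses only the figures stated in the post (the 30-days-per-month divisor is an assumption on my part):

```python
# Back-of-envelope check of the fundraising estimate, using the
# figures from the post itself.
WIKIPEDIA_ANNUAL_DONATIONS = 20_000_000    # USD per year
WIKIPEDIA_PAGEVIEWS_PER_MONTH = 11_824e6   # pageviews per month
PAGEVIEWS_PER_USER = 5
DAYS_PER_MONTH = 30                        # assumed

# Monthly pageviews -> monthly user visits -> daily users
wikipedia_daily_users = (WIKIPEDIA_PAGEVIEWS_PER_MONTH
                         / PAGEVIEWS_PER_USER
                         / DAYS_PER_MONTH)  # roughly 78-79 million

THUNDERBIRD_DAILY_USERS = 6_000_000

# Donation income per daily user, applied to Thunderbird's user base
donation_per_user = WIKIPEDIA_ANNUAL_DONATIONS / wikipedia_daily_users
thunderbird_potential = donation_per_user * THUNDERBIRD_DAILY_USERS

print(f"Wikipedia daily users: ~{wikipedia_daily_users / 1e6:.1f}M")
print(f"Thunderbird potential: ~${thunderbird_potential:,.0f}/year")
```

Working from the exact figures this gives roughly $1.5M per year; rounding the user ratio up to 8%, as the post does, yields the $1.6M headline number.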
Wikipedia’s donation requests were fairly intrusive, with large banners at the top of all Wikipedia pages. When Firefox did a direct appeal to users early this year, the appeal was very subtle (did you even notice it?). I tried to scale the Firefox results to Thunderbird, and estimated that a similar subtle appeal might raise $50,000 – $100,000 per year for Thunderbird. That is not sufficient to make a significant impact. We would have to be willing to be a little intrusive, like Wikipedia, if we are going to be successful. This will generate pushback, as Wikipedia’s campaign has, so we would have to be willing to live with the pushback.
But is it really in the best interest of our users to spare them an annual, slightly intrusive appeal for donations, while letting the product that they depend on each day slowly wither away? I believe that if we truly care about our users, we will take the necessary steps to ensure that we give them the best product possible, including undertaking fundraising to keep the product stable and vibrant.
For the first time in a while, the Thunderbird build tree is all green. That means that all platforms are building, and passing all tests:
Many thanks to Joshua Cranmer for all of his hard work to make it so!
We just released the second beta of Thunderbird 31. Please help us improve Thunderbird quality by uncovering bugs now in Thunderbird 31 beta so that developers have time to fix them.
There are two ways you can help:
- Use Thunderbird 31 beta in your daily activities. For problems that you find, file a bug report that blocks our tracking bug 1008543.
- Use Thunderbird 31 beta to do formal testing. Use the Moztrap testing system to run tests: choose “Run Tests”, find the Thunderbird product, and choose the 31 test run.
Visit https://etherpad.mozilla.org/tbird31testing for additional information, and to post your testing questions and results.
Thanks for contributing and helping!
Ludo for the QA team
Common (excluding Website bugs)-specific: (2)
- Fixed: 782670 – Allow to save items to calendar, even though they are invitations (override REQUEST with PUBLISH)
- Fixed: 1026382 – TB 31 beta 1 – Scrollbar partially hidden behind Lightnings Todays pane
Sunbird will no longer be actively developed by the Calendar team.
- Fixed: 788137 – Nick for XMPP chatrooms becomes ‘null’
- Fixed: 827048 – Filtered messages lost (POP) going to imap or maildir folder (with mail.serverDefaultStoreContractID = @mozilla.org/msgstore/maildirstore;1)
- Fixed: 920118 – Make “visited link” coloring work in thunderbird (enable and use Places for history)
- Fixed: 942005 – [meta] Attempt to more closely mimic the new Australis Fx theme.
- Fixed: 1008822 – Forwarding RSS post as inline does not use default account identity
- Fixed: 1010140 – libffi dep builds fail to build for comm-central due to “IndexError: string index out of range” pymake error
- Fixed: 1011616 – Join Chat dialog box won’t close with Auto-join this Chat Room checked
- Fixed: 1022222 – Free space in account central different under email and accounts
- Fixed: 1022340 – Port bug 885139 – LWT header image applied to the selected tab isn’t updated upon re-cropping
- Fixed: 1025373 – TypeError: gCurrentFilterList is undefined when opening filter list with no selected folder
- Fixed: 1025486 – No tab overflow arrows shown on overflow
- Fixed: 1026629 – Non-existent priority level PRIORITY_MEDIUM_HIGH requested for notification
- Fixed: 1030019 – Port |Bug 598615 – HAVE_64BIT_OS changed to HAVE_64BIT_BUILD| to Thunderbird
- Fixed: 1033568 – TEST-UNEXPECTED-FAIL | /builds/slave/test/build/mozmill/message-header/test-header-toolbar.js | test-header-toolbar.js::test_customize_header_toolbar_reorder_buttons
MailNews Core-specific: (17)
- Fixed: 1017939 – newly created maildir subfolders are created under INBOX instead of INBOX.sbd
- Fixed: 1018300 – Modify test_pop3MoveFilter.js to use Promises.
- Fixed: 1018543 – Modify test_pop3FilterActions.js to use Promises
- Fixed: 1018589 – Can’t add RSS feed with Cyrillic URL -> support idn urls for feeds
- Fixed: 1022723 – Some headers are sent prefixed with “********:” when they were folded due to their length
- Fixed: 1023700 – Modify test_pop3MoveFilter2.js to use Promises.
- Fixed: 1027241 – Import nsICMS* from Gecko to MailNews Core to fix bustage caused by their removal from Gecko
- Fixed: 1028217 – Update CollectReports() to fix c-c bustage caused by Bug 1010064
- Fixed: 1028548 – Fix nsMsgMaildirStore::RenameFolder() and make test_nsIMsgLocalMailFolder.js pass with maildir mailbox storage format
- Fixed: 1028997 – Assertion failure in nsMsgAttachmentHandler.h
- Fixed: 1030116 – Fix test_pop3MoveFilter2.js to work with maildir.
- Fixed: 1030291 – Fix test_pop3FilterActions.js to work with maildir.
- Fixed: 1031291 – Prepare comm-central to m-c changes from bug 762358
- Fixed: 1031703 – Fix test_msgIDParsing.js to work with maildir.
- Fixed: 1035086 – Enable MOZ_PSEUDO_DERECURSE on c-c builds
- Fixed: 1035096 – Avoid much mess by not letting mozilla subconfigure read mozconfig
- Fixed: 1035590 – Don’t share python virtualenv between c-c and m-c.
I think in order to truly understand what the DocShell currently is, we have to find out where the idea of creating it came from. That means going way, way back to its inception, and figuring out what its original purpose was.
So I’ve gone back, peered through various archived wiki pages, newsgroup and mailing list posts, and I think I’ve figured out that original purpose.1
The original purpose can be, I believe, summed up in a single word: embedding.

Embedding

Back in the late 90’s, sometime after the Mozilla codebase was open-sourced, it became clear to some folks that the web was “going places”. It was the “bee’s knees”. It was the “cat’s pajamas”. As such, it was likely that more and more desktop applications were going to need to be able to access and render web content.
The thing is, accessing and rendering web content is hard. Really hard. One does not simply write web browsing capabilities into their application from scratch hoping for the best. Heartbreak is in that direction.
Instead, the idea was that pre-existing web engines could be embedded into other applications. For example, Steam, Valve’s game distribution platform, displays a ton of web content in its user interface. All of those Steam store pages? Those are web pages! They’re using an embedded web engine in order to display that stuff.2
So making Gecko easily embeddable was, at the time, a real goal, and a real project.

nsWebShell

The problem was that embedding Gecko was painful. The top-level component that embedders needed to instantiate and communicate with was called “nsWebShell”, and it was pretty unwieldy. Lots of knowledge about the internal workings of Gecko leaked through the nsWebShell component, and its interface changed far too often.
It was also inefficient – the nsWebShell didn’t just represent the top-level “thing that loads web content”. Instances of nsWebShell were also used recursively for subdocuments within those documents – for example, (i)frames within a webpage. These nested nsWebShells formed a tree. That’s all well and good, except for the fact that there were things that the nsWebShell loaded or did that only the top-level nsWebShell really needed to load or do. So there was definitely room for some performance improvement.

In order to correct all of these issues, a plan was concocted to retire nsWebShell in favour of several new components and a slew of new interfaces. Two of those new components were nsDocShell and nsWebBrowser.

nsWebBrowser
nsWebBrowser would be the thing that embedders would drop into the applications – it would be the browser, and would do all of the loading / doing of things that only the top-level web browser needed to do.
The interface for nsWebBrowser would be minimal, just exposing enough so that an embedder could drop one into their application with little fuss, point it at a URL, set up some listeners, and watch it dance.

nsDocShell
nsDocShell would be… well, everything else that nsWebBrowser wasn’t. So that dumping ground that was nsWebShell would get dumped into nsDocShell instead. However, a number of new, logically separated interfaces would be created for nsDocShell.
Examples of those interfaces were:
So instead of a gigantic, ever changing interface, you had lots of smaller interfaces, many of which could eventually be frozen over time (which is good for embedders).
These interfaces also made it possible to shield embedders from various internals of the nsDocShell component that embedders shouldn’t have to worry about.

Ok, but… what was it?

But I still haven’t answered the question – what was the DocShell at this point? What was it supposed to do now that it was created?
This ancient wiki page spells it out nicely:
This class is responsible for initiating the loading and viewing of a document.
Basically, any time a document is to be viewed, a DocShell needs to be created to view it. We create the DocShell, and then we point that DocShell at the URL, and it does the job of kicking off communications via the network layer, and dealing with the content once it comes back.
So it’s no wonder that it was (and still is!) a dumping ground – when it comes to loading and displaying content, nsDocShell is the central nexus point of communications for all components that make that stuff happen.
I believe that was the original purpose of nsDocShell, anyhow.

And why “shell”?
This is a simple question that has been on my mind since I started this. What does the “shell” mean in nsDocShell?
Y’know, I think it’s actually a fragment left over from the embedding work, and that it really has no meaning anymore. Originally, nsWebShell was the separation point between an embedder and the Gecko web engine – so I think I can understand what “shell” means in that context – it’s the touch-point between the embedder, and the embedee.
I think nsDocShell was given the “shell” moniker because it did the job of taking over most of nsWebShell’s duties. However, since nsWebBrowser was now the touch-point between the embedder and embedee… maybe shell makes less sense. I wonder if we missed an opportunity to name nsDocShell something better.
In some ways, “shell” might make some sense because it is the separation between various documents (the root document, any sibling documents, and child documents)… but I think that’s a bit of a stretch.
But now I’m racking my brain for a better name (even though a rename is certainly not worth it at this point), and I can’t think of one.
What would you rename it, if you had the chance?

What is nsDocShell doing now?
I’m not sure what’s happened to nsDocShell over the years, and that’s the point of the next few posts in this series. I’m going to be going through the commits hitting nsDocShell from 1999 until the present day to see how nsDocShell has changed and evolved.
Hold on to your butts.

Further reading
The above was gleaned from the following sources:
I’m very much prepared to be wrong about any / all of this. I’m making assertions and drawing conclusions by reading and interpreting things that other people have written about DocShell – and if the telephone game is any indication, this indirect analysis can be lossy. If I have misinterpreted, misunderstood, or completely missed the point in any of the above, please don’t hesitate to comment, and I will correct it forthwith. ↩
They happen to be using WebKit, the same web engine that powers Safari, and (until recently) Chromium. According to this, they’re using the Chromium Embedding Framework to display this web content. There are a number of applications that embed Gecko. Firefox is the primary consumer of Gecko. Thunderbird is another obvious one – when you display HTML email, it’s using the Gecko web engine in order to lay it out and display it. WINE uses Gecko to allow Windows-binaries to browse the web. Making your web engine embeddable, however, has a development cost, and over the years, making Gecko embeddable seems to have become less of a priority. Servo is a next-generation web browser engine from Mozilla Research that aims to be embeddable. ↩
People love heatmaps.
They’re a great way to show how much various UI elements are used in relation to each other, and are much easier to read at a glance than a table of click-counts would be. They can also reveal hidden patterns of usage based on the locations of elements, let us know if we’re focusing our efforts on the correct elements, and tell us how effective our communication about new features is. Because they’re so useful, one of the things I am doing in my new role is setting up the framework to provide our UX team with automatically updating heatmaps for both Desktop and Android Firefox.
Unfortunately, we can’t just wave our wands and have a heatmap magically appear. Creating them takes work, and one of the most tedious processes is figuring out where each element starts and stops. Even worse, we need to repeat the process for each platform we’re planning on displaying. This is one of the primary reasons we haven’t run a heatmap study since 2012.
In order to not spend all my time generating the heatmaps, I had to reduce the effort involved in producing these visualizations.
Being a programmer, my first inclination was to write a program to calculate them, and that sort of worked for the first version of the heatmap, but there were some difficulties. To collect locations for all the elements, we had to display all the elements.
Customize mode (as shown above) was an obvious choice since it shows everything you could click on almost by definition, but it led people to think that we weren’t showing which elements were being clicked the most, but instead which elements people customized the most. So that was out.
Next we tried putting everything in the toolbar, or the menu, but those were a little too cluttered even without leaving room for labels, and too wide (or too tall, in the case of the menu).
Similarly, I couldn’t fit everything into the menu panel either. The only solution was to resort to some Photoshop-trickery to fit all the buttons in, but that ended up breaking the script I was using to locate the various elements in the UI.
Since I couldn’t automatically figure out where everything was, I figured we might as well use a nicely-laid out, partially generated image, and calculate the positions (mostly-)manually.
I had foreseen the need for different positions for the widgets when the project started, and so I put the widget locations in their own file from the start. This meant that I could update them without changing the code, which made it a little nicer to see what’s changed between versions, but still required me to reload the whole page every time I changed a position or size, which would just have taken way too long. I needed something that could give me much more immediate feedback.
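The core of the approach is simple once the widget positions live in their own data file: each widget is a named rectangle, and the heatmap fills that rectangle with an intensity proportional to its click count. Here is a minimal sketch of that idea; the widget names, coordinates, and click counts are hypothetical examples, not the real Firefox data:

```python
# Hypothetical widget rectangles, as they might live in a separate
# positions file: name -> (x, y, width, height) in screenshot pixels.
widgets = {
    "back-button": (10, 5, 30, 30),
    "urlbar":      (50, 5, 300, 30),
    "menu-button": (360, 5, 30, 30),
}
# Hypothetical click counts collected by telemetry.
clicks = {"back-button": 1200, "urlbar": 4500, "menu-button": 300}

WIDTH, HEIGHT = 400, 40
grid = [[0.0] * WIDTH for _ in range(HEIGHT)]

# Normalize against the most-clicked widget and paint each rectangle.
max_clicks = max(clicks.values())
for name, (x, y, w, h) in widgets.items():
    intensity = clicks.get(name, 0) / max_clicks  # 0.0 .. 1.0
    for row in range(y, min(y + h, HEIGHT)):
        for col in range(x, min(x + w, WIDTH)):
            grid[row][col] = intensity

# The grid can now be colour-mapped and overlaid on a UI screenshot.
print(grid[10][60])  # a point inside the urlbar -> 1.0
```

With the rectangles in a data file, tweaking a position only means editing data and re-rendering, which is exactly the kind of tight feedback loop Tributary made fast.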
Fortunately, I had recently finished watching a series of videos from Ian Johnson (@enjalot on twitter) where he used a tool he made called Tributary to do some rapid prototyping of data visualization code. It seemed like a good fit for the quick moving around of elements I was trying to do, and so I copied a bunch of the code and data in, and got to work moving things around.
I did encounter a few problems: Tributary wasn’t functional in Firefox Nightly (but I could use Chrome as a workaround), and occasionally trying to move the cursor would change the value slider instead. Even with these glitches it only took me an hour or two to get from set-up to having all the numbers for the final result! And the best part is that since it’s all open source, you can take a look at the final result, or fork it yourself!
I’ve been reading a book called The Annotated Alice. In this book, the late and great Martin Gardner shows us the stories of Alice’s Adventures in Wonderland and Through the Looking-Glass but supplies copious footnotes to illustrate the puns, wordplay, allusions, logic problems and satire going on beneath the text. Some of these footnotes delve into pure conjecture (there are still people to this day who theorize about various aspects of the stories), and other footnotes show quite clearly that Carroll wrote these stories with a sophisticated wit and whimsy that isn’t immediately obvious at first glance.
And it’s clear that Gardner (and others like him) have spent hours upon hours thinking and theorizing about these stories. A purposeful misspelling gets awarded a two page footnote here, and a mention of a mirror sends us off talking about matter and anti-matter and other matters (ha) of quantum physics.
So much thinking and effort to interpret these stories, and what you get out of it is a fascinating tapestry of ideas and history.
Needless to say, I’ve been finding the whole thing fascinating. It’s a hell of a read.
While reading it, I’ve wondered what it’d be like to apply the same practice to source code. Take some relatively mysterious piece of source code that only a few people feel comfortable with, and explode it out. Go through the source control history, and all of the old bugs, and see where this code came from. What was its purpose to begin with? What is its purpose now? What are the battle scars?
After much thinking, I’ve decided to try this, and I’m going to try it on a piece of Gecko called “DocShell”.
I think I just heard Ms2ger laughing somewhere.
It’s become pretty clear having talked to a few seasoned Mozilla hackers that DocShell is not well understood. The wiki page on it makes that even more clear – it starts:
The goal of this page is to serve as a dumping/organization ground for docshell docs. When someone finds out something, it should be added here in a reasonable way. By the time this gets unwieldy, hopefully we will have enough material for several actual docs on what docshell does and why.
So, I’m going to attempt to figure out what DocShell was supposed to do, and figure out what it currently does. I’m going to dig through source code, old bugs, and old CVS commits, back to the point where Netscape first open-sourced the Mozilla code-base.
It’s not going to be easy. It’s definitely going to be a multiple month, multiple post effort. I’m likely to get things wrong, or only partially correct. I’ll need help in those cases, so please comment.
And I might not succeed in figuring out what DocShell was supposed to do. But I’m pretty confident I can get a grasp on what it currently does.
So in the end, if I’m lucky, we’ll end up with a few things:
- A greater shared understanding of DocShell
- Materials that can be used to flesh out the DocShell wiki
- Better inline documentation for DocShell maybe?
I’ve also asked bz to forward me feedback requests for DocShell patches, so that way I get another angle of attack on understanding the code.
So, deep breath. Here goes. Watch this space.