hg clone https://hg.mozilla.org/build/braindump
allthethings.json is generated based on data from buildbot-configs.
It contains data about builders, schedulers, masters and slavepools.
If you want to extract information from allthethings.json feel free to use mozci to help you!
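If you prefer to poke at the file directly, a minimal Python sketch like the following works too. It assumes allthethings.json keeps its historical layout, with a top-level "builders" key mapping builder names to their properties; the snippet data below is made up for illustration:

```python
import json

def list_builders(allthethings, substring=""):
    """Return builder names containing `substring`, sorted.

    Assumes the top-level "builders" key maps builder names to their
    properties, which is how allthethings.json has historically been laid out.
    """
    return sorted(name for name in allthethings.get("builders", {})
                  if substring in name)

# A tiny, made-up snippet standing in for the real (very large) file:
snippet = {
    "builders": {
        "Linux mozilla-central build": {"shortname": "linux"},
        "OS X 10.7 mozilla-central build": {"shortname": "osx"},
    }
}
print(list_builders(snippet, "Linux"))
```

In practice you would load the real file with `json.load(open("allthethings.json"))` first; mozci wraps this kind of lookup for you.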
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.
A bi-weekly meeting to discuss the state of Mozilla, the community, and its projects.
Mozilla’s goal of high quality plugin-free gaming on the Web is taking a giant leap forward today with the release of Unity 5. This new version of the world’s most popular game development tool includes a preview of their amazing WebGL exporter. Unity 5 developers are one click away from publishing their games to the Web in a whole new way, by taking advantage of WebGL and asm.js. The result is native-like performance in desktop browsers without the need for plugins.
Unity is a very popular game development tool. In fact, the company says just under half of all developers report using this tool. The engine is highly suited for mobile development and as such has been used to produce a wealth of content which is particularly well suited for Web export. Small download size, low memory usage, and rendering pipeline similarities make this content straightforward to port to the Web. Unity has a long history of providing their developers the ability to ship online via a Web plugin. In recent years, browser vendors have moved to reduce their dependency on plugins for content delivery.
A new cross-browser approach was needed, and it has arrived.
“Unity has always been a strong supporter of Web gaming,” said Andreas Gal, CTO of Mozilla. “With the ability to do plugin-free WebGL export with Unity 5, Mozilla is excited to see Unity promoting the Web as a first-class platform for their developers. One-click export to WebGL will give Unity’s developers the ability to share their content with a new class of user.”
Clicking on the images above will take you to live examples of Unity 5 exports using WebGL 1.
At GDC, Mozilla will also be providing a first look at WebGL 2. While the shipping Unity WebGL export targets WebGL 1, Unity and Mozilla have been working together to take advantage of WebGL 2, the next generation standard for 3D graphics on the Web. Unity has redeveloped their Teleporter demo to showcase the technology in action.
Mozilla and Unity will be showing off a number of titles developed in Unity and exported to the Web, including Nival’s Prime World Defenders and AaaaaAAaaaAAAaaAAAAaAAAAA! for Awesome by Dejobaan Games, which can be played right on their website. You can also try Dead Trigger 2 and Angry Bots available via Unity Technologies’ website.
For more information on Unity’s news please see their blog post.
For more information on Mozilla’s news at GDC see this post.
Edited March 4th to clarify that current Unity support is for WebGL 1 while WebGL 2 is an experimental technology being developed in conjunction with Mozilla.
GDC 2015 is a major milestone in a long term collaboration between Mozilla and the world’s biggest game engine makers. We set out to bring high performance games to the Web without plugins, and that goal is now being realized. Unity Technologies is including the WebGL export preview as part of their Unity 5 release, available today. Epic Games has added a beta HTML5 exporter as part of their regular binary engine releases. This means plugin-free Web deployment is now in the hands of game developers working with these popular tools. They select the Web as their target platform and, with one click, they can build to it. Now developers can unlock the world’s biggest open distribution platform leveraging two Mozilla-pioneered technologies, asm.js and WebGL.
What has changed?
The technology is spreading
Browser support for the underlying Web standards is growing. WebGL has now spread to all modern browsers, both desktop and mobile. We are seeing all browsers optimize for asm.js-style code, with Firefox and Internet Explorer committed to advanced optimizations.
“With the ability to reach hundreds of millions of users with just a click, the Web is a fantastic place to publish games,” said Andreas Gal, CTO of Mozilla. “We’ve been working hard at making the platform ready for high performance games to rival what’s possible on other platforms, and the success of our partnerships with top-end engine and game developers shows that the industry is taking notice.”
Not done yet
Mozilla is committed to advancing what is possible on the Web. While already capable of running great game experiences, there is plenty of potential still to be unlocked. This year’s booth showcase will include some bleeding edge technologies such as WebGL 2 and WebVR, as well as updated developer tools aimed at game and Web developers alike. These tools will be demonstrated in our recently released 64-bit version of Firefox Developer Edition. Mozilla will also be providing developers access to SIMD and experimental threading support. Developers are invited to start experimenting with these technologies, now available in Firefox Nightly Edition. Visit the booth to learn more about Firefox Marketplace, now available in our Desktop, Android, and Firefox OS offerings as a distribution opportunity for developers.
To learn more about Mozilla’s presence at GDC, read articles from the developers on the latest topics, or learn how to get involved, visit games.mozilla.org or come see us at South Hall Booth #2110 till March 6th. For press inquiries please email email@example.com.
People who have observed the list carefully may have noticed that there are fewer accepted organizations this year: 137 (down from 190 in 2014 and 177 in 2013). Other organizations that have participated successfully several times are also not in the 2015 list (e.g. Linux Foundation, Tor, ...).
After a quick email exchange with Google last night, here is the additional information I have:
- not accepting Mozilla was a difficult decision for them. It is not the result of a mistake on our part or an accident on their side.
- there's an assumption that not participating for one year would not be as damaging for us as it would be for some other organizations, due to us having already participated many times.
- this event doesn't negatively affect our chances of being selected next year, and we are encouraged to apply again.
This news has been a surprise for me. I am disappointed, and I'm sure lots of people reading this are disappointed too. I would like to thank all the people who considered participating this year with Mozilla, and especially all the Mozillians who volunteered to mentor and contributed great project ideas. I would also like to remind students that while Summer of Code is a great opportunity to contribute to Mozilla, it's not the only one. Feel free to contact mentors if you would like to work on some of the suggested ideas anyway.
Let's try again next year!
The curl project has been around for a long time by now and we’ve been through several different version control systems. The most recent switch was when we switched to git from CVS back in 2010. We were late switchers but then we’re conservative in several regards.
When we switched to git we also switched to github for the hosting, after having been self-hosted for many years before that. By using github we got a lot of services, goodies and reliable hosting at no cost. We’ve been enjoying that ever since.
However, as we have been a traditional mailing-list-driven project for a long time, I had previously not properly embraced and appreciated pull requests and issues filed at github, since they don't really follow the old model very well.
Just very recently I decided to stop fighting those methods and instead go with them. A quick poll among my fellow team mates showed no strong opposition, and we are now going full force ahead in a more github-embracing style. I hope that this will lower the barrier and remove friction for newcomers, allowing more people to contribute more easily.
As an effect of this, I would also like to encourage each and everyone who is interested in this project as a user of libcurl or as a contributor to and hacker of libcurl, to skip over to the curl github home and press the ‘watch’ button to get notified of future pull requests and issues that appear.
We also offer this helpful guide on how to contribute to the curl project!
Telemetry in Firefox is how we measure stuff in the browser—anything from how fast GIFS are decoded, to how many people opened the Dev Tools animation inspector. You can check out the collection of gathered results at http://telemetry.mozilla.org or see what your browser is sending (or disable it, if that's your thing) in about:telemetry.
One question the Web Compat team at Mozilla is interested in is whether Firefox for Android and Firefox OS users are being sent more than their fair share of WAP content (typically WML or XHTMLMP sites).
(Personally, I missed out on WAP because I was too afraid to open the browser on my Nokia and have to pay for data in the early 2000s. (Also I didn't live in Japan.))
Here's the kind of amazing content that Firefox Mobile users are served and are unable to see:
Since Gecko doesn't know how to decode WAP, the browser calls it a day and treats it as application/octet-stream which results in a prompt for the user to download a page. Check out the dependencies of these bugs for some more of the gritty details.
As to why we're sent WAP stuff in the first place, this is likely due to old UA detection libraries that don't recognize the User Agent header. The logical assumption therefore is that this unknown browser is some kind of ancient proto-HTML capable graphing calculator. Naturally you would want to serve that kind of user agent WAP, rather than a Plain-Old HTTP Site (POHS).
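For reference, a few MIME types were registered for WAP-era content, and a small classifier over the Content-Type header is all it takes to recognize them. This is just an illustrative Python sketch, not the actual histogram-collection code in Gecko (which is C++):

```python
# MIME types registered for WAP-era content; a browser that doesn't
# implement them (like Gecko) falls back to treating the response as a
# download.
WAP_CONTENT_TYPES = {
    "text/vnd.wap.wml",               # WML
    "application/vnd.wap.xhtml+xml",  # XHTML Mobile Profile
    "text/vnd.wap.wmlscript",         # WMLScript
}

def is_wap(content_type_header):
    """True if a Content-Type header value denotes WAP content.

    Strips any parameters (e.g. "; charset=utf-8") before comparing.
    """
    mime = content_type_header.split(";")[0].strip().lower()
    return mime in WAP_CONTENT_TYPES

print(is_wap("text/vnd.wap.wml; charset=UTF-8"))  # True
print(is_wap("text/html"))                        # False
```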
So this seems like a pretty good opportunity to use Telemetry to measure how commonly this happens. If it's happening all the time, we can push for some form of actual support in Gecko itself. But if it's exceedingly rare we can all move on with our lives, etc.
To measure this we landed a histogram named HTTP_WAP_CONTENT_TYPE_RECEIVED. I made a not-very-useful and mostly buggy visualization of the data we've gathered so far (from Nightly 39 users) using the Telemetry.js API here: http://miketaylr.github.io/compat-telemetry-dashboards/wap.html. This updates every night and will need a few months of measuring before we can make any real decisions so I won't bother publishing any results just yet.
I will note that these patches haven't been taken up yet by Mozilla China's version of Firefox for Android which is one region we suspect receives more WAP than the West.
OK, so that's Part 1 of this exciting 2 part WAP Telemetry series. In Part 2 (which might get actually written tomorrow depending on a number of factors all of which are probably just laziness) I'll write out the more mundane technical details of landing a Telemetry patch in Gecko.
the following changes have been pushed to bugzilla.mozilla.org:
-  Need edits to Recruiting Component
-  Adding “Rank” to Product:Core Component: webRTC, webRTC: Audio/Video, webRTC: Signaling, webRTC: Networking
-  form.reps.mentorship calls an invalid method (Can’t locate object method “realname” via package “Bugzilla::User”)
-  removing the privacy review bug
-  Minor Brand Initiation Form Updates
-  Add links to socorro from the crash signatures in show_bug.cgi
Discuss these changes on mozilla.tools.bmo.
Filed under: bmo, mozilla
This is an update on some recent work on the Media Source Extensions API in Firefox. There has been a lot of work done on MSE and the underlying media framework by Gecko developers and this update just covers some of the telemetry and exposed debug data that I’ve been involved with implementing.

Telemetry
Mozilla has a telemetry system to get data on how Firefox behaves in the real world. We’ve added some MSE video stats to telemetry to help identify usage patterns and possible issues.
Bug 1119947 added information on what state an MSE video is in when the video is unloaded. The intent of this is to find out if users are exiting videos due to slow buffering or seeking. The data is available on telemetry.mozilla.org under the VIDEO_MSE_UNLOAD_STATE category. This has five states:
0 = ended, 1 = paused, 2 = stalled, 3 = seeking, 4 = other
The data provides a count of the number of times a video was unloaded for each state. If a large number of users were exiting during the stalled state then we might have an issue with videos stalling too often. Looking at current stats on beta 37 we see about 3% unloading on stall with 14% on ended and 57% on other. The ‘other’ represents unloading during normal playback.
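Turning the raw counts into percentages like the ones quoted above is straightforward; here is a minimal Python sketch, with made-up counts that roughly mirror the beta 37 figures:

```python
# States in the order used by the VIDEO_MSE_UNLOAD_STATE histogram.
STATES = ["ended", "paused", "stalled", "seeking", "other"]

def unload_percentages(counts):
    """Map each unload state to its percentage of total unloads."""
    total = sum(counts)
    return {state: round(100.0 * n / total, 1)
            for state, n in zip(STATES, counts)}

# Illustrative counts only; the real numbers live on telemetry.mozilla.org.
print(unload_percentages([14, 20, 3, 6, 57]))
```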
Bug 1127646 will add additional data to get:
- Join Latency - time between video load and video playback for autoplay videos
- Mean Time Between Rebuffering - play time between rebuffering hiccups
This will be useful for determining performance of MSE for sites like YouTube. The bug is going through the review/comment stage and when landed the data will be viewable at telemetry.mozilla.org.

about:media plugin
While developing the Media Source Extensions support in Firefox we found it useful to have a page displaying internal debug data about active MSE videos.
In particular it was good to be able to get a view of what buffered data the MSE JavaScript API had and what our internal Media Source C++ code stored. This helped track down issues involving switching buffers, memory size of resources and other similar things.
The internal data is displayed in an about:media page. Originally the page was hard coded in the browser but :gavin suggested moving it to an addon. The addon is now located at https://github.com/doublec/aboutmedia. That repository includes the aboutmedia.xpi which can be installed directly in Firefox. Once installed you can go to about:media to view data on any MSE videos.
To test this, visit a video that has MSE support in a nightly build with the about:config preferences media.mediasource.enabled and media.mediasource.mp4.enabled set to true. Let the video play for a short time then visit about:media in another tab. You should see something like:

https://www.youtube.com/watch?v=3V7wWemZ_cs
mediasource:https://www.youtube.com/6b23ac42-19ff-4165-8c04-422970b3d0fb
currentTime: 101.40625
SourceBuffer 0 start=0 end=14.93043
SourceBuffer 1 start=0 end=15
Internal Data:
Dumping data for reader 7f9d85ef1800:
Dumping Audio Track Decoders:
- mLastAudioTime: 7.732243
Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
Dumping Video Track Decoders
- mLastVideoTime: 7.000000
Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914
The first portion of the displayed data shows the JS API view of the buffered data:

currentTime: 101.40625
SourceBuffer 0 start=0 end=14.93043
SourceBuffer 1 start=0 end=15
This shows two SourceBuffer objects. One containing data from 0-14.9 seconds and the other 0-15 seconds. One of these will be video data and the other audio. The currentTime attribute of the video is 101.4 seconds. Since there is no buffered data for this range the video is likely buffering. I captured this data just after seeking while it was waiting for data from the seeked point.
The second portion of the displayed data shows information on the C++ objects implementing media source:

Dumping data for reader 7f9d85ef1800:
Dumping Audio Track Decoders:
- mLastAudioTime: 7.732243
Reader 1: 7f9d75cba800 ranges=[(10.007800, 14.930430)] active=false size=79880
Reader 0: 7f9d85e88000 ranges=[(0.000000, 10.007800)] active=false size=160246
Dumping Video Track Decoders
- mLastVideoTime: 7.000000
Reader 1: 7f9d75cbd800 ranges=[(10.000000, 15.000000)] active=false size=184613
Reader 0: 7f9d85985000 ranges=[(0.000000, 10.000000)] active=false size=1281914
A reader is an instance of the MediaSourceReader C++ class. That reader holds two SourceBufferDecoder C++ instances. One for audio and the other for video. Looking at the video decoder it has two readers associated with it. These readers are instances of a derived class of MediaDecoderReader which are tasked with the job of reading frames from a particular video format (WebM, MP4, etc).
The two readers each have buffered data ranging from 0-10 seconds and 10-15 seconds. Neither are ‘active’. This means they are not currently the video stream used for playback. This will be because we just started a seek. You can view how buffer switching works by watching which of these become active as the video plays. The size is the amount of data in bytes that the reader is holding in memory. mLastVideoTime is the presentation time of the last processed video frame.
MSE videos will have data evicted as they are played. This size threshold for eviction defaults to 75MB and can be changed with the media.mediasource.eviction_threshold variable in about:config. When data is appended via the appendBuffer method on a SourceBuffer an eviction routine is run. If data greater than the threshold is held then we start removing portions of data held in the readers. This will be noticed in about:media by the start and end ranges being trimmed or readers being removed entirely.
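The eviction idea can be pictured with a small sketch. This is purely illustrative Python, not the actual Gecko C++ logic: buffered ranges are modeled as (start, end, size_bytes) tuples, oldest first, and the oldest ranges are dropped until the total held bytes fit under the threshold:

```python
EVICTION_THRESHOLD = 75 * 1024 * 1024  # mirrors the 75MB default pref

def evict(ranges, threshold=EVICTION_THRESHOLD):
    """Drop the oldest buffered ranges until total held bytes fit under threshold."""
    ranges = list(ranges)  # work on a copy
    while ranges and sum(size for _, _, size in ranges) > threshold:
        ranges.pop(0)  # evict the oldest buffered range first
    return ranges

# 90MB held in two ranges: eviction trims the oldest one.
held = [(0, 10, 60 * 1024 * 1024), (10, 20, 30 * 1024 * 1024)]
print(evict(held))
```

The real implementation trims partial ranges as well, which is why about:media shows start/end values shrinking rather than ranges always disappearing whole.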
This internal data is most useful for Firefox media developers. If you encounter stalls playing videos or unusual buffer switching behaviour then copy/pasting the data from about:media in a bug report can help with tracking the problem down. If you are developing an MSE player then the information may also be useful to find out why the Firefox implementation may not be behaving how you expect.
Media Source Extensions is still in progress in Firefox and can be tested on Nightly, Aurora and Beta builds. The current plan is to enable support limited to YouTube only in Firefox 37 on Windows and Mac OS X for MP4 videos. Other platforms, video formats and wider site usage will be enabled in future versions as the implementation improves.
To track work on the API you can follow the MSE bug in Bugzilla.
2013 was an amazing year. Which is why I’m especially proud of what we accomplished in 2014.
We doubled our small dollar performance. We tripled our donor base. We met our target of 10,000 volunteer contributors. And we matched our exceptional grant performance.
We also launched our first, large-scale advocacy campaign, playing a key role in the Net Neutrality victory.
But best of all is that close to 100 Mozillians share the credit for pulling this off.
Here’s to 2015 and to Mozilla continuing to find its voice and identity as a dynamic non-profit.
A big thank you to everyone who volunteered, gave, and made it happen.
Filed under: Mozilla
Have you visited Marketplace lately to see the app nominations in the spotlight? We just refreshed the ever-present “Mozilla Communities” apps collection, and a lot of new apps populate the recent “Cats” and “Outer Space” collections. We are now moving on to prepare next month's featured applications on Firefox Marketplace.
Two years ago I proposed a Webmaker Club at my daughter’s school, and it was turned down in an email:
Because it involves students putting (possibly) personal info/images on-line we are not able to do the club at this time. They did say that they may have to reconsider in the future because more and more of life is happening on-line.
One year later, because our principal is amazing and sponsored it, I ran a ‘lunch time’ Webmaker Club at my daughter’s elementary school (grades 4 & 5). It was great fun, and as always I learned a lot thanks to the challenges: handling the diversity of attendance and interests in limited time. I never get tired of helping kids ‘make the thing they are imagining’.
This year, I was excited to be invited to lead a Webmaker ‘Exploratory’ in our town’s middle school (grades 6-8). Exciting on so many levels, but two primarily:
1) Teachers and schools are recognizing the need for web literacy (and its absence), and that it should be offered as part of primary education.
2) Schools are putting faith in community partnerships to teach. At least this is what it feels like to me – pairing a technically-strong teacher, with a community expert in coding/web (whatever) is a winning situation.
I wrote specific instructions for each week that we tracked on a wiki; we used Creative Commons Image Search and talked about our digital footprint.

What worked
Having an ‘example make’ of the milestone for this class where each week kids could see, in advance what they were making.
Having a ‘starting template‘ for the lesson helped those kids who missed a class, catch up quickly.
Being flexible about that template, meant those kids who preferred to work on their own single ‘make’ could still challenge themselves a bit more.
Baked-In Web Literacy CC image search brought up conversations about ownership, sharing on the web and using a Wiki led to discussion about how Wikimedia editing and editors build content; about participating in open communities.
Sending my teacher-helper the curriculum a few days before, so she could prepare as a mentor.
Having some ‘other activities’ in my back pocket for kids who got bored, or finished early. These were just things like ‘check out this hour of code tutorial’.

What didn’t work
We were sharing a space with the ‘year book’ team, who also used the internet, and sometimes our internet was moving slower than a West Coast Banana Slug. In our class ‘X Ray Goggles’ challenge, kids sat for long periods of time before being able to do much. Some also had challenges saving/publishing their X Ray Goggles Make.
Week 2: to get around the slow internet, I brought everyone USB sticks and taught them to work locally. This was also a bit of a fail, as I realized many in the group didn’t know simple terms like ‘directory’ and ‘folder’. I made a wrong assumption that they had this basic knowledge. I also should have collected the USB sticks after class, because most were lost or damaged in the care of students. We went back to the slow internet, although it was never as bad as that first day.
Having only myself and one teacher with that many kids meant we were running between kids. Also slightly unfair to the teacher who was learning along with the group. It also sometimes meant kids waited too long for help.
Not all kids liked the game we were making
So overall I think it went well; we had some wonderful kids, and I was proud of all of them. The final outcome, the sponsoring teacher and I realized, was that many of the lessons (coding, Wikipedia, CC) could easily fit into any class project, rather than having Webmaking as its ‘own class’.
So in future, that may be the next way I participate: as someone who comes into, say, a social studies or history class and helps students put together a project on the web. Perhaps that’s how the community can offer its help to teachers in schools: a way to limit large commitments like running an entire program while having a longer-lasting, embedded impact in schools.
For the remainder of the year, and next, my goal seems to be to act as a ‘Webmaker Plugin’, helping integrate web literacy into existing class projects :)
Today is the start of the third week of the mentoring program.
Since the start of the program, four bugs have been marked fixed:
- Bug 951695 – Consider renaming “Character Encoding” to “Text Encoding”
- Bug 782623 – Name field in Meta tags often empty
- Bug 1124271 – Clicking the reader mode button in an app tab opens reader mode in a new tab
- Bug 1113761 – Devtools rounds sizes up way too aggressively (and not reflecting actual layout). e.g. rounding 100.01px up to 101px
Also, the following bugs are in progress and look like they should be ready for review soon:
- Bug 1054276 – In the “media” view, the “save as” button saves images with the wrong extension
- Bug 732688 – No Help button in the Page Info window
The bugs currently being worked on are:
- Bug 1136526 – Move silhouetted versions of Firefox logo into browser/branding
- Bug 736572 – pageinfo columns should have arrows showing which column is sorted and sort direction
- Bug 418517 – Add “Select All” button to Page Info “Media” tab
- Bug 967319 – Show a nodesList result with natural order
I was hoping to have 8-9 bugs fixed by this time, but I’m happy with four bugs fixed and two bugs being pretty close. Bug 967319 in the “being worked on” section is also close, but still needs work with tests before it can be ready for review.
Tagged: firefox, mentoring, mozilla, planet-mozilla
The Monday Project Meeting
I’ve been hearing lately that Mozilla QA’s recognition story kind of sucks with some people going completely unrecognized for their efforts. Frankly, this is embarrassing!
Some groups have had mild success attempting to rectify this problem but not all groups share in this success. Some of us are still struggling to retain contributors due to lack of recognition; a problem which becomes harder to solve as QA becomes more decentralized.
As much as it pains me to admit it, the Testdays program is one of these areas. I’ve blogged, emailed, and tweeted about this but despite my complaining, things really haven’t improved. It’s time for me to take some meaningful action.
We need to get a better understanding of our recognition story if we’re ever to improve it. We need to understand what we’re doing well (or not) and what people value so that we can try to bridge the gaps. I have some general ideas but I’d like to get feedback from as many voices as possible and not move forward based on personal assumptions.
I want to hear from you. Whether you currently contribute or have in the past. Whether you’ve written code, ran some tests, filed some bugs, or if you’re still learning. I want to hear from everyone.
Look, I’m here admitting we can do better but I can’t do that without your help. So please, help me.
I received this morning a message from the Adobe Edge Reflow prerelease forum that triggered my interest. I must admit I did not really follow what happened there during the last twelve months for many various reasons... But this morning, it was different. In short, the author had questions about the fate of Edge Reflow, in particular because of the deep silence of that forum...
Adobe announced Edge Reflow in Q3 2012 I think. It followed the announcement of Edge Code a while ago. Reflow was aimed at visual responsive design in a new, cool, interactive desktop application with mobile and photoshop links. The first public preview was announced in February 2013 and a small community of testers and contributors gathered around the Adobe prerelease fora. Between January 2013 and now, roughly 1300 messages were sent there.
Reflow is an HTML5/JS app turned into a desktop application through the magic of CEF. It has a very cool and powerful UI, superior management of simple Media Queries, excellent management of colors, backgrounds, layers, magnetic grids and more. All in all, a very promising application for Web Authoring.
But the last available build of Reflow, again through the prerelease web site, is only a 0.57.17154 and it is now 8 months old. After 2 years and a half, Reflow is still not here and there are reasons to worry.
First, the team (the About dialog lists more than 20 names...) seems to have vanished and almost nothing new has been contributed/posted to Reflow in the last six to eight months.
Second, the application still suffers from things I identified as rather severe issues early on: the whole box model of the application is based on CSS floats and is then not in line with what modern web designers are looking for. Eh, it's not even using absolute positioning... It also means it's going to be rather complicated to adapt it to grids and flexbox, not even mentioning Regions...
Reflow also made the choice to generate Web pages instead of editing Web pages... It means projects are saved in a proprietary format and only exported to html and CSS. It's impossible to take an existing Web page and open it in Reflow to edit it. In a world of Web Design that sees authors use heterogeneous environments, I considered that as a fatal mistake. I know - trust me, I perfectly know - that making html the pivot format of Reflow would have implied some major love and a lot, really a lot of work. But not doing it meant that Edge Reflow had to be at the very beginning of the editorial chain, and that seemed to me an unbearable market restriction.
Then there was the backwards compatibility issue. Simply put, how does one migrate Dreamweaver templates to Reflow? Short answer, you can't...
I suspect Edge Reflow is now at least on hold, more probably stopped. More than 2 years and still no 1.0 for an application that should have seen a 1.0beta after six to eight months is not a good sign anyway. After Edge Code, which became Brackets in November 2014, this raises a lot of questions about the Edge concept and product line. Edge Animate seems to be still maintained at Adobe (there's our old Netscape friend Kin Blas in the list of credits) but I would not be surprised if the name changed in the future.
Too bad. I was, in the beginning, really excited by Edge Reflow. I suspect we won't hear about it again.
In this post you can find an overview of the work that happened in the Firefox Automation team during weeks 51 and 52 of 2014. I'm sorry for this very late post, but changes to our team, which I will get to in my next post, kept me busy with a lot of extra work and didn't leave me time for writing status reports.

Highlights
Henrik started work towards a Mozmill 2.1 release. For that he first had to upgrade a couple of mozbase packages to get the latest Mozmill code on master working again. Once that was done, the patch for handling parent sections in manifest files finally landed; it was originally written by Andrei Eftimie and had been sitting around for a while. That addition allows us to use mozhttpd for serving test data via a local HTTP server. Last but not least, another important feature went in which lets us better handle application disconnects. There are still some more bugs to fix before we can actually release version 2.1 of Mozmill.
Given that we only have the capacity to fix the most important issues for the Mozmill test framework, Henrik started to mass close existing bugs for Mozmill. So only a hand-full of bugs will remain open. If there is something important you want to see fixed, we would encourage you to start working on the appropriate bug.
For Mozmill CI we got the new Ubuntu 14.10 boxes up and running in our staging environment. Once we can be sure they are stable enough, they will also be enabled in production.

Individual Updates
If you are interested in further details and discussions you might also want to have a look at the meeting agenda, the video recording, and notes from the Firefox Automation meeting of week 51 and week 52.
Protecting the privacy of users and the information collected about them online is crucial to maintaining and growing a healthy and open Web. Unfortunately, there have been massive threats that weaken our ability to create the Web that we want to see. The most notable and recent example of this is the expansive surveillance practices of the U.S. government that were revealed by Edward Snowden. Even though it has been nearly two years since these revelations began, the U.S. Congress has failed to pass any meaningful surveillance reform, and is about to consider creating new surveillance authorities in the form of the Cybersecurity Information Sharing Act of 2015.
We opposed the Cyber Intelligence Sharing and Protection Act in 2012 – as did a chorus of privacy advocates, information security professionals, entrepreneurs, and leading academics, with the President ultimately issuing a veto threat. We believe the newest version of CISA is worse in many respects, and that the bill fundamentally undermines Internet security and user trust.
CISA is promoted as facilitating the sharing of cyber threat information, but:
- is overbroad in scope, allowing virtually any type of information to be shared and to be used, retained, or further shared not just for cybersecurity purposes, but for a wide range of other offences including arson and carjacking;
- allows information to be shared automatically between civilian and military agencies including the NSA regardless of the intended purpose of sharing, which limits the capacity of civilian agencies to conduct and oversee the exchange of cybersecurity information between the private sector and sector-specific Federal agencies;
- authorizes dangerous countermeasures that could seriously damage the Internet; and
- provides blanket immunity from liability with shockingly insufficient privacy safeguards.
The lack of meaningful provisions requiring companies to strip out personal information before sharing with the government, problematic on its own, is made more egregious by the realtime sharing, data retention, lack of limitations, and sweeping permitted uses envisioned in the bill.
Unnecessary and harmful sharing of personal information is a very real and avoidable consequence of this bill. Even in those instances where sharing information for cybersecurity purposes is necessary, there is no reason to include users’ personal information. Threat indicators rarely encompass such details. Furthermore, it’s not a difficult or onerous process to strip out personal information before sharing. In the exceptional cases where personal information is relevant to the threat indicator, those details would be so relevant to mitigating the threat at hand that blanket immunity from liability for sharing would not be necessary.
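To make the point that stripping personal details is not onerous, here is a purely illustrative Python sketch; the field names are made up for the example and do not come from any real threat-sharing format:

```python
# Hypothetical field names that a sharing pipeline might flag as personal.
PERSONAL_FIELDS = {"username", "email", "real_name", "client_ip"}

def strip_personal_info(indicator):
    """Return a copy of a threat indicator without personal fields."""
    return {k: v for k, v in indicator.items() if k not in PERSONAL_FIELDS}

indicator = {
    "malware_hash": "9e107d9d372bb6826bd81d3542a419d6",
    "c2_domain": "evil.example",
    "email": "victim@example.com",  # personal detail; should not be shared
}
print(strip_personal_info(indicator))
```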
We believe Congress should focus on reining in the NSA’s sweeping surveillance authority and practices. Concerns around information sharing are at best a small part of the problem that needs to be solved in order to secure the Internet and its users.
Previously, in my exciting series “improving the HTTP framing checks in Firefox” we learned that I landed a patch, got it backed out, struggled to improve the checks and finally landed the fixed version only to eventually get that one backed out as well.
And now I’ve landed my third version. The amendment I did this time:
When receiving HTTP content that is content-encoded and compressed, I learned that with deflate compression there is basically no good way for us to know if the content got prematurely cut off: deflate streams lack a footer too often for it to make any sense to check for one. gzip streams, however, end with a footer, so it is easier to reliably detect when they are incomplete. (As was discovered before, the Content-Length: header is far too often not updated by the server and instead wrongly shows the uncompressed size.)
This (deflate vs gzip) knowledge is now used by the patch, meaning that deflate-compressed downloads can still be cut off without the browser noticing…
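The asymmetry is easy to demonstrate with Python's standard-library compression modules. A gzip stream truncated before its 8-byte CRC32+size footer fails loudly, while a truncated raw deflate stream decompresses without complaint; the only hint is an end-of-stream flag that never flips:

```python
import gzip
import zlib

payload = b"hello, web!" * 200

# gzip ends with an 8-byte footer (CRC32 + uncompressed size), so a
# truncated stream is detectable: decompression errors out before
# reaching the end-of-stream marker.
truncated_gzip = gzip.compress(payload)[:-8]
try:
    gzip.decompress(truncated_gzip)
    gzip_noticed = False
except (EOFError, zlib.error):
    gzip_noticed = True
print("gzip truncation noticed:", gzip_noticed)

# Raw deflate has no footer: the decompressor happily returns partial
# data, and the only hint is that the final block was never completed.
compressor = zlib.compressobj(wbits=-15)   # -15 = raw deflate, no header
raw_deflate = compressor.compress(payload) + compressor.flush()
decompressor = zlib.decompressobj(wbits=-15)
partial = decompressor.decompress(raw_deflate[:-6])  # no exception raised
print("deflate truncation noticed:", decompressor.eof)
```

A browser streaming a response has no natural point at which to check that flag, which is why the patch only enforces completeness for gzip.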
Will this version of the fix actually stick? I don’t know. There’s lots of bad voodoo out there in the HTTP world and I’m putting my finger right in the middle of some of it with this change. I’m pretty sure I’ve not written my last blog post on this topic just yet… If it sticks this time, it should show up in Firefox 39.