Mozilla Nederland

Daniel Stenberg: The life of a curl security bug

Mozilla planet - to, 05/10/2017 - 14:59
The report

Usually, security problems in the curl project come to us out of the blue. Someone has found a bug they suspect may have a security impact and they tell us about it via the project's security email address. Mails sent to this address reach a private mailing list with the curl security team members as the only subscribers.

An important first step is that we respond to the sender, acknowledging the report. Often we also include a few follow-up questions at once. It is important to us to keep the original reporter in the loop and included in all subsequent discussions about this issue – unless they prefer to opt out.

If we find the issue ourselves, we act pretty much the same way.

In the most obvious and well-reported cases there is no room for doubt or hesitation about what the bug is and what its impact is, but very often the reports lead to discussions.

The assessment

Is it a bug in the first place? Is the behavior perhaps even documented, or is it just plain bad use?

If it is a bug, is it a security problem that can be abused or that somehow puts users at risk?

Most issues we get reported as security issues are also in the end treated as such, as we tend to err on the safe side.

The time plan

Unless the issue is critical, we prefer to schedule a fix and announcement of the issue in association with the pending next release, and as we do releases every 8 weeks like clockwork, that’s never very far away.

We communicate the suggested schedule with the reporter to make sure we agree. If a sooner release is preferred, we work out a schedule for an extra release. In the past we have done occasional faster security releases, also when the issue had already been made public and we wanted to shorten the time window during which users could be harmed by the problem.

We really really do not want a problem to persist longer than until the next release.

The fix

The curl security team and the reporter work on fixing the issue. Ideally, the reporter then verifies that they can no longer reproduce it, and we add a test case or two.

We keep the fix undisclosed for the time being. It is not committed to the public git repository but kept in a private branch. We usually put it on a private URL so that we can link to it when we ask for a CVE, see below.

All security issues should make us ask ourselves – what did we do wrong that made us not discover this sooner? And ideally we should introduce processes, tests and checks to make sure we detect other similar mistakes now and in the future.

Typically we only generate a single patch from git master and offer that as the final solution. In the curl project we don’t maintain multiple branches. Distros and vendors who ship older or even multiple curl versions backport the patch to their systems themselves. Sometimes we get backported patches back to offer users as well, but those are exceptions to the rule.

The advisory

In parallel with working on the fix, we write up a “security advisory” about the problem: a detailed description of the problem, what impact it may have if triggered or abused, and whether we know of any exploits of it.

It covers what conditions need to be met for the bug to trigger, which version range is affected, and what remedies can be applied as a workaround if the patch is not applied, etc.

We work out the advisory in cooperation with the reporter so that we get the description and the credits right.

The advisory also always contains a timeline that clearly describes when we got to know about the problem and so on.


Once we have an advisory and a patch, none of which needs to be their final versions, we can proceed and ask for a CVE.

Depending on where in the release cycle we are, we might have to hold off at this point. For all bugs that aren’t proprietary-operating-system specific, we pre-notify and ask for a CVE on the distros@openwall mailing list. This mailing list prohibits an embargo longer than 14 days, so we cannot ask them for a CVE more than two weeks in advance of our release.

The idea here is that the embargo time gives the distributions time and opportunity to prepare updates of their packages so they can be pretty much in sync with our release, reducing the time window during which their users are at risk. Of course, not all operating system vendors manage to actually ship a curl update on two weeks’ notice, and at least one major commercial vendor regularly informs me that this is too short a time frame for them.
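As a rough illustration of that constraint (the dates below are made up, not an actual curl schedule), the 14-day embargo cap pins down the earliest day a pre-notification can go out:

```python
from datetime import date, timedelta

# Sketch, not project tooling: the distros@openwall list caps embargoes
# at 14 days, so pre-notification can happen no earlier than the release
# day minus 14 days.
MAX_EMBARGO = timedelta(days=14)

def earliest_prenotify(release_day: date) -> date:
    """Earliest day a pre-notification may go out without the embargo
    exceeding the list's 14-day maximum."""
    return release_day - MAX_EMBARGO

# Hypothetical release day:
print(earliest_prenotify(date(2017, 10, 23)))  # -> 2017-10-09
```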

For flaws that don’t affect the free operating systems at all, we ask MITRE directly for CVEs.

The last 48 hours

When there is roughly 48 hours left until the coming release and security announcement, we merge the private security fix branch into master and push it. That immediately makes the fix public and those who are alert can then take advantage of this knowledge – potentially for malicious purposes. The security advisory itself is however not made public until release day.

We use these 48 hours to get the fix tested on more systems to verify that it does not cause any major breakage. The weakest part of our security procedure is that the fix has been worked out in secret, so it has not had the chance to be widely built and tested; that happens now.

The release

We upload the new release. We send out the release announcement email, update the web site and make the advisory for the issue public. We send out the security advisory alert on the proper email lists.

Bug Bounty?

Unfortunately we don’t have any bug bounties on our own in the curl project. We simply have no money for that. We actually don’t have money at all for anything.

HackerOne offers bounties for curl-related issues. If you have reported a critical issue, you can request one from them after it has been fixed in curl.


Categories: Mozilla-nl planet

Mozilla ends support for Windows XP and Vista in June 2018 - Techzine

News collected via Google - to, 05/10/2017 - 14:13


Mozilla ends support for Windows XP and Vista in June 2018
Mozilla announced today that in June 2018 it will completely end support for Firefox on Windows XP and Windows Vista. Firefox users who run the browser on a device with either of those two operating systems can, until ...

and more »
Categories: Mozilla-nl planet

Emily Dunham: Saying Ping

Mozilla planet - to, 05/10/2017 - 09:00
Saying Ping

There’s an idiom on IRC, and to a lesser extent other more modern communication media, where people indicate interest in performing a real-time conversation with someone by saying “ping” to them. This effectively translates to “I would like to converse with you as soon as you are available”.

The traditional response to “ping” is to reply with “pong”. This means “I am presently available to converse with you”.

If the person who pinged is not available at the time that the ping’s recipient replies, what happens? Well, as soon as they see the pong, they re-ping (either by saying “ping” or sometimes “re-ping” if they are impersonating a sufficiently complex system to hold some state).

This attempt at communication, like “phone tag”, can continue indefinitely in its default state.

It is an inefficient use of both time and mental overhead, since each missed “ping” leaves the recipient with a vague curiosity or concern: “I wonder what the person who pinged wanted to talk to me about...”. Additionally, even if both parties manage to arrange synchronous communication at some point in the future, there’s the very real risk that the initiator may forget why they originally pinged at all.

There is an extremely simple solution to the inefficiency of waiting until both parties are online, which is to stick a little metadata about your question onto the ping: “Ping, could you look at issue #xyz?” “Ping, can we chat about your opinions on power efficiency sometime?”. And yet there appears to be a decent correlation between people I regard as knowing more than I do about IRC etiquette, and people who issue pings without attaching any context to them.

If you do this, and happen to read this, could you please explain why to me sometime?

Categories: Mozilla-nl planet

Karl Dubost: A Web Compatibility Issue? Maybe Not.

Mozilla planet - to, 05/10/2017 - 07:40

There is an ongoing Firefox sprint (this looks like a URI that will die in the future, sniff) trying to identify issues in browsers and report them on Webcompat.


We got a flood of new issues. Unfortunately a lot of them were invalid and were not part of what we qualify as Web Compatibility issues. This puts a lot of strain on our team (usually Adam, Eric and myself do triage, but on this occasion everyone had to join in) and it is probably frustrating for the reporters to see their bugs closed as invalid. So in the spirit of "How do we make it better for everyone?", here are a couple of guidelines for people running the events related to the Webcompat sprint.

What is a Webcompat issue?

Or more exactly what is not a Webcompat issue.

Our work focuses on identifying differences between browsers when accessing a Web site. That said, not all differences are necessarily Web compatibility issues.

Before starting a Webcompat sprint (organizers)
  • Fully understand this document. If you are organizing an event and have questions unanswered on this page, please ask.
  • Make sure to explain to participants that quality of the reports is the goal.
  • Make sure that participants have created a fresh profile.
    • This fresh profile should not contain any add-ons, except, if needed, the reporter extension. Developer Edition and Nightly already have the Report Site Issue button in the "•••" menu.
    • This fresh profile can be set in a way that it resets itself at each restart.
  • Encourage participants to write detailed steps for reproducing the issue. One idea, for example, is to work as a team: one person finds an issue and fills in the form without submitting it, while a teammate on another computer tries to reproduce the issue with only the form instructions. If that person can't understand the given instructions and/or can't see where the issue is, then it's probably an incomplete report.
During the Webcompat sprint (participants)

Testing in Multiple Browsers

The most important thing of all: you need to test in at least two browsers. The more, the merrier. Being sure that it actually doesn't break elsewhere is a good basis for a webcompat issue. Sometimes it's better to ask someone else who is using another browser and compare the results.

Responsive Mode and Devices

Responsive mode on desktop browsers is a quick way to test if the website is ready for mobile devices. That said, these responsive modes have their own limitations. They are not simulators, just emulators. You might want to double-check on a real device before reporting.

Slow Network

If the network is slow, it is usually slow in every browser. That does not necessarily make it a Web compatibility issue. Performance issues are very hard to decipher, because they depend on many external parameters.

  1. Try to reproduce in another browser. If it's blazing fast in another browser, there might be something interesting.
  2. Try to reload and see if the second load is faster (it should be, if the website has been correctly configured for caching.)

Most of the time, this is not a Webcompat issue.

Flash plug-in

Some sites do not work, or only half work, without a Flash plugin. We can't really do anything about it. It might not be a good business decision, but they chose it. It creates basically the same issue in all browsers. If you are unsatisfied with it, try to find their feedback or contact page and send them an email.

This is not a Webcompat issue.

Java Applets

Java applets were used a lot in the 90s to provide application contexts in Web pages when HTML was not enough. They have mostly disappeared from the context of a Web page. If you see a page dependent on Java, there is no need to report it.

This is not a Webcompat issue.

Tracking Protection List

Firefox and some other browsers have mechanisms to block trackers. If you are accessing a website with strict tracking protection active, the site is likely to break in some fashion. Sites have a tendency to track all your actions, and some scripts fail when the tracker is not working.

This is not a Webcompat issue.

Ads Blockers/Script Blockers

This is a variation on the Tracking Protection list. Any add-on you install has the potential to disrupt the normal operation of a website. It's even more acute when it's about blocking ads or scripts. ABP (Adblock Plus), uBlock and NoScript are the most common reasons for a website failing. Sometimes you will get a completely blank site, or just the text with no layout at all.

This is not a Webcompat issue.

Desktop site instead of mobile site

When testing on mobile, make sure to test on a device. Receiving a desktop site on a mobile device is frequent. Not all websites have a specific mobile version of their site, or a site that adjusts itself to the screen context. That said, there are specific cases which are worth reporting.

  1. Receiving a desktop site on a mobile device while, on the same device, Chrome receives the mobile version. Very often this is the result of user agent sniffing. You can report it.
  2. Receiving a desktop site that is only partially visible while Chrome seems to adjust the site to the current screen. This is likely a duplicate of the lack of a virtual viewport in Firefox for Android. You can report it.
Different mobile versions

Sometimes websites send different versions to different browsers, for example a text-only version to one browser and a very graphical one to another. This is probably the result of user agent sniffing. You can report it.
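To make the mechanism concrete, here is a minimal sketch (illustrative logic and UA strings, not any real site's code) of the kind of server-side sniffing that produces this: the sniffer checks for Chrome's tokens but never for Firefox's, so Firefox on Android falls through to the desktop or text-only variant.

```python
# Hypothetical server-side user agent sniffing; not any real site's code.
def pick_variant(user_agent: str) -> str:
    # Only Chrome's tokens are checked for "is this mobile?"...
    if "Chrome" in user_agent and "Mobile" in user_agent:
        return "mobile"
    # ...so Firefox for Android, despite carrying its own "Mobile"
    # token, falls through to the desktop variant.
    return "desktop"

firefox_android = "Mozilla/5.0 (Android 8.0; Mobile; rv:57.0) Gecko/57.0 Firefox/57.0"
chrome_android = ("Mozilla/5.0 (Linux; Android 8.0) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/61.0.3163.98 Mobile Safari/537.36")

print(pick_variant(chrome_android))   # -> mobile
print(pick_variant(firefox_android))  # -> desktop
```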

Fluid layout

Some websites have fluid or responsive layouts. These layouts adjust more or less well to small screens. When they don't adjust, you might, for example, see a navigation bar with items folding onto a second line. These issues are difficult to identify. Your best lead is to test in another browser. If you get the same behavior in Chrome, Firefox, Edge and Safari on mobile, then it's not a webcompat issue.

This is not a Webcompat issue.

Login/Password required / Credit card required

This one is quite hard and heart wrenching. Many sites require a login and password to be able to interact with them. Think social networks, email, school networks, bank accounts, etc. For a very common site, we might be able to reproduce the issue because some of us have an account on the same site. But on many occasions, we do not. For example, say you want to report a webcompat issue for a private page on your bank's site. It's unlikely that we will be able to do anything without being able to access the actual page. We are not clients of the site, and we do not have a credit card for buying the things you bought.

  1. If you are anonymous, do not report it. There's nothing we can do about it.
  2. If you report it as a GitHub user, be ready to follow up. It's very likely we will not be able to access the page ourselves, but we might be able to guide you in analyzing the issue to find out what is wrong.

It might be a Webcompat issue, but it might not be worth reporting.

Browser features (Reader View, Tabs, etc.)

Browsers all have specific features accessible through what we call the chrome (not to be confused with the Chrome browser). These are basically things which are part of the browser UI and assist in providing a better experience. The best way to report an issue you noticed there is to open a bug report directly in the browser vendor's reporting system.

This is not a Webcompat issue.

SSL errors

Some sites have very poor security practices or outdated certificates. So when the browser blocks you from accessing a website with a message saying "An error occurred during a connection to …", it is likely an issue with their SSL handling. You can test that yourself by entering the domain name into a validator. Another common reason is the HTTPS Everywhere add-on (and the like), which tries to force a redirect to https for all websites. Some sites do not have an https version; the site will then fail under https but work with http. Feel free to contact the site with a link to the SSL validation.

This is mostly not a Webcompat issue.

Private domain names / Proxies / Routers UI (not on the public internet)

Web UIs are used for many types of applications. Many of them are accessible only in the context of a company network, a local home network, etc.

  1. If the site is not accessible through the public internet and you are anonymous, do not report it. We will not be able to do anything about it.
  2. On the other hand, if you are a GitHub user and you are ready to help us with follow-up questions and diagnosis, it might be worth reporting.
After the Webcompat sprint (organizers, participants)

It's not about quantity, it's about quality. We are trying to improve the Web. A larger number of issues does not necessarily help us fix a browser or a website faster. On the other hand, being able to follow up on issues when there are unknown details or additional questions from the people triaging and diagnosing is invaluable. Reporting many issues without being able to answer questions afterward, because you have no time or are not interested, can waste many people's time.

You might want to do additional meetings for specifically following up on these issues. It's a great opportunity to learn how to debug and understand what is happening in a Web page.


Categories: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: Kuma Report, September 2017

Mozilla planet - to, 05/10/2017 - 02:00

Here’s what happened in September in Kuma, the engine of MDN Web Docs:

  • Ran Maintenance Mode Tests in AWS
  • Updated Article Styling
  • Continued Conversion to Browser Compat Data
  • Shipped Tweaks and Fixes

Here’s the plan for October:

  • Move MDN to AWS
  • Improve Performance of the Interactive Editor
Done in September Ran Maintenance Mode Tests in AWS

Back in March 2017, we added Maintenance Mode to Kuma, which allows the site content to be available when we can’t write to the database. This mode got its first workout this month, as we put MDN into Maintenance Mode in SCL3, and then sent an increasing percentage of public traffic to an MDN deployment in AWS.

We ran 3 tests in September. In the first, we just tried Maintenance Mode with production traffic in SCL3. In the second test we sent 5% of traffic to AWS, and in the third test we ramped it up to 15%, then 50%, and finally 100%. The most recent test, on October 3, included New Relic monitoring, which gave us useful data and pretty charts.

Web Transactions Time shows how the average request is handled by the different services. For the SCL3 side, you can see a steady improvement in transaction time from 125 to 75 ms, as more traffic is handled by AWS.

SCL3 transaction time

On the AWS side, the response time grows from 40 to 90 ms, as the DNS configuration sends 100% of traffic to the new cluster.

AWS transaction time

The Web Transaction Percentiles chart shows useful statistics beyond the average. For example, 99% of users see a response time of 375 ms or less, and the median is at 50 ms.

SCL3 transaction percent

On the AWS side, 99% of users see a response time of 350 ms or less (slightly better), and the median is at 100 ms (slightly worse).

AWS transaction percent
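For intuition about what the percentile charts report (the numbers below are made up, not New Relic data): the median reflects the typical request, while the 99th percentile is dominated by the slowest responses.

```python
import statistics

# Hypothetical response times in ms: 90 fast requests, 9 slower ones,
# and a single slow outlier.
samples = [50] * 90 + [120] * 9 + [375]

median = statistics.median(samples)
p99 = statistics.quantiles(samples, n=100)[98]  # 99th percentile

print(median)  # -> 50.0
print(p99)
```

A single outlier barely moves the median but dominates the 99th percentile, which is why the charts show both numbers.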

Finally, Throughput measures the requests handled per minute. SCL3 continued handling over 500 requests per minute during the test. This may be due to clients using old DNS records, or because KumaScript continues making requests to render out-of-date pages.

SCL3 throughput

AWS ramped up to over 2000 requests per minute during the test, easily handling the load of a US afternoon.

AWS throughput

We consider this a successful test. Our AWS environment can easily handle regular, read-only MDN traffic, with capacity to spare. We don’t expect MDN users to notice much of a difference when we make the change.

Updated Article Styling

We’re working on the next phase of redesigning MDN. We’re looking at ways to present MDN articles, to make them easier to read, to scan quickly, and to emphasize the most useful information. We’re testing some ideas with users, and some of the adjustments showed up on the site this month.

For example, MDN documents a lot of code in prose, such as HTML element and attribute names. In PR 4400, Stephanie Hobson added a highlight background to make these stand out.

Before PR 4400, a fixed-width font was used to display literals:

Before 4400 no highlight

After PR 4400, the literals stand out with a light grey background:

After 4400 highlight

There’s a lot that goes into making text on the web readable (see Stephanie’s slides from her talk at #a11yTOConf for some suggestions). One of the things we can do with the default style is to try to make lines about 50-75 characters wide. On the other hand, code examples don’t wrap well, and we want to make them stand out. We’re experimenting with style changes for line length with beta testers. For example, PR 4402 expands the sample output, making the examples stand out from the rest of the page.

Before PR 4402, the examples shared the text’s narrow width:

Before 4402 narrow

After PR 4402, the example is as wide as the code samples, and the buttons are restyled:

After 4402 narrow

We’ll test more adjustments with beta testers and in individual user tests. Some of these we’ll ship immediately, and others will inform the article redesign.

Continued Conversion to Browser Compat Data

The Browser Compat Data (BCD) project now includes all the HTML and JavaScript compatibility data from MDN. 1,500 MDN pages now generate their compatibility tables from this data. Only 4,500 more to go!

The BCD project was the most active MDN project in September. There were 159 commits over 90 pull requests. These PRs came from 18 different contributors, bringing the total to 50 contributors. There are over 58,000 additional lines in the project. 13 of these PRs are from Daniel D. Beck, who is joining the MDN team as a contractor.

This progress was made possible by Florian Scholz, Jean-Yves Perrier, and wbamberg, who quickly and accurately reviewed the PRs, working out issues and getting them merged. Florian has also started a weekly release of the npm package, and we’re up to mdn-browser-compat-data 0.0.8.

Shipped Tweaks and Fixes

There were many PRs merged in September:

Here are some of the highlights:

Planned for October

Work will continue to migrate to Browser Compat Data, and to fix issues with the redesign and the new interactive examples.

Move MDN to AWS

This week, we’ll complete our functional testing of MDN, making sure that page editing and other read/write tests are working, and that the rarely used features continue to work.

On Tuesday October 10, we’ll put SCL3 in Maintenance Mode again, move the database, and come back with MDN in AWS.

We’ve done a lot of preparation, but we expect something to break, so we’re planning on fixing AWS-related bugs in October. The AWS move will also allow us to improve our deployment processes, helping us ship features faster. If things go smoothly, we have plenty of other work lined up, such as style improvements, SEO-related tweaks, updating to Django 1.11, and getting KumaScript UI strings into Pontoon.

Improve Performance of the Interactive Editor

We’re continuing the beta test for the interactive editor. The feedback has been overwhelmingly positive, but we’re not happy with the page speed impact. We’ll continue work in October to improve performance. In the meantime, contractor Mark Boas is preparing examples for the launch, such as 26 examples for JavaScript expressions and operators (PR 286).

Categories: Mozilla-nl planet

Air Mozilla: Bugzilla Project Meeting, 04 Oct 2017

Mozilla planet - wo, 04/10/2017 - 22:00

Bugzilla Project Meeting The Bugzilla Project Developers meeting.

Categories: Mozilla-nl planet

Update on Firefox support for Windows XP and Vista

Mozilla Futurereleases - wo, 04/10/2017 - 20:57

Last year we announced that Windows XP and Vista users would be automatically moved to the Firefox Extended Support Release (ESR), ensuring them continued updates until at least September, 2017.

Today we are announcing June 2018 as the final end of life date for Firefox support on Windows XP and Vista. As one of the few browsers that continues to support Windows XP and Vista, Firefox users on these platforms can expect security updates until that date. Users do not need to take additional action to receive those updates.

We strongly encourage our users to upgrade to a version of Windows that is supported by Microsoft. Unsupported operating systems receive no security updates, have known exploits, and are dangerous for you to use.

For more information please visit the Firefox support page.

The post Update on Firefox support for Windows XP and Vista appeared first on Future Releases.

Categories: Mozilla-nl planet

Air Mozilla: The Joy of Coding - Episode 116

Mozilla planet - wo, 04/10/2017 - 19:00

The Joy of Coding - Episode 116 mconley livehacks on real Firefox bugs while thinking aloud.

Categories: Mozilla-nl planet

Air Mozilla: The Joy of Coding - Episode 115

Mozilla planet - wo, 04/10/2017 - 19:00

The Joy of Coding - Episode 115 mconley livehacks on real Firefox bugs while thinking aloud.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 56: Last Stop before Quantum

Mozilla planet - wo, 04/10/2017 - 18:02

Here at Mozilla, we’re extremely excited about next month’s release of Firefox Quantum (preview it today in Developer Edition!) which brings massive speed improvements, a brand new UI, and several new or improved Developer Tools.

But that’s next month. What about last week’s release of Firefox 56?

Browser Features

For users, Firefox 56 sports two major changes:

First, Firefox Screenshots is a brand new, built-in tool for capturing and (optionally) sharing images of web pages. The tool makes it easy to select regions of the page based on the underlying DOM structure, though both full-page and free-form screenshots are also available.

graphical image of a Firefox Screenshot

Of course, the Developer Tools retain their own screenshot capabilities. For example, you can right-click on any node in the Inspector to capture a screenshot of that node, or you can use the screenshot command in the Developer Toolbar.

Second, Firefox is now 64-bit by default on all operating systems, and existing 32-bit installations will automatically upgrade to 64-bit builds if supported by the underlying hardware.

What’s New for Developers

For developers, Firefox now supports “headless” mode on all operating systems, which makes it possible to run Firefox without actually displaying a window on the screen. This is remarkably useful for automated testing, both during local development and as part of a continuous integration (CI) pipeline.
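For instance, a CI job might launch headless Firefox straight from the command line. The sketch below only builds the invocation rather than running it, so the example stays self-contained; the URL is a placeholder, and the exact flags accepted by your Firefox version are worth double-checking.

```python
# Hedged sketch: assembling a headless Firefox invocation for an
# automated test (the command is built, not executed here).
def headless_cmd(url: str) -> list:
    return ["firefox", "-headless", url]

print(" ".join(headless_cmd("https://example.com")))
# -> firefox -headless https://example.com
```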


We’ve also put an enormous amount of effort into Firefox’s Developer Tools. You can read all about the current and upcoming features in Julian Descottes’s article, but we’re especially proud of our completely new debugger: as part of the “devtools.html” project, we completely rewrote the debugger as a modern web application, powered by React / Redux, and using standard HTML, JavaScript, and CSS.

You can find the source code for the debugger on GitHub.

Bidding Farewell to Legacy Add-Ons

Finally, Firefox 56 is the last release to support legacy APIs for add-ons. In their place we’ve created “WebExtensions,” a set of cross-browser extension APIs that we hope to standardize at the W3C. Since many WebExtension APIs are compatible with Chrome, Edge, and Opera, popular add-ons from other browsers (like the Vue.js DevTools) can run on Firefox without significant modification.

Unfortunately, the impending removal of old APIs with next month’s general release of Firefox Quantum will necessarily end support for several legacy add-ons. For example, the new APIs do not offer the degree of UI modification necessary to support Classic Theme Restorer. However, nearly 5,000 add-ons are already available using the new APIs, including Tree Style Tab, Tab Center Redux, and uBlock Origin. The APIs themselves are also still being developed and expanded, so expect to see greater capabilities with each release of Firefox.

In most cases, the upgrade to Firefox Quantum will be painless. Most popular add-ons will update to the new APIs before the release of Firefox Quantum, and Firefox will suggest replacements inside about:addons for those that don’t.

If you’ve ever built a Chrome extension, consider porting it from Chrome with the help of our and web-ext tools. In most cases your Chrome browser extension will run in Firefox or Microsoft Edge with just a few changes. Let us know how it goes. If you have ideas or questions, you can contact the team on the dev-addons mailing list or #extdev on IRC.

Categories: Mozilla-nl planet

Air Mozilla: Weekly SUMO Community Meeting October 4, 2017

Mozilla planet - wo, 04/10/2017 - 18:00

Weekly SUMO Community Meeting October 4, 2017 This is the SUMO weekly call

Categories: Mozilla-nl planet

Wladimir Palant: Observations on managed bug bounty programs

Mozilla planet - wo, 04/10/2017 - 17:38

I’ve been increasingly using Bugcrowd lately, a platform that manages security bug bounty programs for its clients and allows security researchers to contribute to a number of such programs easily. Previously, I’ve mostly reported security issues in Mozilla and Google products. Both companies manage their bug bounty programs themselves and are very invested in security, so Bugcrowd came as a considerable culture shock.

First of all, it appears that many companies consider bug bounty programs an alternative to building solid in-house security expertise. They will patch whatever bugs are reported, but they don’t seem to draw any conclusions about the deficiencies in their security architecture. Eventually, even the most insecure application will have enough patches applied that finding new issues takes too much effort for the monetary rewards offered. At that point, almost no new reports will be coming in and for the management it’s “mission accomplished” I guess. Sadly, with security being an afterthought the product remains inherently insecure, even the smallest change could potentially open new security holes.

Actually, Bugcrowd makes it very easy to take that route. The majority of the bug bounty programs are private, meaning that not only are security researchers forbidden to discuss the issues they find, they aren’t even allowed to discuss the existence of the bug bounty program. So the vendors don’t have to fear publicity when their product (which is sometimes supposed to be a security product) turns out to be full of critical bugs.

Communication with security researchers is also remarkable. Recently, I reported a major vulnerability that allowed websites to inject code into a browser extension. I noted all the things that this code could potentially do, such as reading cookies from any website. But my proof of concept was limited to retrieving the user’s data and showing their user name. Here is a reply that I received:

I was able to verify the issue. Can you create another poc, that reads some sensitive information(like passwords for example), so we can make a case for a higher priority? For now, this seems equivalent to a reflected XSS/ P3 to me.

I’m used to providing a minimal proof of concept, and this isn’t the first time that I was asked to demonstrate the issue “properly” on Bugcrowd. This comment finally made me realize the problem: with Mozilla and Google I used to communicate with developers. On Bugcrowd however, the triaging is often being done by people who cannot analyze a proof of concept and merely try to categorize an issue in terms of the vulnerability rating taxonomy.

With out-of-the-box vulnerabilities, and particularly vulnerabilities involving browser extensions, that categorization becomes a very non-trivial task. It doesn’t help that many companies apparently outsourced the first contact to Bugcrowd’s “experts.” These might indeed have great knowledge of security issues, but occasionally they will have even less product knowledge than me. As a consequence, the important information in your report isn’t considered the line of code causing the issue but rather the proof of concept which shows exactly what harm one could do with it. There is a clear monetary incentive for that: you won’t be paid more if you can pinpoint the issue in the application or provide good recommendations, yet you will definitely get paid less if you underestimate or fail to communicate the scope of the issue you discovered.

The consequence for me (and others before me as well it seems) is that participating on Bugcrowd requires an attitude change. Normally, I report security issues because I want a product to be secure. So I will report all issues I notice no matter how minor, and I will occasionally provide recommendations on addressing this entire class of problems. With Bugcrowd, this approach doesn’t work. For example, the few clickjacking issues I reported didn’t go anywhere because the proof of concept wasn’t reliable enough. I could probably produce a reliable proof of concept, but for clickjacking this is lots of work. Yet clickjacking is a P4 issue that will typically be rewarded with $100. Investing so much time in minor issues just doesn’t pay off, and neither does writing recommendations that most likely won’t make their way to the developers. Worse yet, too many reports of minor issues will degrade my rating on Bugcrowd and prevent me from being invited to private bug bounty programs.

Now Bugcrowd isn’t the only platform in this field, HackerOne being the other big player. I don’t have enough experience with HackerOne yet, so I cannot tell whether the same issues are present there. If somebody knows, I’d love to hear.


Chris H-C: Anatomy of a Firefox Update

Mozilla planet - wo, 04/10/2017 - 17:26

Alessio (:Dexter) recently landed a new ping for Firefox 56: the “update” ping with reason “ready”. It lets us know when a client’s Firefox has downloaded and installed an update and is only waiting for the user to restart the browser for the update to take effect.

In Firefox 57 he added a second reason for the “update” ping: reason “success”. This lets us know when the user’s started their newly-updated Firefox.

I thought I might as well see what sort of information we could glean from this new data, using the recent shipping of the new Firefox Quantum Beta as a case study.

This is exploratory work and you know what that means[citation needed]: Lots of pretty graphs!

First: the data we knew before the “update” ping: Nothing.

Well, nothing specific. We would know when a given client would use a newly-released build because their Telemetry pings would suddenly have the new version number in them. Whenever the user got around to sending them to us.

We do have data about installs, though. Our stub installer lets us know how and when installs are downloaded and applied. We compile those notifications into a dataset called download_stats. (for anyone who’s interested: this particular data collection isn’t Telemetry. These data points are packaged and sent in different ways.) Its data looks like this:

[Figure: Recent Beta Downloads, 2017-09-29]

Whoops. Well that ain’t good.

On the left we have the trailing edge of users continuing to download installs for Firefox Beta 56 at the rate of 50-150 per hour… and then only a trace level of Firefox Beta 57 after the build was pushed.

It turns out that the stub installer notifications were being rejected as malformed. Luckily we kept the malformed reports around so that after we fixed the problem we could backfill the dataset:

[Figure: Recent Beta Downloads, 2017-10-04]

Now that’s better. We can see up to 4000 installs per hour of users migrating to Beta 57, with distinct time-of-day effects. Perfectly cromulent, though the volume seems a little low.

But that’s installs, not updates.

What do we get with “update” pings? Well, for one, we can run queries rather quickly. Querying “main” pings to find the one where a user switched versions requires sifting through terabytes of data. The query below took two minutes to run:

[Figure: Users Updating to Firefox Quantum Beta 57, per hour]

The red line is update/ready: the number of pings we received in that hour telling us that the user had downloaded an update to Beta 57 and it was ready to go. The blue line is update/success: the number of pings we received that hour telling us the user had started their new Firefox Quantum Beta instance.

And here it is per-minute, just because we can:

[Figure: Users Updating to Firefox Quantum Beta 57, per minute]

September 30 and October 1 were the weekend. As such, we’d expect their volumes to be lower than the weekdays surrounding them. However, looking at the per-minute graph for update/ready (red), why is Friday the 29th the same height as Saturday the 30th? Fridays are usually noticeably busier than Saturdays.

Friday was Navratri in India (one of our largest markets for Beta) but that’s a multi-day festival that started on the Wednesday (and other sources for client data show only a 15% or so dip in user activity on that date in India), so it’s unlikely to have caused a single day’s dip. Friday wasn’t a holiday at all in any of our other larger markets. There weren’t any problems with the updater or “update” ping ingestion. There haven’t been any dataset failures that would explain it. So what gives?

It turns out that Friday’s numbers weren’t low: Saturday’s were high. In order to improve the stability of what was going to become the Firefox 56 release we began on the 26th to offer updates to the new Firefox Quantum Beta to only half of updating Firefox Beta users. To the other half we offered an update to the Firefox 56 Release Candidate.

What is a Release Candidate? Well, for Firefox it is the stabilized, optimized, rebuilt, rebranded version of Firefox that is just about ready to ship to our release population. It is the last chance we have to catch things before it reaches hundreds of millions of users.

It wasn’t until late on the 29th that we opened the floodgates and let the rest of the Beta users update to Beta 57. This contributed to a higher than expected update volume on the 30th, allowing the Saturday numbers to be nearly as voluminous as the Friday ones. You can actually see exactly when we made the change: there’s a sharp jump in the red line late on September 29 that you can see clearly on both “update”-ping-derived plots.

That’s something we wouldn’t see in “main” pings: they only report what version the user is running, not what version they downloaded and when. And that’s not all we get.

The “update”-ping-fueled graphs have two lines. This rather abruptly piques my curiosity about how they might relate to each other. Visually, the update/ready line (red) is almost always higher than the update/success line (blue). This means that we have more clients downloading and installing updates than we have clients restarting into the updated browser in those intervals. We can count these clients by subtracting the blue line from the red and summing over time:

[Figure: Outstanding Updates for Users Updating to Firefox Quantum Beta 57]
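The subtraction-and-sum described above can be sketched with a toy calculation (the ping counts here are invented for illustration, not real Telemetry data):

```python
# Toy sketch of the "red minus blue" calculation: hourly counts of
# update/ready pings minus update/success pings, summed over time,
# estimate how many clients have an update staged but not yet applied.
ready_per_hour = [100, 120, 90, 80]    # update/ready pings (made up)
success_per_hour = [60, 90, 85, 80]    # update/success pings (made up)

outstanding = sum(r - s for r, s in zip(ready_per_hour, success_per_hour))
print(outstanding)  # 75 clients updated-but-not-restarted in this window
```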

There are, as of the time I was drafting this post, about one half of one million Beta clients who have the new Firefox Quantum Beta… but haven’t run it yet.

Given the delicious quantity of improvements in the new Firefox Quantum Beta, they’re in for a pleasant surprise when they do.

And you can join in, if you’d like.


(NOTE: earlier revisions of this post erroneously said download_stats counted updater notifications. It counts stub installer notifications. I have reworded the post to correct for this error. Many thanks to :ddurst for catching that)


Gervase Markham: Q3 MOSS Update

Mozilla planet - wo, 04/10/2017 - 10:00

The Mozilla Open Source Support (MOSS) update for Q3 has been published on the main Mozilla blog. Highlights include the launch of our pilot program focussed on supporting open source in India, a large grant to Ushahidi, and a very successful audit of the chrony NTP daemon.


Mozilla Security Blog: Treating data URLs as unique origins for Firefox 57

Mozilla planet - wo, 04/10/2017 - 09:16

The data URL scheme provides a mechanism which allows web developers to inline small files directly in an HTML (or also CSS) document. The main benefit of data URLs is that they speed up page load time because the inlining of otherwise external resources reduces the number of HTTP requests a browser has to perform to load data.

Unfortunately, criminals also utilize data URLs to craft attack pages in an attempt to gather usernames, passwords and other confidential information from innocent users. Data URLs are particularly attractive to attackers because they allow them to mount attacks without requiring them to actually host a full website. Instead, scammers embed the entire attack code within the data URL, which previously inherited the security context of the embedding element. In turn, this inheritance model opened the door for Cross-Site-Scripting (XSS) attacks.

Rather than inheriting the origin of the settings object responsible for the navigation, data URLs will be treated as unique origins for Firefox 57. In other words, data URLs loaded inside an iframe are not same-origin with their parent document anymore.

Let’s consider the following example:
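The code listing itself is missing from this copy of the post; a minimal reconstruction of the pattern being described (invented markup, arranged so that foo() sits on line 8 and the data URL iframe on line 13, matching the references in the next paragraph) could look like:

```html
<!DOCTYPE html>
<html>
  <head>
    <script>
      // Defined by the embedding document; in Firefox 56 and older the
      // data: URL iframe below could call this, because the iframe
      // inherited the parent's origin.
      function foo() { alert("hello from the embedding page"); }
    </script>
  </head>
  <body>
    <!-- In Firefox 57 this cross-origin access is blocked. -->
    <iframe src="data:text/html,<script>parent.foo();</script>"></iframe>
  </body>
</html>
```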

In Firefox version 56 and older, the script within the data URL iframe on line 13 was able to access objects from the embedding context, because data URLs inherited the security context and hence were considered to be same-origin. In this specific example, the script within the data URL iframe was able to call the function foo() on line 8, even though foo() was defined by the including context, which should be treated as a different security context.

Starting with Firefox 57, data URLs loaded inside an iframe will be considered cross-origin. Not only will that behavior mitigate the risk of XSS, it will also make Firefox standards compliant and consistent with the behavior of other browsers. In Firefox 57, an attempt to reach content from a different origin (like the one from line 13) will be blocked and the following message will be logged to the console:

Note that data URLs that do not end up creating a scripting environment, such as those found in img elements, will still be considered same-origin.

For the Mozilla Security Team:
Christoph Kerschbaumer, Ethan Tseng, Henry Chang & Yoshi Huang

The post Treating data URLs as unique origins for Firefox 57 appeared first on Mozilla Security Blog.


Daniel Stenberg: Say hi to curl 7.56.0

Mozilla planet - wo, 04/10/2017 - 08:51

Another curl version has been released into the world. curl 7.56.0 is available for download from the usual place. Here is some news I think is worth mentioning this time…

An FTP security issue

A mistake in the code that parses responses to the PWD command could make curl read beyond the end of a buffer. Max Dymond figured it out, and we’ve released a security advisory about it. It is our 69th security vulnerability counted from the beginning and the 8th reported in 2017.

Multiple SSL backends

Since basically forever you’ve been able to build curl with a selected SSL backend to make it get a different feature set or behave slightly differently – or use a different license or get a different footprint. curl supports eleven different TLS libraries!

Starting now, libcurl can be built to support more than one SSL backend! You specify all the SSL backends at build-time and then you can tell libcurl at run-time exactly which of the backends it should use.

The selection can only happen once per invocation so there’s no switching back and forth among them, but still. It also of course requires that you actually build curl with more than one TLS library, which you do by telling configure all the libs to use.

The first user of this feature that I’m aware of is Git for Windows, which can select between the Schannel and OpenSSL backends.

curl_global_sslset() is the new libcurl call to do this with.

This feature was brought by Johannes Schindelin.


A new MIME API

The currently provided API for creating multipart formposts, curl_formadd, has always been considered a bit quirky and complicated to work with. Its extensive use of varargs is to blame for a significant part of that.

Now, we finally introduce a replacement API to accomplish basically the same features but also with a few additional ones, using a new API that is supposed to be easier to use and easier to wrap for bindings etc.

Introducing the mime API: curl_mime_init, curl_mime_addpart, curl_mime_name and more. See the postit2.c and multi-post.c examples for some easy to grasp examples.

This work was done by Patrick Monnerat.

SSH compression

The SSH protocol allows clients and servers to negotiate the use of compression when communicating, and now curl can too. curl has the new --compressed-ssh option and libcurl has a new setopt called CURLOPT_SSH_COMPRESSION using the familiar style.

Feature worked on by Viktor Szakats.


SSLKEYLOGFILE

Peter Wu and Jay Satiro have worked on this feature that allows curl to store SSL session secrets in a file if the SSLKEYLOGFILE environment variable is set. This is normally the way you tell Chrome and Firefox to do this, and is extremely helpful when you want to wireshark and analyze a TLS stream.

This is still disabled by default due to its early days. Enable it by defining ENABLE_SSLKEYLOGFILE when building libcurl and set environment variable SSLKEYLOGFILE to a pathname that will receive the keys.
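As a configuration sketch (the file path and URL are illustrative, and this assumes a curl build configured with ENABLE_SSLKEYLOGFILE):

```shell
# Point the keylog file somewhere, then run any TLS transfer:
export SSLKEYLOGFILE="$HOME/tls-keys.txt"
curl -s -o /dev/null https://example.com/

# Wireshark can then decrypt the captured stream via:
# Preferences > Protocols > TLS > (Pre)-Master-Secret log filename
```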


Numbers

This, the 169th curl release, contains 89 bug fixes done during the 51 days since the previous release.

47 contributors helped making this release, out of whom 18 are new.

254 commits were done since the previous release, by 26 authors.

The top-5 commit authors this release are:

  1. Daniel Stenberg (116)
  2. Johannes Schindelin (37)
  3. Patrick Monnerat (28)
  4. Jay Satiro (12)
  5. Dan Fandrich (10)

Thanks a lot everyone!

(picture from pixabay)
