Mozilla Nederland
The Dutch Mozilla community

Chris H-C: When, Exactly, does a Firefox User Update, Anyway?

Mozilla planet - wo, 27/06/2018 - 18:01

There’s a brand new Firefox version coming. There always is, since we adopted the Rapid Release scheme way back in 2011 for Firefox 5. Every 6-8 weeks we instantly update every Firefox user in the wild with the latest security and performance…

Well, no. Not every Firefox user. We have a dashboard telling us only about 90% of Firefox users are presently up-to-date. And “up-to-date” here means any user within two versions of the latest:

Pie chart of "Up to date and out of date client distribution" showing 83.9% up to date, 8% out of date and potentially of concern, and 2.1% out of date and of concern.

Why two versions? Shouldn’t “Up to date” mean up-to-date?

Say it’s June 26, 2018, and Firefox 61 is ready to go. The first code new to this version landed March 13, and it has spent the past three months under intense development and scrutiny. During its time as Firefox Nightly 61.0a1 it had features added, performance and security improvements layered in, and some rearranging of home page settings. While it was known as Firefox Beta 61.0b{1-14}, it stabilized, with progressively fewer code changes accepted as we prepared it for our broadest population.

So we put Firefox 61.0 up on the server, ring the bell and yell “Come and get it~!”?

No.

Despite our best efforts, our Firefox Beta-running population is not as diverse as our release population. The release population has thousands of GPU+driver combinations 61 has never run on before. Some users will use it for kiosks and internet cafes in areas of the world where we have no beta users. Other users will have combinations of addons and settings that are unique to them alone… and we’ll be shipping a fresh browsing experience to each and every one of them.

So maybe we shouldn’t send it to everyone at once in case it breaks something for those users whose configurations we haven’t had an opportunity to test.

As a result, our update servers will, upon being asked by your Firefox instance if there is an update available, determine whether or not to lie. Our release managers will choose to turn the release dial to, say, 10% to begin. So when your Firefox asks if there is an update, nine out of ten times we will lie and say “No, try again later.” And that random response is cached so that everyone else trying for the next one or two minutes will get the same response you did.

At 10%, roughly one out of every ten of those 1-2 minute periods will tell the truth: “Yes, there is an update and you can find it here: <url>”. This adds a bit of a time-delay between “releasing” a new version and users having it.
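To make that concrete, here is a rough, purely illustrative sketch of a throttled, cached “is there an update?” answer. This is not Mozilla’s actual update-server code; the window length, the function name, and the dice-roll approach are assumptions made for illustration.

const CACHE_WINDOW_MS = 2 * 60 * 1000; // answers are cached for roughly 1-2 minutes

let cachedAnswer = null;
let cachedAt = 0;

function isUpdateOffered(throttleRate) {
  // throttleRate is the release dial, e.g. 0.1 while throttled to 10%
  const now = Date.now();
  if (cachedAnswer === null || now - cachedAt > CACHE_WINDOW_MS) {
    // One decision per window: at 10%, roughly one window in ten
    // "tells the truth" that an update is available.
    cachedAnswer = Math.random() < throttleRate;
    cachedAt = now;
  }
  // Everyone asking within the same window gets the same answer.
  return cachedAnswer;
}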

Eventually, after a couple of days or maybe up to a week, we will turn the dial up to 100% and everyone will be able to receive the update and in a matter of hours the entire population will be up-to-date and…

No.

When does a Firefox instance ask for an update? We “only” release a new update every six-to-eight weeks, so it would be wasteful to be asking all the time. When -should- we ask?

If you’ve ever listened to a programmer complain about time, you might have an inkling of the complexity involved in simply trying to figure out when to ask if there’s an update available.

The simplest two cases of Firefox instances asking for updates are: “When the user tells it to”, and “If the Firefox instance was released more than 63 days ago.”

For the first of these two cases, you can at any time open Help > About Firefox and it will check to see if your Firefox is up-to-date. There is also a button labeled “Check for Updates” in Preferences/Options.

For the second, we have a check during application startup that compares the date we built your Firefox to the date your computer thinks it is. If they differ by more than 63 days, we will ask for an update.

We can’t rely on users to know how to check for updates, and we don’t want our users to wait over two -more- months to benefit from all of our hard work.

Maybe we should check twice a day or so. That seems like a decent compromise. So that’s what we do for Firefox release users: we check every 12 hours or so. If the user isn’t running Firefox for more than 12 hours, then when they start it up again we check against the client’s clock to see if it’s been 12 hours since our last check.

Putting this all together:

Firefox must be running. It must have been at least 12 hours since the last time we checked for updates. If we are still throttling updates to, say, 10% we (or the client who asked previously within the past 1-2min) must be lucky to be told the truth that there is an update available. Firefox must be able to download the entire update (which can be interrupted if the user closes Firefox before the download is complete). Firefox must be able to apply the update. The user must restart Firefox to start running the new version.
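As an illustrative summary of that chain of conditions, here is a minimal sketch from the client’s point of view. This is not Firefox’s actual update code; the constant, the function names, and the return values are invented for illustration.

const TWELVE_HOURS = 12 * 60 * 60 * 1000;

async function updateStatus(lastCheckTime, askUpdateServer) {
  // Firefox must be running, and it must be ~12 hours since the last check
  // (a build older than 63 days also triggers a check at startup).
  if (Date.now() - lastCheckTime < TWELVE_HOURS) {
    return "not yet: checked recently";
  }

  // While the release is throttled, the server may answer "no update",
  // and that (cached) answer stands for the next minute or two.
  const offer = await askUpdateServer();
  if (!offer.updateAvailable) {
    return "not yet: no update offered";
  }

  // The full update must download (closing Firefox can interrupt it),
  // it must apply cleanly, and the user still has to restart.
  return "download, apply, then wait for a restart";
}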

And then, and only then, is that one Firefox user running the updated Firefox.

What does this look like for an entire population of Firefox users whose configurations and usage behaviours, as I already mentioned, are the most diverse of all of our user populations?

That’ll have to wait for another time, as it sure isn’t a simpler story than this one. For now, you can enjoy looking at some graphs I used to explore a similar topic, earlier.

:chutten

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Privacy Progress and Protections for California

Mozilla planet - wo, 27/06/2018 - 03:20

Update: “We’re pleased to see California passed a sweeping privacy law to protect your data and right to choose how it’s used. It’s still not perfect, but we’re not letting perfect be the enemy of good – and Mozilla will work with legislators in California to strengthen it.” – Denelle Dixon, Chief Operating Officer

People in California are in the midst of an important discussion around improving privacy protections. This comes in the wake of Cambridge Analytica and the European GDPR going into effect – but it’s a discussion that has been a long time in the making. We are excited to see the potential for progress.

Californians are considering two competing approaches: a narrow ballot initiative on privacy and a broader privacy bill, the California Consumer Privacy Act or CalCPA, currently moving quickly through the legislature. Today, Mozilla is weighing in to endorse the broader bill. While we are also supportive of the ballot initiative, we believe the bill is the better option for Californians.

The ballot initiative would allow anyone to know what kinds of data a company collects and would require an opt-out for data selling to third parties. These are important first steps. The bill takes bigger steps forward, giving consumers rights more in line with the GDPR, like access to your data and correction or deletion of your data.

The bill is moving through the legislature quickly and still has flaws. Mozilla has suggested substantive changes that could clarify its requirements and better protect Californians’ privacy. But overall the bill is a very positive move that offers huge potential to advance the privacy conversation in the United States, as other states look to these concepts and even apply them at the national level.

Mozilla stands with Californians – and our community globally – to support positive progress in protecting user privacy. We look forward to working with policymakers to ensure that the draft puts users first.

The post Privacy Progress and Protections for California appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Ryan Harter: Planning Data Science is hard: EDA

Mozilla planet - wo, 27/06/2018 - 02:19

Data science is weird. It looks a lot like software engineering but in practice the two are very different. I've been trying to pin down where these differences come from.

Michael Kaminsky hit on a couple of key points in his series on Agile Management for Data Science on Locally Optimistic. In Part II Michael notes that Exploratory Data Analyses (EDA) are difficult to plan for: "The nature of exploratory data analysis means that the objectives of the analysis may change as you do the work." - Bingo!

I've run into this problem a bunch of times when trying to set OKRs for major analyses. It's nearly impossible to scope a project if I haven't already done some exploratory analysis. I didn't have this problem when I was doing engineering work. If I had a rough idea of what pieces I needed to stitch together, I could at least come up with an order-of-magnitude estimate of how long a project would take to complete. Not so with Data Science: I have a hard time differentiating between analyses that are going to take two weeks and analyses that are going to take two quarters.

That's all. No deep insight. Just a +1 and a pointer to the folks who got there first.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Supreme Court’s Decision to Uphold Trump’s Travel Ban is a Disappointment

Mozilla planet - wo, 27/06/2018 - 00:28

We are deeply disappointed by today’s Supreme Court 5-4 ruling which provides a legal basis for the Trump Administration to prohibit individuals from Libya, Iran, Somalia, Syria, Yemen, North Korea and Venezuela from entering the United States. We agree with the four dissenting Justices that the majority ignored key facts that overwhelmingly showed that this is a religious ban that “masquerades behind a facade of national-security concerns.”

At issue is the Trump Administration’s third Executive Order on immigration, which differed from the original January 2017 and March 2017 orders by the removal of Iraq and Sudan, and the addition of three non-Muslim-majority countries. Five Justices held that the President has broad discretion to protect national security, and irrespective of Trump’s personal beliefs or statements, his action was justified because he consulted with other agencies and officials on whether people from certain countries posed security risks.

This was harshly criticized in the two dissenting opinions as a highly abridged account that ignores:

  • repeated anti-Muslim statements without apology by Trump and some of the officials with whom he consulted;
  • public statistics showing that people eligible to receive a waiver under the very terms of the Order are being denied;
  • weak analysis and preparation done in the agency review;
  • lack of exemptions for people in need, such as asylum seekers; and
  • efforts to edit the Executive Orders to make them more justifiable based on territory than religion

Cumulatively, the dissenting justices believed there was enough evidence to hold the Executive Order unlawful. Unfortunately, history will not reflect that.  Since Trump issued his first travel ban, Mozilla and 100+ tech companies filed several “friend of the court” briefs warning against its adverse consequences and reminding the Court of the importance of diversity.

The internet is built, maintained, and governed through a myriad of global civil society, private sector, government, academic, and developer communities. Travel across borders is central for their cooperation and exchange of ideas and information. It is also necessary for a global workforce that reflects the diversity of the internet itself.

Today’s opinion turns “a blind eye to the pain and suffering the Proclamation inflicts upon countless families and individuals.” We will continue our fight to protect the internet. Countries may arbitrarily close their borders, but the internet must remain open and accessible to everyone.

The post Supreme Court’s Decision to Uphold Trump’s Travel Ban is a Disappointment appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: Cracking the Code — how Mozilla is helping university students contribute to Open Source

Mozilla planet - di, 26/06/2018 - 19:40

After a year of research, Mozilla’s Open Source Student Network (OSSN) is launching a pilot program to tackle the challenges around how Open Source projects effectively support university students as they work towards their first code contribution.

Despite an abundance of evidence that the most valuable contributions to a project often come from people under the age of 30, Open Source projects often struggle to onboard and retain university students as new code contributors.

Students who have expressed interest in contributing often feel intimidated, feel they don’t have the appropriate skills, or aren’t able to find a project to begin with.

Based on our recent research, we identified that more than 50% of university students within our network who had tried to contribute code to an Open Source project had been unable to make a successful contribution because of issues they encountered during their contribution journey.

From identifying a project to work on, exploring the codebase, setting up the development environment, writing code and even when trying to merge their code, students faced issues which drove them away from the project before they completed their first contribution.

User journey: code contribution to an Open Source project

How we’re answering the big questions

Our research uncovered a series of questions related to each portion of the user journey.

We’re designing a series of pilots, each of which aims to answer specific questions, connected to different parts of the typical user’s journey, such as:

  • What do the students care most about when evaluating whether or not to contribute to a project?
  • What is the best mentorship model for university students?
  • What is more incentivizing in the onboarding process: To code a dummy issue/bug or to solve an actual issue in a real world project?
  • What is the better way to engage students in a project — presenting them with suggested bugs (bug matching) or allowing them to find issues on their own through exploration?

As part of the pilots and in collaboration with Mozilla projects like Common Voice, Devtools, Firefox Focus for Android and external organizations like the GNOME Foundation, the Linux Foundation and Wikimedia, the OSSN is building new ways for students to discover, interact and engage with Open Source projects.

One of these pilots is…

An example of one of these pilots is the “Project Overview Pilot”. The aim of this particular pilot is to answer a question from the “discovery” portion of the user journey: how do students evaluate whether they want to contribute to a project?

Based on a survey we released at the beginning of the year, we discovered that students care equally about the mission of the project and the technical skills required for contribution. Here are the top four criteria for project selection:

  1. The mission of the project
  2. The technology (programming language/libraries/framework etc.)
  3. The time needed for setting up the development environment
  4. Whether a community exists and how to connect

While the mission and the technical requirements of a project are often well presented and visible, we can argue that the other two criteria are not properly surfaced.

Our assumption for our pilot is that by surfacing this information, students will identify the right project for them to contribute to and hence will contribute code with more confidence, less effort and in a shorter time.

In order to validate our assumption, we created the following platform for showcasing all the relevant information students care about at a glance for a broad set of diverse, healthy, active and inclusive Open Source projects.

Project Overview Pilot

What’s happening next

From now until October 2018, along with our key collaborators, we will continue building and providing pilots for our students to help them contribute code to their favorite projects while growing their skills around a diverse set of technologies. Furthermore, throughout these pilots, students will be helping the network by providing useful insights and metrics, which will be used to refine the onboarding experience of projects in the future.

If you are a student from an American and/or a Canadian post-secondary institution or you know students who might be interested in participating in this initiative, please share this link with them.

If you are an organization or project interested in supporting our initiative by having us surface your project’s contribution opportunities within our network, please reach out at christos AT mozilla DOT com.

Cracking the Code — how Mozilla is helping university students contribute to Open Source was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

The Mozilla Blog: New Firefox Releases Now Available

Mozilla planet - di, 26/06/2018 - 15:02

Even though summer is here in the northern hemisphere, we’re not taking any breaks. Firefox continues our focus on making a browser that is smarter and faster than any other, so you can get stuff done before you take that much needed outdoor stroll.

Key highlights for this update include:
  • Add Search Engines: users can now more easily add custom search engines to the location bar in Firefox, enabling quicker and more streamlined search functionality. Imagine searching for an actor’s name; now with Firefox you can automatically search through IMDB in the location bar.
  • Tab Warming: speedier response times are now available when switching between tabs because Firefox is preemptively loading tabs when you’re hovering over them.
  • Retained Display Lists: access to the pages you frequently visit quicker, thanks to retained display lists. This new functionality locally remembers content that has been visited previously, so it doesn’t need to be reloaded each time you go to the site.
  • Accessibility Tools Inspector: ability for creators and developers to now easily make pages for users with accessibility requirements. Firefox is committed to a stronger, more accessible browser and this new tool supports that mission.
  • WebExtension Tab Management: Sometimes you’re listening to music in a tab, but you don’t really want that tab taking up space as you browse the web. In today’s new release, WebExtensions can now hide tabs as well as manage the behavior of the browser when a tab is opened or closed, so you can expect to see exciting new extensions that take advantage of these features in the near future.

For additional information on developer news in today’s update, visit here.  For more details on all of today’s news, you can review our release notes here.

Check out and download the latest version of Firefox Quantum available here.

 

The post New Firefox Releases Now Available appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 61 – Quantum of Solstice

Mozilla planet - di, 26/06/2018 - 15:02

Summertime[1], and the browser is speedy! Firefox 61 is now available, and with it come new performance improvements that make the fox faster than ever! Let’s take a short tour of the highlights:

Parallel CSS Parsing

A road sign that says “parallel parsing only” (image adapted from mjfmjfmjf on flickr)

Quantum CSS boosted our style system by calculating computed styles in parallel. Firefox 61 adds even more power to Quantum CSS by also parallelizing the parsing step! The extra horsepower pays real dividends on sites with large stylesheets and complex layouts.

Retained Display Lists

One of the final steps before a page is painted onto the screen is building a list of everything that is going to be drawn, from lowest z-order to highest (“back to front”). In Firefox, this is called the display list, and it’s the final chance to determine what’s on screen before painting begins. The display list contains backgrounds, borders, box shadows, text, everything that’s about to be pixels on the screen.

Historically, the entire display list was computed prior to every new paint. This meant that if a 60fps animation was running, the display list would be computed 60 times a second. For complex pages, this gets to be costly and can lead to reduced script execution budget and, in severe cases, dropped frames. Firefox 61 enables “retained” display lists, which, as the name would suggest, are retained from paint to paint. If only a small amount of the page has changed, the renderer may only have to recompute a small portion of the display list. With retained display lists enabled, the graphics team has observed a near 40% reduction in dropped frames due to list building! We’ve decided to pass those savings on to you, and are enabling the initial work in Firefox 61.

You can dive deeper into display lists in Matt Woodrow’s recent post here on Hacks.

Pretty snazzy stuff, but there’s more than just engine improvements! Here are few other new things to check out in this release:

Accessibility Inspector

A great website is one that works for everyone! The web platform has accessibility features baked right in that let users with assistive technologies make use of web content. Just as the JS engine can see and interact with a tree of elements on a page, there is a separate “accessibility tree” that is available to assistive technologies so they can better describe and understand the structure and UI of a website. Firefox 61 ships with an Accessibility Inspector that lets developers view the computed accessibility tree and better understand which aspects of their site are friendly to assistive technologies and spot areas where accessible markup is needed. Useful for spotting poorly-labeled buttons and for debugging advanced interactions annotated with ARIA.
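As a small, hypothetical example of the kind of problem the inspector surfaces, consider an icon-only button with no accessible name; the selector and label below are made up for illustration.

// An icon-only button shows up in the accessibility tree as a nameless
// "button" node, so a screen reader has nothing useful to announce.
const searchButton = document.querySelector('#search-button'); // hypothetical markup

// Giving it an accessible name (here via aria-label) is what the
// Accessibility Inspector would then report as the node's computed name.
searchButton.setAttribute('aria-label', 'Search');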

You can learn more about how to use the Accessibility Inspector in Marco Zehe’s introductory blog post.

Speaking of DevTools…
  • The tabs for each panel are now draggable, so you can put your most-used tools where you want them!
  • No need to open Responsive Design Mode to enable simulated network throttling – it’s now also available from the top menu of the Network pane.
  • See a custom property in the Inspector? Hover to see its value. While typing CSS, custom properties autocomplete, including color swatches when the value is a color.
Tab Management

One of the most popular uses of browser extensions is to help users better manage (read: hoard) their open tabs. Firefox 61 ships with new extension APIs to help power users use tabs more powerfully! Extensions with the tabs permission can now hide and restore tabs on the browser’s tab bar. The hidden tabs are still loaded, they’re simply not shown. Extensions for productivity and organization can now swap out groups of tabs based on task or context. Firefox also includes an always-available menu that lists all open tabs regardless of their hidden state.
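Here is a minimal sketch of what that could look like from a background script. It is not taken from a real extension: the function names and the audio-tab scenario are invented, and it assumes the extension’s manifest.json already requests the permissions needed for querying and hiding tabs.

// Hide background tabs that are playing audio, without unloading them.
async function hideAudibleBackgroundTabs() {
  const tabs = await browser.tabs.query({ audible: true, active: false });
  const tabIds = tabs.map(tab => tab.id);
  await browser.tabs.hide(tabIds); // hidden tabs keep running, off the tab bar
  return tabIds;
}

// Bring them back later.
async function restoreTabs(tabIds) {
  await browser.tabs.show(tabIds);
}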

Wrapping Up

Curious about everything that’s new or changed in Firefox 61? You can view the Release Notes or see the full platform changelog on MDN Web Docs.

Have A Great Summer!

[1] In the Northern hemisphere, that is. I mean, I hope the Southern hemisphere has a great summer too; it’s just a ways off.

Categorieën: Mozilla-nl planet

Ryan Harter: You can't do data science in a GUI

Mozilla planet - di, 26/06/2018 - 09:00

I came across You can’t do data science in a GUI by Hadley Wickham a little while ago. He hits on a lot of the same problems I mentioned in Don’t make me code in your text box. Take a look if you have some time. In the first 15 minutes he covers the argument against coding in a GUI. After that he makes a plug for R and the tidyverse.

Categorieën: Mozilla-nl planet

Mozilla Future Releases Blog: Testing Firefox Monitor, a New Security Tool

Mozilla planet - ma, 25/06/2018 - 21:59

From shopping to social media, the average online user will have hundreds of accounts requiring passwords. At the same time, the number of user data breaches occurring each year continues to rise dramatically. Understandably, people are now more worried about internet-related crimes involving personal and financial information theft than conventional crimes. In order to help keep personal information and accounts safe, we will be testing user interest in a security tool that lets users check if one of their accounts has been compromised in a data breach.

We decided to address a growing need for account security by developing Firefox Monitor, a proposed security tool that is designed for everyone, but offers additional features for Firefox users. Visitors to the Firefox Monitor website will be able to check (by entering an email address) to see if their accounts were included in known data breaches, with details on sites and other sources of breaches and the types of personal data exposed in each breach. The site will offer recommendations on what to do in the case of a data breach, and how to help secure all accounts. We are also considering a service to notify people when new breaches include their personal data.

Partnership with HaveIBeenPwned.com
In order to create Firefox Monitor, we have partnered with HaveIBeenPwned.com (HIBP). HIBP is a valuable service, operated by Troy Hunt, one of the most renowned and respected security experts and bloggers in the world. Troy is best known for the HIBP service, which includes a database of email addresses that are known to have been compromised in data breaches. Through our partnership, Firefox is able to check your email address against the HIBP database in a private-by-design way. You can find Troy’s blog post on the partnership here.

How does it work?
It is important that we not violate our users’ privacy expectations with respect to the handling of their email address. As such, we’ve worked closely with HIBP and Cloudflare to create a method of anonymized data sharing for Firefox Monitor, which never sends your full email address to a third party, outside of Mozilla. You can read the full details of the solution here.

What will we be testing?
At this stage, we are testing initial designs of the Firefox Monitor tool in order to refine it. Beginning next week, we expect to invite approximately 250,000 users (mainly in the US) to try out the feature.

What to expect next
Once we’re satisfied with user testing, we will work on making the service available to all Firefox users. Once a release schedule has been established, it will be announced in a follow-up blog post.

In the meantime, check out and download the latest version of Firefox Quantum for the desktop in order to use the Firefox Monitor feature when it becomes available.

Download Firefox for Windows, Mac, Linux

The post Testing Firefox Monitor, a New Security Tool appeared first on Future Releases.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Scanning for breached accounts with k-Anonymity

Mozilla planet - ma, 25/06/2018 - 21:58

The new Firefox Monitor service will use anonymized range query API endpoints from Have I Been Pwned (HIBP). This new Firefox feature allows users to check for compromised online accounts while preserving their privacy.

An API request can reveal sensitive data about the requesting party, including subject identifiers like cookies, IP address, etc.

Anonymizing Account Identifiers

Operations like ‘search’ often need plaintext, or simply-hashed data. But, as Cloudflare has described in their own HIBP integration, searching with plain account data introduces privacy & security risks that allow an adversary, or even the service itself, to use the data to breach the searched account.

As an alternative, a user search client could download an entire set of data. Unfortunately this practice discloses all the service data to the client, which could abuse the data of all other users.

Anonymized Data Sharing

To mitigate these risks, Mozilla is working with Troy Hunt – creator and maintainer of HIBP – to use new hash range query API endpoints for breached account data in the Firefox Monitor project.

Hash range queries add k-Anonymity to the data that Mozilla exchanges with HIBP. Data with k-Anonymity protects individuals who are the subjects of the data from re-identification while preserving the utility of the data.

When a user submits their email address to Firefox Monitor, it hashes the plaintext value and sends the first 6 characters to the HIBP API. For example, the value “test@example.com” hashes to 567159d622ffbb50b11b0efd307be358624a26ee. We send this hash prefix to the API endpoint:

GET https://haveibeenpwned.com/api/breachedaccount/range/567159

The API responds with many suffixes and the list of breaches that include the full value:

[
  {
    "HashSuffix": "D622FFBB50B11B0EFD307BE358624A26EE",
    "Websites": [ "LinkedIn" ]
  },
  {
    "HashSuffix": "0000000000000000000000000000000000",
    "Websites": [ "Dropbox" ]
  },
  {
    "HashSuffix": "1111111111111111111111111111111111",
    "Websites": [ "Adobe", "Plex" ]
  }
]

When Firefox Monitor receives this response, it loops through the objects to find which (if any) prefix and breached account HashSuffix equals the user-submitted hash value. The following pseudo code describes the algorithm in more detail:

if (fullUserHash === userHashPrefix + breachedAccount.HashSuffix)

Using the running example from above, for the first HashSuffix, the expression evaluates to:

if (‘567159D622FFBB50B11B0EFD307BE358624A26EE’ === ‘567159’+‘D622FFBB50B11B0EFD307BE358624A26EE’)

Firefox Monitor discovers that “test@example.com” appears in the LinkedIn breach, but does not disclose plaintext or even hashes of sensitive user data. Further, HIBP does not disclose its entire set of hashes, which allows Firefox users to maintain their privacy, and protects breached users from further exposure.
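To make the whole round trip concrete, here is a minimal sketch using the Web Crypto API. This is not Firefox Monitor’s actual implementation; it assumes the range endpoint and response shape are exactly as in the example above, and that the hash in question is SHA-1 (the example hash is 40 hex characters long).

async function findBreaches(email) {
  // Hash the plaintext address locally.
  const bytes = new TextEncoder().encode(email);
  const digest = await crypto.subtle.digest('SHA-1', bytes);
  const fullHash = Array.from(new Uint8Array(digest))
    .map(b => b.toString(16).padStart(2, '0'))
    .join('')
    .toUpperCase();

  // Only the first 6 characters ever leave the client.
  const prefix = fullHash.slice(0, 6);
  const suffix = fullHash.slice(6);

  const response = await fetch(
    'https://haveibeenpwned.com/api/breachedaccount/range/' + prefix);
  const results = await response.json();

  // Compare suffixes locally; the service never learns which (if any) matched.
  const match = results.find(entry => entry.HashSuffix === suffix);
  return match ? match.Websites : [];
}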

Brute Force Attacks

Hashed data is still vulnerable to brute-force attacks. An adversary could still loop through a dictionary of email addresses to find the plaintext of all the range query results. To reduce this attack surface, Firefox Monitor does not store the range queries nor any results in its database. Instead, it caches a user’s results in an encrypted client session. We also monitor our scan endpoint to prevent abuse by an adversary attempting a brute force breached-account enumeration attack against our service.

Helping Subjects of Data Breaches

HIBP contains billions of records of email addresses. Troy has done an outstanding job to raise awareness and educate users about breaches globally. Breached sites embrace HIBP, even self-submitting their breached data. HIBP is there to help victims of data breaches after things go wrong, and Firefox Monitor is extending that help to more people.

The post Scanning for breached accounts with k-Anonymity appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Retained Display Lists for improved page performance

Mozilla planet - ma, 25/06/2018 - 16:30

Continuing Firefox Quantum’s investment in a high-performance engine, the Firefox 61 release will boost responsiveness of modern interfaces with an optimization that we call Retained Display Lists. Similar to Quantum’s Stylo and WebRender features, developers don’t need to change anything on their sites to reap the benefits of these improvements.

I wrote about this feature on the Mozilla Graphics Team blog back in January, when it was first implemented in Nightly. Now it’s ready to ship with Firefox 61. If you’re already familiar with retained display lists and how this feature optimizes painting performance, you might want to skip ahead to read about the results of our efforts and the future work we’re planning.

Working with the display list

Display list building is the process in which we collect the set of high-level items to display on screen (borders, backgrounds, text and much more), and then sort the list according to CSS painting rules into the correct back-to-front order. It’s at this point that we figure out which parts of the page are already visible on screen.

Currently, whenever we want to update what’s on the screen, we build a full new display list from scratch and then we use it to paint everything on the screen. This is great for simplicity: we don’t have to worry about figuring out which bits changed or went away. Unfortunately, the process can take a really long time. This has always been a performance problem, but as websites have become more complex and more users have access to higher resolution monitors, the problem has been magnified.

The solution is to retain the display list between paints—we build a new display list only for the parts of the page that changed since we last painted and then merge the new list into the old to get an updated list. This adds a lot more complexity, since we need to figure out which items to remove from the old list, and where to insert new items. The upside is that, in many cases, the new list can be significantly smaller than the full list. This creates the opportunity to improve perceived performance and save significant amounts of time.

Motivation

As part of the lead up to the release of Firefox Quantum, we added new telemetry to Firefox to help us measure painting performance, which therefore enabled us to make more informed decisions as to where to direct our efforts. One of these measurements defined a minimum threshold for a ‘slow’ paint (16ms), and recorded percentages of time spent in various paint stages when it occurred. We expected display list building to be significant, but were still surprised with the results: On average, display list building was consuming more than 40% of the total paint time, for work that was often almost identical to the previous frame. We’d long been planning to overhaul how we built and managed display lists, but with this new data we decided to make it a top priority for our Painting team.

Results

Once we had everything working, the next step was to see how much of an effect we’d made on performance! We had the feature enabled for the first half of the Beta 60 cycle, and compared the results with and without it enabled.

The first and most significant change: The median amount of time spent painting (the full pipeline, not just display list building) dropped by more than 33%!

Time spent in the paint pipeline for content

As you can see in the graph, the median time spent painting is around 3ms when retained display lists are enabled. Once the feature was disabled on April 18th, the paint time jumped up to 4.5ms. That frees up lots of extra time for the browser to spend on running JavaScript, doing layout, and responding to input events.

Another important improvement is in the frequency at which slow paints occurred. With retained display lists disabled, we miss the 16ms deadline around 7.8% of the time. With it enabled, this drops to 4.7%, an almost 40% reduction in frequency. We can see that we’re not just making fast paints faster, but we’re also having a significant impact on the slow cases.

Future work

As mentioned above, we aren’t always able to retain the display list. We’ve spent time working out which parts of the page have changed; when our analysis shows that most of the page has changed, then we still have to rebuild the full display list and time spent on the analysis is time wasted. Work is ongoing to try to detect this as early as possible, but it’s unlikely that we’ll be able to entirely prevent it. We’re also actively working to minimize how long the preparation work takes, so that we can make the most of opportunities for a partial update.

Retaining the display list also doesn’t help for the first time we paint a webpage when it loads. The first paint always has to build the full list from scratch, so in the future we’ll be looking at ways to make that faster across the board.

Thanks to everyone who has helped make this possible, including: Miko Mynttinen, Jet Villegas, Timothy Nikkel, Markus Stange, David Anderson, Ethan Lin, and Jonathan Watt.

Categorieën: Mozilla-nl planet

Shing Lyu: How to Unit Test WebExtensions

Mozilla planet - zo, 24/06/2018 - 21:44

We all know that unit-testing is a good software engineering practice, but sometimes the hassle of setting up the testing environment will keep us from doing it in the first place. After Firefox 57, WebExtension has become the new standard for writing add-ons for Firefox. How do you set up everything to start testing your WebExtension-based add-ons?

In the earlier format of Firefox add-ons, namely the Add-on SDK (a.k.a. Jetpack), there was a built-in command for unit tests (jpm test). But WebExtension, as far as I know, doesn’t have such a thing built in. Luckily all the technology used in WebExtension is still standard web technology, so we can use off-the-shelf JavaScript unit-testing frameworks.

I want to keep my tests as simple as possible, so I made some assumptions:

  1. I don’t test WebExtension API calls. I keep a thin wrapper layer around WebExtension API calls, and I don’t put too much logic into them. So hopefully the risk is low enough to not test. Anything more complex, like business logic or custom data structures and functions, is tested.
  2. I don’t like to use non-standard module systems. As far as I know WebExtension doesn’t support ES6 modules yet. So I follow the good old way of including all the JavaScript I need in the page (or as a background page).
  3. I don’t use Node.js libraries in add-ons, period.
Mocha and expect.js

We will be using the Mocha test framework and the expect.js assertion library, but you can use any test framework that supports running in browsers. We’ll be using the browser version of Mocha. You need to create an HTML file like this:

<html>
  <head>
    <meta charset="utf-8">
    <title>Unit Tests (by Mocha)</title>
    <link href="https://cdn.rawgit.com/mochajs/mocha/2.2.5/mocha.css" rel="stylesheet" />
  </head>
  <body>
    <div id="mocha"></div>
    <script src="https://cdn.rawgit.com/Automattic/expect.js/0.3.1/index.js"></script>
    <script src="https://cdn.rawgit.com/mochajs/mocha/2.2.5/mocha.js"></script>
    <script>mocha.setup('bdd')</script>
    <script src="calculator.js"></script>
    <script src="test.calculator.js"></script>
    <script>
      mocha.checkLeaks();
      mocha.run();
    </script>
  </body>
</html>

In the file you can see that we import the Mocha and expect.js libraries from a CDN, so we don’t need to install anything locally. We’ll be testing an imaginary calculator library used in our extension. The test cases are written in the test.calculator.js file. The classes and functions under test are placed in the calculator.js file. We load both the module under test and the test cases in this file as well.

The way to run it is to simply open this file in Firefox. If everything goes well you should see the following screen:

empty test

I usually put the main logic and code that interacts with extension APIs in a file named background.js. Any other business and utility functions go into separate JS files, which are tested using the above unit testing framework. They are all loaded together with background.js as background scripts. To do so, you need to add all of them to manifest.json like so:

{
  "name": "Calculator Add-on",
  ...
  "background": {
    "scripts": [
      "background.js",
      "calculator.js"
    ]
  },
  ...
}

Now we can write more tests like so in test.calculator.js:

describe('My calculator', function() {
  it('can add 1 and 1 and get 2', function() {
    var result = my_add(1, 1); // my_add is defined in calculator.js
    expect(result).to.eql(2);
  });
});

Run the page again, and you should see it failing, because you haven’t defined my_add yet.

my_add not defined error

Now, write your my_add in calculator.js:

function my_add(x, y) {
  return x + y;
}

Run the page once again, and your test now passes.

my_add passed

Testing asynchronous code

Many WebExtension APIs return promises, so you can write functions that receive and return promises and chain them. The problem is that if you have an error in the Promise chain, the error will be consumed by the promise, so the test framework will not catch it, resulting in an always-passing test. Let’s say you want to write a function that counts how many times you’ve visited Facebook; you can use the browser.history.getVisits() API, which returns a promise. So first we write a test for the count_visits() function we are about to write.

describe('Facebook counter', function() {
  it('can count', function() {
    var mock_getVisits = new Promise(function(resolve, reject) {
      // Simulate the getVisits API that returns 3 results
      resolve([new Object(), new Object(), new Object()]);
    });
    mock_getVisits
      .then(count_visits)
      .then(function(count) {
        expect(count).to.eql(3);
      });
  });
});

Then we implement the count_visits() function, but we made a typo by writing length as legnht:

function count_visits(history_items) {
  return history_items.legnht;
}

In this case, the function will always return undefined, because there is no such thing as legnht for arrays. But if you run the test, you’ll find the test still passing. If you open the developer tools, you’ll see the actual error, but it seems the test framework didn’t catch it.

async code not caught

The latest Mocha library already has built-in promise support; you only need to make sure you return the promise so the error can be captured.

describe('Facebook counter', function() {
  it('can count', function() {
    var mock_getVisits = function() {
      return new Promise(function(resolve, reject) {
        // Simulate the getVisits API that returns 3 results
        resolve([new Object(), new Object(), new Object()]);
      });
    };
    // vvvvvv Notice the "return" here
    return mock_getVisits()
      .then(count_visits)
      .then(function(count) {
        expect(count).to.eql(3);
      });
  });
});

Now the test fails as expected:

async code caught

Another way is to use async/await to make your async function look synchronous. Mocha can easily handle that just like usual synchronous functions.

describe('Facebook counter', function() {
  // Notice the async here              vvvvv
  it('can count using async/await', async function() {
    var mock_getVisits = function() {
      return new Promise(function(resolve, reject) {
        // Simulate the getVisits API that returns 3 results
        resolve([new Object(), new Object(), new Object()]);
      });
    };
    // and await here    vvvvv
    var visits = await mock_getVisits();
    expect(count_visits(visits)).to.eql(3);
  });
});

Conclusion

Writing tests for WebExtensions is not as hard as you might think. Simply copy and paste the HTML above and you’re ready to go. You don’t need to install anything or set up a complex Node.js build pipeline. Start testing your WebExtension code now; it has saved me many hours of debugging time, and I believe it will help you as well.

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR8 available

Mozilla planet - za, 23/06/2018 - 06:30
TenFourFox Feature Parity Release 8 final is now available (downloads, hashes, release notes). There are no changes from the beta except for outstanding security patches. As usual, it will go live Monday night, assuming no changes.
Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: The Supreme Court Scores a Win for Privacy

Mozilla planet - za, 23/06/2018 - 01:46

Today, in a 5-to-4 vote, the Supreme Court held that the government must get a warrant to obtain records of where someone’s cell phone has been. This might seem obvious, but it wasn’t obvious to the four Supreme Court Justices who voted against today’s decision. And, it wasn’t obvious to the Sixth Circuit Court of Appeals, whose opinion claimed that people cannot expect the general location of their cell phones to remain private because they must know cell phone companies have this data.

That’s why we submitted an amicus brief last year to encourage the Supreme Court to recognize that it no longer makes sense to assume that people give up their expectation of privacy when third parties handle their data. Today’s Supreme Court’s decision in Carpenter v. United States sets the record straight and helps the Fourth Amendment keep up to date with the realities of our modern, internet-connected lives.

For the majority of Americans, a cell phone and internet access are necessities, not options. The Supreme Court recognized that “[c]ell phones . . . [are] indispensable to participation in modern society.” It also held that it makes no sense to pretend that people “voluntarily” give their location to cell phone companies when cell tower location information is increasingly vast, increasingly precise, and inherent to the way cell networks function. This is especially important “[b]ecause location information is continually logged for all of the 400 million devices in the United States–not just those belonging to persons who might happen to come under investigation.”

We hope Carpenter v. United States will have implications beyond just location data. The Supreme Court described its decision as “a narrow one,” refusing to extend its rule to other forms of data. Still, this decision sets the stage for other courts to recognize that people have privacy expectations in the vast amounts of data that third parties handle, and further, that the government needs to meet the higher standard of review required by a warrant in order to access this data.

At Mozilla, we understand that the Internet is an integral part of modern life, and that individuals’ online privacy should be fundamental, not optional. People rely on internet technology companies to facilitate the practical necessities of modern life, and Mozilla and many other technology companies actively work to preserve the privacy of user data.  We applaud the Supreme Court’s decision to respect these efforts and keep the Fourth Amendment working for today’s digital world[1].

[1] Bonus legal history:  There are a few seminal cases which have brought the Fourth Amendment into the digital age. In U.S. v. Jones (2012), the Court decided it was unconstitutional to track a person’s car with a GPS device without a warrant. Then, in Riley v. California (2014), the Court held police could not search the data on a cellphone during an arrest without a warrant. This ruling recognizes that the government needs a warrant to get the whereabouts of an individual’s location based on cell phones records.

By Michael Feldman, Product Counsel 

The post The Supreme Court Scores a Win for Privacy appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Data localization: bad for users, business, and security

Mozilla planet - vr, 22/06/2018 - 22:59

Mozilla is deeply concerned by news reports that India’s first data protection law may include data localization requirements. Recent leaks suggest that the Justice Srikrishna Committee, the group charged by the Government of India with developing the country’s first data protection law, is considering requiring companies subject to the law to store critical personal data within India’s borders. A data localization mandate would undermine user security, harm the growth and competitiveness of Indian industry, and potentially burden relations between India and other countries. We urge the Srikrishna Committee and the Government of India to exclude this in the forthcoming legislative proposal.

Security Risks
Locating data within a given jurisdiction does not in itself convey any security benefits; rather, the law should require data controllers to strictly protect the data that they’re entrusted with. One only has to look to the recurring breaches of the Aadhaar demographic data to understand that storing data locally does not, by itself, keep data protected (see here, here and here). Until India has a data protection law and demonstrates robust enforcement of that law, it’s difficult to see how storing user data in India would be the most secure option.

In Puttaswamy, the Supreme Court of India unequivocally stated that privacy is a fundamental right, and put forth a proportionality standard that has profound implications for government surveillance in India. We respectfully recommend that if Indian authorities are concerned about law enforcement access to data, then a legal framework for surveillance with appropriate protections for users is a necessary first step. This would provide the lawful basis for the government to access data necessary for legal proceedings. A data localization mandate is an insufficient instrument for ensuring data access for the legitimate purposes of law enforcement.

Economic and Political Harms
A data localization mandate may also harm the Indian economy. India is home to many inspiring companies that are seeking to move beyond India’s generous borders. Requiring these companies to store data locally may thwart this expansion, and may introduce a tax on Indian industry by requiring them to maintain the legal and technical regimes of multiple jurisdictions.

Most Indian companies handle critical personal data, so even data localization for just this data subset could harm Indian industry. Such a mandate would force companies to use potentially cost-inefficient data storage and prevent companies from using the most effective and efficient routing possible. Moreover, the Indian outsourcing industry is predicated on the idea of these firms being able to store and process data in India, and then transfer it to companies abroad. A data localization mandate could pose an existential risk to these companies.

At the same time, if India imposes data localization on foreign companies doing business in India, other countries may impose reciprocal data localization policies that force Indian companies to store user data within that country’s jurisdictional borders, leading to legal conflict and potential breakdown of trade.

Data Transfer, Not Data Localization
There are better alternatives to ensuring user data protection. Above all, obtaining an adequacy determination from the EU would both demonstrate commitment to a global high standard of data protection, and significantly benefit the Indian economy. Adequacy would allow Indian companies to more easily expand overseas, because they would already be compliant with the high standards of the GDPR. It would also open doors to foreign investment and expansion in the Indian market, as companies who are already GDPR-compliant could enter the Indian market with little to no additional compliance burden. Perhaps most significantly, this approach would make the joint EU-India market the largest in the world, thus creating opportunities for India to step into even greater economic global leadership.

If India does choose to enact data localization policies, we strongly urge it to also adopt provisions for transfer via Binding Corporate Rules (BCRs). This model has been successfully adopted by the EU, which allows for data transfer upon review and approval of a company’s data processing policies by the relevant Data Protection Authority (DPA). Through this process, user rights are protected, data is secured, and companies can still do business. However, adequacy offers substantial benefits over a BCR system. By giving all Indian companies the benefits of data transfer, rather than requiring each company to individually apply for approval from a DPA, Indian industry will likely be able to expand globally with fewer policy obstacles.

Necessity of a Strong Regulator
Whether considering user security or economic growth, data localization is a weak tool when compared to a strong data protection framework and regulator.

By creating strong incentives for companies to comply with data use, storage, and transfer regulations, a DPA that has enforcement power will get better data protection results than data localization, and won’t harm Indian users, industry, and innovation along the way. We remain hopeful that the Srikrishna Committee will craft a bill that centers on the user — this means strong privacy protections, strong obligations on public and private-sector data controllers, and a DPA that can enforce rules on behalf of all Indians.

The post Data localization: bad for users, business, and security appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: State of Mozilla Support: 2018 Mid-year Update – Part 1

Mozilla planet - vr, 22/06/2018 - 20:20
Hello, present and future Mozillians!

As you may have heard, Mozilla held one of its All Hands biannual meetings, this time in San Francisco. The support.mozilla.org Admin team was there as well, along with several members of the support community.

The All Hands meetings are meant to be gatherings summarizing the work done and the challenges ahead. San Francisco was no different from that model. The four days of the All Hands were full of things to experience and participate in. Aside from all the plenary and “big stage” sessions – most of which you should be able to find at Air Mozilla soon – we also took part in many smaller (formal and informal) meetings, workshops, and chats.

By the way, if you watch Denelle’s presentation, you may hear something about Mozillians being awesome through helping users ;-).

This is the first in a series of posts summarizing what we talked about regarding support.mozilla.org, together with many (maaaaaany) pages of source content we have been working on and receiving from our research partners over the last few months.

We will begin with a summary of quite a heap of metrics, as delivered to us by the analytics and research consultancy from Copenhagen – Analyse & Tal (Analysis & Numbers). You can find all the (105!) pages here, but you can also read the summary below, which captures the most important information.

The A&T team used descriptive statistics (to tell a story using numbers) and network analysis (emphasizing interconnectedness and correlations), taking information from the 11 years of data available in Kitsune’s databases and 1 year of Google Analytics data.

Almost all perspectives of the analysis brought to the spotlight the amount of work contributed and the dedication of numerous Mozillians over many years. It’s hard to overstate the importance of that for Mozilla’s mission and continuous presence and support for the millions of users of open source software who want an open web. We are all equally proud and humbled that we can share this voyage with you.

As you can imagine, analyzing a project as complex and stretched in time as Mozilla’s Support site is quite challenging and we could not have done it without cooperation with Open Innovation and our external partners.

Key Takeaways
  • In the 2010-2017 period, only 124 contributors were responsible for 63% of all contributions. Given that there are hundreds of thousands of registered accounts in the system, there is a lot of work to do for us to make contributions easier and more fun.
  • There are quite a few returning contributors who contribute steadily over several years.
  • There are several hundred contributors who are active only within a short timeframe, and even more very occasional helpers. In both cases, we need to make sure long-term contributing is appealing to them.
  • While our community has not shown to be worryingly fragile, we have to make sure we understand better how and why contributions happen and what can be done to ensure a steady future for Mozilla’s community-powered Support.
  • The Q&A support forums on the site are the most popular place for contributions, with the core and most engaged contributors present mostly there.
  • On the other hand, the Knowledge Base, even if it has fewer contributors, sees more long-term commitment from returning contributors.
  • Contributors through Twitter are a separate group, usually not engaged in other support channels and focusing on this external platform.
  • Firefox is the most active product across all channels, but Thunderbird sees a lot of action as well. Many regular contributors are active supporting both products.
  • Among other Firefox related products, Firefox for Android is the most established one.
  • The top 15 locales amount to 76 percent of the overall revisions in the Knowledge Base, with the vast majority of contributions coming from core contributors.
  • Based on network analysis, Russian, Spanish, Czech, and Japanese localization efforts are the most vulnerable to changes in sustainability.
What’s Next?

Most of the findings in the report support many anecdotal observations we have had, giving us a very powerful set of perspectives grounded in over 7 years’ worth of data. Based on the analysis, we are able to create future plans for our community that are more realistic and based on facts.

The A&T team provided us with a list of their recommendations:

  • Understanding the motivations for contributing and how highly dedicated contributors were motivated to start contributing should be a high priority for further strategic decisions.
  • Our metrics should be strategically expanded and used through interactive dashboards and real time measurements. The ongoing evolution of the support community could be better understood and observed thanks to dynamic definitions of more detailed contributor segments and localization, as well as community sustainability scores.
  • A better understanding of visitors and how they use the support pages (more detailed behaviour and opinions) would be helpful for understanding where to guide contributors to ensure both a better user experience and an enhanced level of satisfaction among contributors.

Taking our own interpretation of the data analysis and the A&T recommendations into account, over the next few weeks we will be outlining more plans for the second half of the year, focusing on areas like:

  • Contributor onboarding and motivation insights
  • A review of metrics and tools used to obtain them
  • Recruitment and learning experiments
  • Backup and contingency plans for emergency gaps in community coverage
  • Tailoring support options for new products

As always, thank you for your patience and ongoing support of Mozilla’s mission. Stay tuned for more post-All Hands mid-year summaries and updates coming your way soon – and discuss them in the Contributors or Discourse forum threads.

Categorieën: Mozilla-nl planet

Mozilla VR Blog: This Week in Mixed Reality: Issue 10

Mozilla planet - vr, 22/06/2018 - 17:39

Last week, the team was in San Francisco for an all-Mozilla company meeting.

This week the team is focusing on adding new features, making improvements and fixing bugs.

Browsers

We are all hands on deck building more components and adding new UI across Firefox Reality:

  • Improve keyboard visibility detection
  • Added special characters to the keyboard
  • Added some features & research some issues in the VRB renderer, required to properly implement focus mode

Here is a preview we showed off of the skybox support and some of the new UX/UI:

Social

We are continuing to provide a better experience across Hubs by Mozilla:

  • Added better flow for iOS webviews
  • Added support for VM development and fast entry flow for developers
  • Began work on image proxying for sharing 2d images
  • Continuing development on 2d/3d object spawning, space editor, and pen tool

Join our public WebVR Slack #social channel to participate in on the discussion!

Content ecosystem

Found a critical bug? File it in our public GitHub repo or let us know on the public WebVR Slack #unity channel and as always, join us in our discussion!

Stay tuned for new features and improvements across our three areas!

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Parliament adopts dangerous copyright proposal – but the battle continues

Mozilla planet - vr, 22/06/2018 - 12:18

On 20 June the European Parliament’s legal affairs committee (JURI) approved its report on the copyright directive, sending the controversial and dangerous copyright reform into its final stages of lawmaking.

 

Here is a statement from Raegan MacDonald, Mozilla’s Head of EU Public Policy:

“This is a sad day for the Internet in Europe. Lawmakers in the European Parliament have just voted for a new law that would effectively impose a universal monitoring obligation on users posting content online. As bad as that is, the Parliament’s vote would also introduce a ‘link tax’ that will undermine access to knowledge and the sharing of information in Europe.

It is especially disappointing that just a few weeks after the entry into force of the GDPR – a law that made Europe a global regulatory standard bearer – Parliamentarians have approved a law that will fundamentally damage the Internet in Europe, with global ramifications. But it’s not over yet – the final text still needs to be signed off by the Parliament plenary on 4 July. We call on Parliamentarians, and all those who care for an internet that fosters creativity and competition in Europe, to overturn these regressive provisions in July.”

 

Article 11 – where press publishers can demand a license fee for snippets of text online – passed by a slim majority of 13 to 12. The provision mandating upload filters for copyright content, Article 13, was adopted 15 to 10.

Mozilla will continue to fight for copyright that suits the 21st century and fosters creativity and competition online. We encourage anyone who shares these concerns to reach out to members of the European Parliament – you can call them directly via changecopyright.org, or tweet and email them at saveyourinternet.eu.

The post Parliament adopts dangerous copyright proposal – but the battle continues appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Rep of the Month – May 2018

Mozilla planet - vr, 22/06/2018 - 10:30

Please join us in congratulating Prathamesh Chavan, our Rep of the Month for May 2018!

Prathamesh is from Pune, India and works as a Technical Support Engineer at Red Hat. From his very early days in the Mozilla community, Prathamesh used his excellent people skills to spread the community to different colleges and to evangelise many of the upcoming projects, products and Mozilla initiatives. Prathamesh is also a very resourceful person. Due to this, he did a great job organizing some great events in Pune and creating many new Mozilla Clubs across the city.


 

As a Mozilla Reps Council member, Prathamesh has done some great work and has shown great leadership skills. He is always proactive in sharing important updates with the bigger community as well as raising his hand at every new initiative.

Thanks Prathamesh, keep rocking the Open Web!

Please congratulate him by heading over to the Discourse topic.

Categorieën: Mozilla-nl planet

The Firefox Frontier: Open source isn’t just for software: Opensourcery recipe

Mozilla planet - vr, 22/06/2018 - 01:43

Firefox is one of the world’s most successful open source software projects. This means we make the code that runs Firefox available for anyone to modify and use so long … Read more

The post Open source isn’t just for software: Opensourcery recipe appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet
