Mozilla Nederland
The Dutch Mozilla community

Our Year in Review: How we’ve kept Firefox working for you in 2020

Mozilla Blog - Tue, 15/12/2020 - 14:59

This year began like any other year, with our best intentions and resolutions to carry out. Then by March, the world changed and everyone’s lives — personally and professionally — turned upside down. Despite that, we kept to our schedule to release a new Firefox every month and we were determined to keep Firefox working for you during challenging times.

We shifted our focus to work on features aimed at helping people adjust to the new way of life, and we made Firefox faster so that you could get more things done. It’s all part of fulfilling our promise to build a better internet for people. So, as we eagerly look to the end of 2020, we look back at this unprecedented year and present you with our list of top features that made 2020 a little easier.

Keeping Calm and Carrying on

How do you cope with this new way of life spent online? Here are the Firefox features we added this year, aimed at bringing some zen into your life.

  • Picture-in-Picture: An employee favorite, we rolled out Picture-in-Picture to Mac and Linux, making it available on all platforms, where previously it was only available on Windows. We continued to improve Picture-in-Picture throughout the year — adding features like keyboard controls for fast forward and rewind — so that you could multitask like never before. We, too, were seeking calming videos; eyeing election results; and entertaining the little ones while trying to juggle home and work demands.
  • No more annoying notifications: We all started browsing more as the web became our window into the outside world, so we replaced the annoying notification request pop-ups that interrupted your browsing with a speech bubble in the address bar that appears when you interact with the site.
  • Pocket article recommendations: We brought our delightful Pocket article recommendations to Firefox users beyond the US, to Austria, Belgium, Germany, India, Ireland, Switzerland, and the United Kingdom. For anyone wanting to take a pause on doom scrolling, simply open up a new tab in Firefox and check out the positivity in the Pocket article recommendations.
  • Ease eye strain with larger screen view: We all have been staring at the screen for longer than we ever thought we should. So, we’ve improved the global level zoom setting so you can set it and forget it. Then, every website can appear larger, should you wish, to ease eye strain. We also made improvements to our high contrast mode which made text more readable for users with low vision.

 

Getting you faster to the places you want to visit

We also looked under the hood of Firefox to improve the speed and search experiences so you could get things done no matter what 2020 handed you.

  • Speed: We made Firefox faster than ever with improved performance on both page loads and start-up time. For those who want the technical details:
      • Websites that use flexbox-based layouts load 20% faster than before;
      • Restoring a session is 17% quicker, meaning you can more quickly pick up where you left off;
      • For Windows users, opening new windows got quicker by 10%;
      • Our JavaScript engine got a revamp, improving page load performance by up to 15% and page responsiveness by up to 12%, and reducing memory usage by up to 8%, all while making it more secure.
  • Search made faster: We were searching constantly this year — what is coronavirus; do masks work; and what is the electoral college? The team spent countless hours improving the search experience in Firefox so that you could search smarter and faster: you could type less and find more with the revamped address bar, where our search suggestions got a redesign. An updated shortcut suggests search engines, tabs, and bookmarks, getting you where you want to go right from the address bar.
  • Additional under-the-hood improvements: We made noticeable improvements to Firefox’s printing experience, including fillable PDF forms. We also improved your shopping experience with updates to our password management and credit card autofill.

Our promise to build a better internet

This has been an unprecedented year for the world, and as you became more connected online, we stayed focused on pushing for more privacy. It’s just one less thing for you to worry about.

  • HTTPS-Only mode: If you visit a website that asks for your email address or payment info, look for the lock in the address bar, which indicates your connection to it is secure. A site that doesn’t have the lock signals that it is insecure. It could be as simple as an expired Secure Sockets Layer (SSL) certificate. No matter: Firefox’s new HTTPS-Only mode will attempt to establish fully secure connections to every website you visit, and will also ask for your permission before connecting to a website if it doesn’t support secure connections.
  • Added privacy protections: We kicked off the year by expanding our Enhanced Tracking Protection to prevent known fingerprinters from profiling our users based on their hardware, and introduced protection against redirect tracking — always on while you are browsing more than ever.
  • Facebook Container updates: Given the circumstances of 2020, it makes sense that people turned to Facebook to stay connected to friends and family when we couldn’t visit in person. Facebook Container — which helps prevent Facebook from tracking you around the web — added improvements that allowed you to create exceptions to how and when it blocks Facebook logins, likes, and comments, giving you more control over your relationship with Facebook.

Even if you didn’t have Firefox to help with some of life’s challenges online over the past year, don’t start 2021 without it. Download the latest version of Firefox and try these privacy-protecting, easy-to-use features for yourself.

The post Our Year in Review: How we’ve kept Firefox working for you in 2020 appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Mozilla’s Vision for Trustworthy AI

Mozilla Blog - Tue, 15/12/2020 - 12:25
Mozilla is publishing its white paper, “Creating Trustworthy AI.”

A little over two years ago, Mozilla started an ambitious project: deciding where we should focus our efforts to grow the movement of people committed to building a healthier digital world. We landed on the idea of trustworthy AI.

When Mozilla started in 1998, the growth of the web was defining where computing was going. So Mozilla focused on web standards and building a browser. Today, computing — and the digital society that we all live in — is defined by vast troves of data, sophisticated algorithms and omnipresent sensors and devices. This is the era of AI. Asking questions today such as ‘Does the way this technology works promote human agency?’ or ‘Am I in control of what happens with my data?’ is like asking ‘How do we keep the web open and free?’ 20 years ago.

This current era of computing — and the way it shapes the consumer internet technology that more than 4 billion of us use every day — has high stakes. AI increasingly powers smartphones, social networks, online stores, cars, home assistants and almost every other type of electronic device. Given the power and pervasiveness of these technologies, the question of whether AI helps and empowers or exploits and excludes will have a huge impact on the direction that our societies head over the coming decades.

It would be very easy for us to head in the wrong direction. As we have rushed to build data collection and automation into nearly everything, we have already seen the potential of AI to reinforce long-standing biases or to point us toward dangerous content. And there’s little transparency or accountability when an AI system spreads misinformation or misidentifies a face. Also, as people, we rarely have agency over what happens with our data or the automated decisions that it drives. If these trends continue, we’re likely to end up in a dystopian AI-driven world that deepens the gap between those with vast power and those without.

On the other hand, a significant number of people are calling attention to these dangerous trends — and saying ‘there is another way to do this!’ Much like the early days of open source, a growing movement of technologists, researchers, policy makers, lawyers and activists are working on ways to bend the future of computing towards agency and empowerment. They are developing software to detect AI bias. They are writing new data protection laws. They are inventing legal tools to put people in control of their own data. They are starting orgs that advocate for ethical and just AI. If these people — and Mozilla counts itself amongst them — are successful, we have the potential to create a world where AI broadly helps rather than harms humanity.

It was inspiring conversations with people like these that led Mozilla to focus the $20M+ that it spends each year on movement building on the topic of trustworthy AI. Over the course of 2020, we’ve been writing a paper titled “Creating Trustworthy AI” to document the challenges and ideas for action that have come up in these conversations. Today, we release the final version of this paper.

This ‘paper’ isn’t a traditional piece of research. It’s more like an action plan, laying out steps that Mozilla and other like-minded people could take to make trustworthy AI a reality. It is possible to make this kind of shift, just as we have been able to make the shift to clean water and safer automobiles in response to risks to people and society. The paper suggests the code we need to write, the projects we need to fund, the issues we need to champion, and the laws we need to pass. It’s a toolkit for technologists, for philanthropists, for activists, for lawmakers.

At the heart of the paper are eight big challenges the world is facing when it comes to the use of AI in the consumer internet technologies we all use every day. These are things like: bias; privacy; transparency; security; and the centralization of AI power in the hands of a few big tech companies. The paper also outlines four opportunities to meet these challenges. These opportunities centre around the idea that there are developers, investors, policy makers and a broad public that want to make sure AI works differently — and to our benefit. Together, we have a chance to write code, process data, create laws and choose technologies that send us in a good direction.

Like any major Mozilla project, this paper was built using an open source approach. The draft we published in May came from 18 months of conversations, research and experimentation. We invited people to comment on that draft, and they did. People and organizations from around the world weighed in: from digital rights groups in Poland to civil rights activists in the U.S., from machine learning experts in North America to policy makers at the highest levels in Europe, from activists, writers and creators to Ivy League professors. We have revised the paper based on this input to make it that much stronger. The feedback helped us hone our definitions of “AI” and “consumer technology.” It pushed us to make racial justice a more prominent lens throughout this work. And it led us to incorporate more geographic, racial, and gender diversity viewpoints in the paper.

In the months and years ahead, this document will serve as a blueprint for Mozilla Foundation’s movement building work, with a focus on research, advocacy and grantmaking. We’re already starting to manifest this work: Mozilla’s advocacy around YouTube recommendations has illuminated how problematic AI curation can be. The Data Futures Lab and European AI Fund that we are developing with partner foundations support projects and initiatives that reimagine how trustworthy AI is designed and built across multiple continents. And Mozilla Fellows and Awardees like Sylvie Delacroix, Deborah Raj, and Neema Iyer are studying how AI intersects with data governance, equality, and systemic bias. Past and present work like this also fed back into the white paper, helping us learn by doing.

We also hope that this work will open up new opportunities for the people who build the technology we use every day. For so long, building technology that valued people was synonymous with collecting little or no data about them. While privacy remains a core focus of Mozilla and others, we need to find ways to protect and empower users that also include the collection and use of data to give people experiences they want. As the paper outlines, there are more and more developers — including many of our colleagues in the Mozilla Corporation — who are carving new paths that head in this direction.

Thank you for reading — and I look forward to putting this into action together.

The post Mozilla’s Vision for Trustworthy AI appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Why getting voting right is hard, Part II: Hand-Counted Paper Ballots

Mozilla Blog - Mon, 14/12/2020 - 18:42

In Part I we looked at desirable properties for voting systems. In this post, I want to look at the details of a specific system: hand-counted paper ballots.

Sample Ballot

Hand-counted paper ballots are probably the simplest voting system in common use (though mostly outside the US). In practice, the process usually looks something like the following:

  1. Election officials pre-print paper ballots and distribute them to polling places. Each paper ballot has a list of contests and the choices for each contest, and a box or some other location where the voter can indicate their choice, as shown above.
  2. Voters arrive at the polling place, identify themselves to election workers, and are issued a ballot. They mark the section of the ballot corresponding to their choice. They cast their ballots by putting them into a ballot box, which can be as simple as a cardboard box with a hole in the top for the ballots.
  3. Once the polls close, the election workers collect all the ballots. If they are to be locally counted, then the process is as below; if they are to be centrally counted, they are transported back to election headquarters for counting.

The counting process varies between jurisdictions, but at a high level the process is simple. The vote counters go through each ballot one at a time and determine which choice it is for. Joseph Lorenzo Hall provides a good description of the procedure for California’s statutory 1% tally here:

In practice, the hand-counting method used by counties in California seems very similar. The typical tally team uses four people consisting of two talliers, one caller and one witness:

  • The caller speaks aloud the choice on the ballot for the race being tallied (e.g., “Yes…Yes…Yes…” or “Lincoln…Lincoln…Lincoln…”).
  • The witness observes each ballot to ensure that the spoken vote corresponded to what was on the ballot and also collates ballots in cross-stacks of ten ballots.
  • Each tallier records the tally by crossing out numbers on a tally sheet to keep track of the vote tally.

Talliers announce the tally at each multiple of ten (“10”, “20”, etc.) so that they can roll back the tally if the two talliers get out of sync.

Obviously other techniques are possible, but as long as people are able to observe, differences in technique are mostly about efficiency rather than accuracy or transparency. The key requirement here is that any observer can look at the ballots and see that they are being recorded as they are cast. Jurisdictions will usually have some mechanism for challenging the tally of a specific ballot.

Security and Verifiability

The major virtue of hand-counted paper ballots is that they are simple, with security and privacy properties that are easy for voters to understand and reason about, and for observers to verify for themselves.

It’s easiest to break the election into two phases:

  • Voting and collecting the ballots
  • Counting the collected ballots

If each of these is done correctly, then we can have high confidence that the election was correctly decided.

Voting

The security properties of the voting process mostly come down to ballot handling, namely that:

  • Only authorized voters get ballots, and each voter gets only one ballot. Note that it’s necessary to ensure this because otherwise it’s very hard to prevent multiple voting, where an authorized voter puts in two ballots.
  • Only the ballots of authorized voters make it into the ballot box.
  • All the ballots in the ballot box and only the ballots from the ballot box make it to election headquarters.

The first two of these properties are readily observed by observers — whether independent or partisan. The last property typically relies on technical controls. For instance, in Santa Clara County, ballots are taken from the ballot box and put into clear tamper-evident bags for transport to election central, which limits the ability of poll workers to replace the ballots. When put together, all three properties provide a high degree of confidence that the right ballots are available to be counted. This isn’t to say that there’s no opportunity for fraud via sleight-of-hand or voter impersonation (more on this later), but it’s largely one-at-a-time fraud, affecting a few ballots at a time, and is hard to perpetrate at scale.

Counting

The counting process is even easier to verify: it’s conducted in the open and so observers have their own chance to see each ballot and be confident that it has been counted correctly. Obviously, you need a lot of observers because you need at least one for each counting team, but given that the number of voters far exceeds the number of counting teams, it’s not that impractical for a campaign to come up with enough observers.

Probably the biggest source of problems with hand-counted paper ballots is disputes about the meaning of ambiguous ballots. Ideally voters would mark their ballots according to the instructions, but it’s quite common for voters to make stray marks, mark more than one box, fill in the boxes with dots instead of Xs, or even some more exotic variations, as shown in the examples below. In each case, it needs to be determined how to handle the ballot. It’s common to apply an “Intent of the voter” standard, but this still requires judgement. One extra difficulty here is that at the point where you are interpreting each ballot, you already know what it looks like, so naturally this can lead to a fair amount of partisan bickering about whether to accept each individual ballot, as each side tries to accept ballots that seem like they are for their preferred candidate and disqualify ballots that seem like they are for their opponent.

[Example ballot images: a double-marked ballot and the “lizard people” write-in]

A related issue is whether a given ballot is valid. This isn’t so much an issue with ballots cast at a polling place, but for vote-by-mail ballots there can be questions about signatures on the envelopes, the number of envelopes, etc. I’ll get to this later when I cover vote by mail in a later post.

Privacy/Secrecy of the Ballot

The level of privacy provided by paper ballots depends a fair bit on the precise details of how they are used and handled. In typical elections, voters will be given some level of privacy to fill out their ballot, so they don’t have to worry too much about that stage (though presumably in theory someone could set up cameras in the polling place). Aside from that, we primarily need to worry about two classes of attack:

  1. Tracking a given voter’s ballot from checkin to counting.
  2. Determining how a voter voted from the ballot itself.

Ideally — at least from the perspective of privacy — the ballots are all identical and the ballot box is big enough that you get some level of shuffling (how much is an open question); in that case it’s quite hard to correlate the ballot a voter was given with when it’s counted, though you might be able to narrow it down some by looking at which polling place/box the ballot came in and where it was in the box. In some jurisdictions, ballots have serial numbers, which might make this kind of tracking easier, though only if records of which voter gets which ballot are kept and available. Apparently the UK has this kind of system but tightly controls the records.

It’s generally not possible to tell from a ballot itself which voter it belongs to unless the voter cooperates by making the ballot distinctive in some way. This might happen because the voter is being paid (or threatened) to cast their vote a certain way. While some election jurisdictions prohibit distinguishing marks, as a practical matter it’s not really possible to prevent voters from making such marks if they really want to. This is especially true when the ballots need not be machine readable and so the voter has the ability to fill in the box somewhat distinctively (there are a lot of ways to write an X!). In elections with a lot of contests, as in many places in the US, it is also possible to use what’s called a “pattern voting” attack in which you vote one contest the way you are told and then vote the downballot contests in a way that uniquely identifies you. This sort of attack is very hard to prevent, but actually checking that people voted the way they were told is of course a lot of work. There are also more exotic attacks such as fingerprinting paper stock, but none of these are easy to mount in bulk.

Accessibility

One big drawback of hand-marked ballots is that they are not very accessible, either to people with disabilities or to non-native speakers. For obvious reasons, if you’re blind or have limited dexterity it can be hard to fill in the boxes (this is even harder with optical scan type ballots). Many jurisdictions that use paper ballots will also have some accommodation for people with disabilities. Paper ballots work fine in most languages, but each language must be separately translated and then printed, and then you need to have extras of each ballot type in case more people come than you expect, so at the end of the day the logistics can get quite complicated. By contrast, electronic voting machines (which I’ll get to later) scale much better to multiple languages.

Scalability

Although hand-counting does a good job of producing accurate and verifiable counts, it does not scale very well.[1] Estimates of how expensive it is to count ballots vary quite a bit, but a 2010 Pew study of hand recounts in Washington and Minnesota (the 2004 Washington gubernatorial and 2008 Minnesota US Senate races) put the cost of recounting a single contest at between $0.15 and $0.60 per ballot. Of course, as noted above some of the cost here is that of disputing ambiguous ballots. If the race is not particularly competitive, then these ballots can be set aside and only need to be carefully adjudicated if they have a chance of changing the result.

Importantly, the cost of hand-counting goes up with the number of ballots times the number of contests on the ballot. In the United States it’s not uncommon to have 20 or more contests per election. For example, here is a sample ballot from the 2020 general election in Santa Clara County, CA. This ballot has the following contests:

Type                          Count
President                     1
US House of Representatives   1
State Assembly                1
Superior Court Judge          1
County Board of Education     1
County Board of Supervisors   1
Community College District    1
City Mayor                    1
City Council (vote for two)   1
State Propositions            12
Local ballot measures         6
Total                         32

In an election like this, the cost to count could be several dollars per ballot. Of course, California has an exceptionally large number of contests, but in general hand-counting represents a significant cost.
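
As a rough back-of-the-envelope check (assuming, purely for illustration, that the Pew per-ballot figure for recounting a single contest applies to each of the 32 contests above):

32 contests × ($0.15 to $0.60 per contest per ballot) ≈ $4.80 to $19.20 per ballot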

Aside from the financial impact of hand counting ballots, it just takes a long time. Pew notes that both the Washington and Minnesota recounts took around seven months to resolve, though again this is partly due to the small margin of victory. As another example, California law requires a “1% post-election manual tally” in which 1% of precincts are randomly selected for hand-counting. Even with such a restricted count, the tally can take weeks in a large county such as Los Angeles, suggesting that hand counting all the ballots would be prohibitive in this setting. This isn’t to say that hand counting can never work, obviously, merely that it’s not a good match for the US electoral system, which tends to have a lot more contests than in other countries.

Up Next: Optical Scanning

The bottom line here is that while hand counting works well in many jurisdictions it’s not a great fit for a lot of elections in the United States. So if we can’t count ballots by hand, then what can we do? The good news is that there are ballot counting mechanisms which can provide similar assurance and privacy properties to hand counting but do so much more efficiently, namely optical scan ballots. I’ll be covering that in my next post.

  1. By contrast, the marking process is very scalable: if you have a long line, you can put out more tables, pens, privacy screens, etc. 

The post Why getting voting right is hard, Part II: Hand-Counted Paper Ballots appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

The Mozilla Blog: Why getting voting right is hard, Part I: Introduction and Requirements

Mozilla planet - Tue, 08/12/2020 - 19:24

Every two years around this time, the US has an election and the rest of the world marvels and asks itself one question: What the heck is going on with US elections? I’m not talking about US politics here but about the voting systems (machines, paper, etc.) that people use to vote, which are bafflingly complex. While it’s true that American voting is a chaotic patchwork of different systems scattered across jurisdictions, running efficient secure elections is a genuinely hard problem. This is often surprising to people who are used to other systems that demand precise accounting such as banking/ATMs or large scale databases, but the truth is that voting is fundamentally different and much harder.

In this series I’ll be going through a variety of different voting systems so you can see how this works in practice. This post provides a brief overview of the basic requirements for voting systems. We’ll go into more detail about the practical impact of these requirements as we examine each system.

Requirements

To understand voting systems design, we first need to understand the requirements to which they are designed. These vary somewhat, but generally look something like the below.

Efficient Correct Tabulation

This requirement is basically trivial: collect the ballots and tally them up. The winner is the one with the most votes.[1] You also need to do it at scale and within a reasonable period of time, otherwise there’s not much point.

Verifiable Results

It’s not enough for the election just to produce the right result, it must also do so in a verifiable fashion. As voting researcher Dan Wallach is fond of saying, the purpose of elections is to convince the loser that they actually lost, and that means more than just trusting the election officials to count the votes correctly. Ideally, everyone in the world would be able to check for themselves that the votes had been correctly tabulated (this is often called “public verifiability”), but in real-world systems it usually means that some set of election observers can personally observe parts of the process and hopefully be persuaded it was conducted correctly.

Secrecy of the Ballot

The next major requirement is what’s called “secrecy of the ballot”, i.e., ensuring that others can’t tell how you voted. Without ballot secrecy, people could be pressured to vote certain ways or face negative consequences for their votes. Ballot secrecy actually has two components: (1) other people — including election officials — can’t tell how you voted and (2) you can’t prove to other people how you voted. The first component is needed to prevent wholesale retaliation and/or rewards and the second is needed to prevent retail vote buying. The actual level of ballot secrecy provided by systems varies. For instance, the UK system technically allows election officials to match ballots to the voter, but prevents it with procedural controls, and vote-by-mail systems generally don’t do a great job of preventing you from proving how you voted; but in general most voting systems attempt to provide some level of ballot secrecy.[2]

Accessibility

Finally, we want voting systems to be accessible, both in the specific sense that we want people with disabilities to be able to vote and in the more general sense that we want it to be generally easy for people to vote. Because the voting-eligible population is so large and people’s situations are so varied, this often means that systems have to make accommodations, for instance for overseas or military voters or for people who speak different languages.

Limited Trust

As you’ve probably noticed, one common theme in these requirements is the desire to limit the amount of trust you place in any one entity or person. For instance, when I worked the polls in Santa Clara county elections, we would collect all the paper ballots and put them in tamper-evident bags before taking them back to election central for processing. This makes it harder for the person transporting the ballots to examine the ballots or substitute their own. For those who aren’t used to the way security people think, this often feels like saying that election officials aren’t trustworthy, but really what it’s saying is that elections are very high stakes events and critical systems like this should be designed with as few failure points as possible, and that includes preventing both outsider and insider threats, protecting even against authorized election workers themselves.

An Overconstrained Problem

Individually each of these requirements is fairly easy to meet, but the combination of them turns out to be extremely hard. For example, if you publish everyone’s ballots then it’s (relatively) easy to ensure that the ballots were counted correctly, but you’ve just completely given up secrecy of the ballot.[3] Conversely, if you just trust election officials to count all the votes, then it’s much easier to provide secrecy from everyone else. But these properties are both important, and hard to provide simultaneously. This tension is at the heart of why voting is so much more difficult than other superficially similar systems like banking. After all, your transactions aren’t secret from the bank. In general, what we find is that voting systems may not completely meet all the requirements but rather compromise on trying to do a good job on most/all of them.

Up Next: Hand-Counted Paper Ballots

In the next post, I’ll be covering what is probably the simplest common voting system: hand-counted paper ballots. This system actually isn’t that common in the US for reasons I’ll go into, but it’s widely used outside the US and provides a good introduction to some of the problems with running a real election.

  1. For the purpose of this series, we’ll mostly be assuming first past the post systems, which are the main systems in use in the US.
  2. Note that I’m talking here about systems designed for use by ordinary citizens. Legislative voting, judicial voting, etc. are qualitatively different: they usually have a much smaller number of voters and don’t try to preserve the secrecy of the ballot, so the problem is much simpler. 
  3. Thanks to Hovav Shacham for this example. 

The post Why getting voting right is hard, Part I: Introduction and Requirements appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: An update on MDN Web Docs’ localization strategy

Mozilla planet - Tue, 08/12/2020 - 17:20

In our previous post — MDN Web Docs evolves! Lowdown on the upcoming new platform — we talked about many aspects of the new MDN Web Docs platform that we’re launching on December 14th. In this post, we’ll look at one aspect in more detail — how we are handling localization going forward. We’ll talk about how our thinking has changed since our previous post, and detail our updated course of action.

Updated course of action

Based on thoughtful feedback from the community, we did some additional investigation and determined a stronger, clearer path forward.

First of all, we want to keep a clear focus on work leading up to the launch of our new platform, and making sure the overall system works smoothly. This means that upon launch, we still plan to display translations in all existing locales, but they will all initially be frozen — read-only, not editable.

We were considering automated translations as the main way forward. One key issue was that automated translations into European languages are seen as an acceptable solution, but automated translations into CJK languages are far from ideal — they have a very different structure to English and European languages, plus many Europeans are able to read English well enough to fall back on English documentation when required, whereas some CJK communities do not commonly read English so do not have that luxury.

Many folks we talked to said that automated translations wouldn’t be acceptable in their languages. Not only would they be substandard, but a lot of MDN Web Docs communities center around translating documents. If manual translations went away, those vibrant and highly involved communities would probably go away — something we certainly want to avoid!

We are therefore focusing on limited manual translations as our main way forward instead, looking to unfreeze a number of key locales as soon as possible after the new platform launch.

Limited manual translations

Rigorous testing has been done, and it looks like building translated content as part of the main build process is doable. We are separating locales into two tiers in order to determine which will be unfrozen and which will remain locked.

  • Tier 1 locales will be unfrozen and manually editable via pull requests. These locales are required to have at least one representative who will act as a community lead. The community members will be responsible for monitoring the localized pages, updating translations of key content once the English versions are changed, reviewing edits, etc. The community lead will additionally be in charge of making decisions related to that locale, and acting as a point of contact between the community and the MDN staff team.
  • Tier 2 locales will be frozen, and not accept pull requests, because they have no community to maintain them.

The Tier 1 locales we are starting with unfreezing are:

  • Simplified Chinese (zh-CN)
  • Traditional Chinese (zh-TW)
  • French (fr)
  • Japanese (ja)

If you wish for a Tier 2 locale to be unfrozen, then you need to come to us with a proposal, including evidence of an active team willing to be responsible for the work associated with that locale. If this is the case, then we can promote the locale to Tier 1, and you can start work.

We will monitor the activity on the Tier 1 locales. If a Tier 1 locale is not being maintained by its community, we shall demote it to Tier 2 after a certain period of time, and it will become frozen again.

We are looking at this new system as a reasonable compromise — providing a path for you, the community, to continue work on MDN translations provided the interest is there, while also ensuring that locale maintenance is viable and content won’t get any further out of date. With most locales unmaintained, changes weren’t being reviewed effectively, and readers of those locales were often confused about whether to use their preferred locale or English, with their experience suffering as a result.

Review process

The review process will be quite simple.

  • The content for each Tier 1 locale will be kept in its own separate repo.
  • When a PR is made against that repo, the localization community will be pinged for a review.
  • When the content has been reviewed, an MDN admin will be pinged to merge the change. We should be able to set up the system so that this happens automatically.
  • There will also be some user-submitted content bugs filed at https://github.com/mdn/sprints/issues, as well as on the issue trackers for each locale repo. When triaged, the “sprints” issues will be assigned to the relevant localization team to fix, but the relevant localization team is responsible for triaging and resolving issues filed on their own repo.

Machine translations alongside manual translations

We previously talked about the potential involvement of machine translations to enhance the new localization process. We still have this in mind, but we are looking to keep the initial system simple, in order to make it achievable. The next step in Q1 2021 will be to start looking into how we could most effectively make use of machine translations. We’ll give you another update in mid-Q1, once we’ve made more progress.

The post An update on MDN Web Docs’ localization strategy appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Mozilla Attack & Defense: Guest Blog Post: Good First Steps to Find Security Bugs in Fenix (Part 1)

Mozilla planet - Tue, 08/12/2020 - 16:17

This blog post is one of several guest blog posts, where we invite participants of our bug bounty program to write about bugs they’ve reported to us.

Fenix is a newly designed Firefox for Android that officially launched in August 2020. In Fenix, many components required to run as an Android app have been rebuilt from scratch, and various new features are being implemented as well. While they are re-implementing features, security bugs fixed in the past may be introduced again. If you care about the open web and you want to participate in the Client Bug Bounty Program of Mozilla, Fenix is a good target to start with.

Let’s take a look at two bugs I found in the firefox: scheme that is supported by Fenix.

Bugs Came Again with Deep Links

Fenix provides an interesting custom scheme URL firefox://open?url= that can open any specified URL in a new tab. On Android, a deep link is a link that takes you directly to a specific part of an app; and the firefox:open deep link is not intended to be called from web content, but its access was not restricted.

Web content should not be able to link directly to a file:// URL (although a user can type or copy/paste such a link into the address bar). While Firefox on desktop has long implemented this fix, Fenix did not – I submitted Bug 1656747, which exploited this behavior and navigated to a local file from web content with the following hyperlink:

<a href="firefox://open?url=file:///sdcard/Download"> Go </a>

But actually, the same bug affected the older Firefox for Android (unofficially referred to as Fennec) and had been filed three years ago as Bug 1380950.

Likewise, security researcher Jun Kokatsu reported Bug 1447853, which was an <iframe> sandbox bypass in Firefox for iOS. He also abused the same type of deep link URL for bypassing the popup block brought by <iframe> sandbox.

<iframe src="data:text/html,<a href=firefox://open-url?url=https://example.com> Go </a>" sandbox></iframe>

I found this attack scenario in a test file of Firefox for iOS and I re-tested it in Fenix. I submitted Bug 1656746 which is the same issue as what he found.

Conclusion

As you can see, retesting past attack scenarios can be a good starting point. We can find past vulnerabilities from the Mozilla Foundation Security Advisories. By examining histories accumulated over a decade, we can see what are considered security bugs and how they were resolved. These resources will be useful for retesting past bugs as well as finding attack vectors for newly introduced features.

Have a good bug hunt!

Categories: Mozilla-nl planet

The Mozilla Blog: State of Mozilla 2019-2020: Annual Impact Report

Mozilla planet - Mon, 07/12/2020 - 17:01

2020 has been a year like few others with the internet’s value and necessity front and center. The State of Mozilla for 2019-2020 makes clear that Mozilla’s mission and role in the world is more important than ever. Dive into the full report by clicking on the image below.

2019–2020 State of Mozilla

About the State of Mozilla

Mozilla releases the State of Mozilla annually. This impact report outlines how Mozilla’s products, services, advocacy and engagement have influenced technology and society over the past year. The State of Mozilla also includes details on Mozilla’s finances as a way of further demonstrating how Mozilla uses the power of its unique structure and resources to achieve its mission — an internet that is open and accessible to all.

The post State of Mozilla 2019-2020: Annual Impact Report appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

The Rust Programming Language Blog: The Foundation Conversation

Mozilla planet - Mon, 07/12/2020 - 01:00

In August, we on the Core Team announced our plans to create a Foundation by the end of the year. Since that time, we’ve been doing a lot of work but it has been difficult to share many details, and we know that a lot of you have questions.

The "Foundation Conversation"

This blog post announces the start of the “Foundation Conversation”. This is a week-long period in which we have planned a number of forums and opportunities where folks can ask questions about the Foundation and get answers from the Core team. It includes both text-based “question-and-answer” (Q&A) periods as well as live broadcasts. We’re also going to be coming to the Rust team’s meetings to have discussions. We hope that this will help us to share our vision for the Foundation and to get the community excited about what’s to come.

A secondary goal for the Foundation Conversation is to help us develop the Foundation FAQ. Most FAQs get written before anyone has ever really asked a question, but we really wanted to write a FAQ that responds honestly to the questions that people have. We’ve currently got a draft of the FAQ which is based both on questions we thought people would ask and questions that were raised by Rust team members thus far, but we would like to extend it to include questions raised by people in the broader community. That’s where you come in!

How to join the conversation

There are many ways to participate in the Foundation Conversation:

  • Read the draft FAQ we’ve been working on. It contains the answers to some of the questions that we have been asked thus far.
  • Fill out our survey. This survey is designed to help us understand how the Rust community is feeling about the Foundation.
  • Ask questions during the Community Q&A periods. We’ve scheduled a number of 3 hour periods during which the foundation-faq-2020 repo will be open for anyone to ask questions. There will be members of the core team around during those periods to answer those questions as best we can.
  • Watch our Live Broadcasts. We’ve scheduled live broadcasts this week where members of the core team will be answering and discussing some of the questions that have come up thus far. These will be posted to YouTube later.

Read on for more details.

The foundation-faq-2020 repository

We have chosen to coordinate the Foundation Conversation using a GitHub repository called foundation-faq-2020. This repository contains the draft FAQ we’ve written so far, along with a series of issues representing the questions that people have. Last week we opened the repository for Rust team members, so you can see that we’ve already had quite a few questions raised (and answered). Once a new issue is opened, someone from the core team will come along and post an answer, and then label the question as “answered”.

Community Q&A sessions

We have scheduled a number of 3 hour periods in which the repository will be open for anyone to open new issues. Outside of these slots, the repository is generally “read only” unless you are a member of a Rust team. We are calling these slots the “Community Q&A” sessions, since it is a time for the broader community to open questions and get answers.

We’ve tried to stagger the times for the “Community Q&A” periods to be accessible from all time zones. During each slot, members of the core team will be standing by to monitor new questions and post answers. In some cases, if the question is complex, we may hold off on answering right away and instead take time to draft the response and post it later.

Here are the times that we’ve scheduled for folks to pose questions.

  • Dec 7th: 3-6pm PST / 6-9pm EST / 23:00-2:00 UTC (Europe/Africa) / 4:30am-7:30am India (Dec 8) / 7am-10am China (Dec 8)
  • Dec 9th: 4-7am PST / 7-10am EST / 12:00-15:00 UTC (Europe/Africa) / 5:30-8:30pm India / 8pm-11pm China
  • Dec 11th: 10am-1pm PST / 1-4pm EST / 18:00-21:00 UTC (Europe/Africa) / 11:30pm-2:30am India / 2am-5am China (Dec 12)

Live broadcasts

In addition to the repository, we’ve scheduled two “live broadcasts”. These sessions will feature members of the core team discussing and responding to some of the questions that have been asked thus far. Naturally, even if you can’t catch the live broadcast, the video will be available for streaming afterwards. Here is the schedule for these broadcasts:

  • Dec 9th (watch on YouTube): 3-4pm PST / 6-7pm EST / 23:00-24:00 UTC (Europe/Africa) / 4:30-5:30am India (Dec 10) / 7-8am China (Dec 10)
  • Dec 12th (watch on YouTube): 4-5am PST / 7-8am EST / 12:00-13:00 UTC (Europe/Africa) / 5:30pm-6:30pm India / 8-9pm China

These will be hosted on our YouTube channel.

We’re very excited about the progress on the Rust foundation and we’re looking forward to hearing from all of you.

Categories: Mozilla-nl planet

Nicholas Nethercote: Farewell, Mozilla

Mozilla planet - Fri, 04/12/2020 - 04:12

Today is my last day working for Mozilla. I will soon be starting a new job with Apple.

I have worked on a lot of different things over my twelve years at Mozilla. Some numbers:

  • Three years as a contractor, and nine as an employee.
  • 4,441 commits to mozilla-central, 560 to rustc, 148 to rustc-perf, and smaller numbers to several other repositories.
  • 2,561 bugs filed in Bugzilla, 2,118 bugs assigned to me, 27,647 comments, 2,411 patches reviewed.
  • Three module peerages and one module ownership.
  • 277 blog posts.
  • Six managers and four managees, across three teams. (One of my managees later became my manager. Thankfully, it worked well!)
  • More trans-Pacific air miles than I want to count.

Two areas of work stand out for me.

  • I started the MemShrink project and for several years played the roles of tech lead, engineering project manager, engineer, and publicist. It changed Firefox’s memory consumption from its biggest technical weakness into a strength, and enabled the use of more processes in Electrolysis (for responsiveness) and Fission (for security).
  • My work on the Rust compiler, rustc-perf, and related profilers helped the compiler become roughly 2.5x faster over a three year period, and laid a foundation for ongoing future improvements.

I have a lot of memories, and the ones relating to these two projects are at the forefront. Thank you to everyone I’ve worked with. It’s been a good time.

As I understand it, this blog will stay up in read-only mode indefinitely. I will make a copy of all the posts and if it ever goes down I will rehost them at my personal site.

All the best to everyone.

Categories: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla reacts to publication of the EU Democracy Action Plan

Mozilla planet - Thu, 03/12/2020 - 12:20

The European Commission has just published its new EU Democracy Action Plan (EDAP). This is an important step forward in the efforts to better protect democracy in the digital age, and we’re happy to see the Commission take onboard many of our recommendations.

Reacting to the EDAP publication, Raegan MacDonald, Mozilla’s Head of Public Policy, said:

“Mozilla has been a leading advocate for the need for greater transparency in online political advertising. We haven’t seen adequate steps from the platforms to address these problems themselves, and it’s time for regulatory solutions. So we welcome the Commission’s signal of support for the need for broad disclosure of sponsored political content. We likewise welcome the EDAP’s acknowledgement of the risks associated with microtargeting of political content.

As a founding signatory to the EU Code of Practice on Disinformation we are encouraged that the Commission has adopted many of our recommendations for how the Code can be enhanced, particularly with respect to its implementation and its role within a more general EU policy approach to platform responsibility.

We look forward to working with the EU institutions to fine-tune the upcoming legislative proposals.”

The post Mozilla reacts to publication of the EU Democracy Action Plan appeared first on Open Policy & Advocacy.

Categories: Mozilla-nl planet

Daniel Stenberg: Twitter lockout, again

Mozilla planet - Thu, 03/12/2020 - 09:05

Status: at 00:27 in the morning of December 4 my account was restored again. No word or explanation of how it happened – yet.

This morning (December 3rd, 2020) I woke up to find myself logged out from my Twitter account on the devices where I was previously logged in, due to “suspicious activity” on my account. I don’t know the exact time this happened. I checked my phone at around 07:30 and by then it had obviously already happened, so it took place at some time overnight.

Trying to log back in, I get prompted saying I need to update my password first. Trying that, it wants to send a confirmation email to an email address that isn’t mine! Someone has managed to modify the email address associated with my account.

It has only been two weeks since someone hijacked my account the last time and abused it for scams. When I got the account back, I made very sure I both set a good, long, password and activated 2FA on my account. 2FA with auth-app, not SMS.

The last time I wasn’t really sure about how good my account security was. This time I know I did it by the book. And yet this is what happened.

[Screenshot caption: Excuse the Swedish version, but it wasn’t my choice. Still, it shows the option to send the email confirmation to an email address that isn’t mine and I didn’t set it there.]

Communication

I was in touch with someone at Twitter security and provided lots of details about my systems, software, IP address, etc. while they researched what happened on their end. I was totally transparent and gave them all the info I had that could shed some light.

I was contacted by a Sr. Director from Twitter (late Dec 4 my time). We have established communication and I’ve been promised more details and information at some point next week. Stay tuned.

Was I breached?

Many people have proposed that the attacker must have come through my local machine to pull this off. If someone did, it was a very polished job, as there is no trace at all of that left anywhere on my machine. Also, to reset my password I would imagine the attacker would need to somehow hijack my Twitter session, get the 2FA code, or trigger a password reset and intercept the email. I don’t receive emails on my machine, so the attacker would then have had to (also?) manage to get into my email machine and remove that email – and not too many others, because I receive a lot of email and I’ve kept on receiving a lot of email during this period.

I’m not ruling it out. I’m just thinking it seems unlikely.

If the attacker had breached my phone and installed something nefarious on it, that would not have removed any reset emails, and it seems like a pretty tough challenge to hijack a “live” session from the Twitter client or get the 2FA code from the authenticator app. Not unthinkable either, just unlikely.

Most likely?

As I have no insights into the other end, I cannot really say which path I think the perpetrator most likely used for this attack, but I will maintain that I have no traces of a local attack or breach and I know of no malicious browser add-ons or Twitter apps on my devices.

Details

Firefox version 83.0 on Debian Linux with Tweetdeck in a tab – a long-lived session started over a week ago (i.e. no recent 2FA codes used).

Browser extensions: Cisco Webex, Facebook container, multi-account containers, HTTPS Everywhere, test pilot and ublock origin.

I only use one “authorized app” with Twitter and that’s Tweetdeck.

On the Android phone, I run an updated Android with an auto-updated Twitter client. That session also started over a week ago. I used Google Authenticator for 2FA.

While this hijack took place I was asleep at home (I don’t know the exact time of it), on my WiFi, so all my most relevant machines would’ve been seen as originating from the same “NATed” IP address. This info was also relayed to Twitter security.

Restored

The actual restoration happens like this (and it was exactly the same the last time): I just suddenly receive an email on how to reset the password for my account.

The email is a standard one without any specifics for this case. Just a template: press the big button and it takes you to the Twitter site where I can set a new password for my account. There is nothing in the mail that indicates a human was involved in sending it. There is no text explaining what happened. Oh, right, the mail also includes a bunch of standard security advice like “use a strong password”, “don’t share your password with others” and “activate two factor” etc. as if I hadn’t done all that already…

It would be prudent of Twitter to explain how this happened, at least roughly and without revealing sensitive details. If it was my fault somehow, or if I just made it easier because of something on my end, I would really like to know so that I can do better in the future.

What was done to it?

No tweets were sent. The name and profile picture remained intact. I’ve not seen any DMs sent or received while the account was “kidnapped”. Given this, it seems possible that the attacker actually only managed to change the associated account email address.

Categories: Mozilla-nl planet

Dustin J. Mitchell: Taskcluster's DB (Part 3) - Online Migrations

Mozilla planet - Wed, 02/12/2020 - 15:45

This is part 3 of a deep-dive into the implementation details of Taskcluster’s backend data stores. If you missed the first two, see part 1 and part 2 for the background, as we’ll jump right in here!

Big Data

A few of the tables holding data for Taskcluster contain tens or hundreds of millions of rows. That’s not what the cool kids mean when they say “Big Data”, but it’s big enough that migrations take a long time. Most changes to Postgres tables take a full lock on that table, preventing other operations from occurring while the change takes place. The duration of the operation depends on lots of factors, not just the amount of data already in the table, but also the kind of other operations going on at the same time.

The usual approach is to schedule a system downtime to perform time-consuming database migrations, and that’s just what we did in July. By running it on a clone of the production database, we determined that we could perform the migration completely in six hours. It turned out to take a lot longer than that. Partly, this was because we missed some things when we shut the system down, and left some concurrent operations running on the database. But by the time we realized that things were moving too slowly, we were near the end of our migration window and had to roll back. The time-consuming migration was version 20 - migrate queue_tasks, and it had been estimated to take about 4.5 hours.

When we rolled back, the DB was at version 19, but the code running the Taskcluster services corresponded to version 12. Happily, we had planned for this situation, and the redefined stored functions described in part 2 bridged the gap with no issues.

Patch-Fix

Our options were limited: scheduling another extended outage would have been difficult. We didn’t solve all of the mysteries of the poor performance, either, so we weren’t confident in our prediction of the time required.

The path we chose was to perform an “online migration”. I wrote a custom migration script to accomplish this. Let’s look at how that worked.

The goal of the migration was to rewrite the queue_task_entities table into a tasks table, with a few hundred million rows. The idea with the online migration was to create an empty tasks table (a very quick operation), then rewrite the stored functions to write to tasks, while reading from both tables. Then a background task can move rows from the queue_task_entities table to the tasks table without blocking concurrent operations. Once the old table is empty, it can be removed and the stored functions rewritten to address only the tasks table.
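
As a rough illustration of that structure, here is a minimal TypeScript sketch (using node-postgres) of the read-from-both-tables, write-to-the-new-table idea. In the real system this logic lives inside the redefined Postgres stored functions rather than in application code, and the table layout and column names below (task_id, payload) are simplified assumptions rather than the actual Taskcluster schema.

// Minimal sketch: reads fall back to the old table, writes go only to the new one.
// Table and column names are illustrative; the real logic is in stored functions.
import { Pool } from "pg";

const pool = new Pool();

async function getTask(taskId: string): Promise<Record<string, unknown> | null> {
  // New tasks, and tasks already moved by the background migration, are in `tasks`.
  const fromNew = await pool.query("SELECT * FROM tasks WHERE task_id = $1", [taskId]);
  if (fromNew.rows.length > 0) {
    return fromNew.rows[0];
  }
  // Anything not migrated yet is still in the old entities table.
  const fromOld = await pool.query(
    "SELECT * FROM queue_task_entities WHERE task_id = $1",
    [taskId]
  );
  return fromOld.rows[0] ?? null;
}

async function createTask(taskId: string, payload: object): Promise<void> {
  // Writes only ever touch the new table, so the old table slowly drains.
  await pool.query("INSERT INTO tasks (task_id, payload) VALUES ($1, $2::jsonb)", [
    taskId,
    JSON.stringify(payload),
  ]);
}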

A few things made this easier than it might have been. Taskcluster’s tasks have a deadline after which they become immutable, typically within one week of the task’s creation. That means that the task mutation functions can change the task in-place in whichever table they find it in. The background task only moves tasks with deadlines in the past. This eliminates any concerns about data corruption if a row is migrated while it is being modified.

A look at the script linked above shows that there were some complicating factors, too – notably, two more tables to manage – but those factors didn’t change the structure of the migration.

With this in place, we ran the replacement migration script, creating the new tables and updating the stored functions. Then a one-off JS script drove migration of post-deadline tasks with a rough ETA calculation. We figured this script would run for about a week, but in fact it was done in just a few days. Finally, we cleaned up the temporary functions, leaving the DB in precisely the state that the original migration script would have generated.

Supported Online Migrations

After this experience, we knew we would run into future situations where a “regular” migration would be too slow. Apart from that, we want users to be able to deploy Taskcluster without scheduling downtimes: requiring downtimes will encourage users to stay at old versions, missing features and bugfixes and increasing our maintenance burden.

We devised a system to support online migrations in any migration. Its structure is pretty simple: after each migration script is complete, the harness that handles migrations calls a _batch stored function repeatedly until it signals that it is complete. This process can be interrupted and restarted as necessary. The “cleanup” portion (dropping unnecessary tables or columns and updating stored functions) must be performed in a subsequent DB version.

The harness is careful to call the previous version’s online-migration function before it starts a version’s upgrade, to ensure it is complete. As with the old “quick” migrations, all of this is also supported in reverse to perform a downgrade.

The _batch functions are passed a state parameter that they can use as a bookmark. For example, a migration of the tasks might store the last taskId that it migrated in its state. Then each batch can begin with select .. where task_id > last_task_id, allowing Postgres to use the index to quickly find the next task to be migrated. When the _batch function indicates that it processed zero rows, the handler calls an _is_completed function. If this function returns false, then the whole process starts over with an empty state. This is useful for tables where rows that were skipped earlier in the migration may since have become eligible, such as tasks whose deadlines were still in the future at the time.
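
Here is a sketch of what that driver loop can look like, again as TypeScript with node-postgres. The stored-function names (tasks_migration_batch, tasks_migration_is_completed) and the shape of the returned row are assumptions made up for this example; the real harness in Taskcluster differs in its details.

// Hypothetical harness loop for one online-migration step.
import { Pool } from "pg";

const pool = new Pool();

async function runOnlineMigration(batchSize = 1000): Promise<void> {
  let state: Record<string, unknown> = {};
  for (;;) {
    // Each batch migrates up to `batchSize` rows, starting from the bookmark in
    // `state` (e.g. { last_task_id: "..." }), and returns an updated bookmark.
    const res = await pool.query(
      "SELECT count, next_state FROM tasks_migration_batch($1, $2::jsonb)",
      [batchSize, JSON.stringify(state)]
    );
    const { count, next_state: nextState } = res.rows[0];
    if (count > 0) {
      state = nextState; // keep going from the new bookmark
      continue;
    }
    // Zero rows processed: either we are done, or rows skipped earlier
    // (e.g. tasks whose deadlines were still in the future) are now eligible.
    const done = await pool.query("SELECT tasks_migration_is_completed() AS done");
    if (done.rows[0].done) {
      return;
    }
    state = {}; // restart with an empty bookmark to pick up newly eligible rows
  }
}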

Testing

An experienced engineer is, at this point, boggling at the number of ways this could go wrong! There are lots of points at which a migration might fail or be interrupted, and the operators might then begin a downgrade. Perhaps that downgrade is then interrupted, and the migration re-started! A stressful moment like this is the last time anyone wants surprises, but these are precisely the circumstances that are easily forgotten in testing.

To address this, and to make such testing easier, we developed a test framework that defines a suite of tests for all manner of circumstances. In each case, it uses callbacks to verify proper functionality at every step of the way. It tests both the “happy path” of a successful migration and the “unhappy paths” involving failed migrations and downgrades.

In Practice

The impetus to actually implement support for online migrations came from some work that Alex Lopez has been doing to change the representation of worker pools in the queue. This requires rewriting the tasks table to transform the provisioner_id and worker_type columns into a single, slash-separated task_queue_id column. The pull request is still in progress as I write this, but already serves as a great practical example of an online migration (and online downgrade, and tests).

Summary

As we’ve seen in this three-part series, Taskcluster’s data backend has undergone a radical transformation this year, from a relatively simple NoSQL service to a full Postgres database with sophisticated support for ongoing changes to the structure of that DB.

In some respects, Taskcluster is no different from countless other web services abstracting over a data-storage backend. Indeed, Django provides robust support for database migrations, as do many other application frameworks. One factor that sets Taskcluster apart is that it is a “shipped” product, with semantically-versioned releases which users can deploy on their own schedule. Unlike for a typical web application, we – the software engineers – are not “around” for the deployment process, aside from the Mozilla deployments. So, we must make sure that the migrations are well-tested and will work properly in a variety of circumstances.

We did all of this with minimal downtime and no data loss or corruption. This involved thousands of lines of new code written, tested, and reviewed; a new language (SQL) for most of us; and lots of close work with the Cloud Operations team to perform dry runs, evaluate performance, and debug issues. It couldn’t have happened without the hard work and close collaboration of the whole Taskcluster team. Thanks to the team, and thanks to you for reading this short series!

Categories: Mozilla-nl planet
