Mozilla Nederland - The Dutch Mozilla community

Mozilla warns websites to enable TLS 1.2 - Security.nl

News collected via Google - Thu, 16/05/2019 - 13:38
Mozilla warns websites to enable TLS 1.2  Security.nl

Thousands of websites must change the protocol they use to set up an encrypted connection with visitors, or they will soon be ...

Categories: Mozilla-nl planet

Mozilla VR Blog: Making ethical decisions for the immersive web

Mozilla planet - wo, 15/05/2019 - 20:01
Making ethical decisions for the immersive web

One of the promises of immersive technologies is real time communication unrestrained by geography. This is as transformative as the internet, radio, television, and telephones—each represents a pivot in mass communications that provides new opportunities for information dissemination and creating connections between people. This raises the question, “what’s the immersive future we want?”

We want to be able to connect without traveling. Indulge our curiosity and creativity beyond our physical limitations. Revolutionize the way we visualize and share our ideas and dreams. Enrich everyday situations. Improve access to limited resources like healthcare and education.

The internet is an integral part of modern life—a key component in education, communication, collaboration, business, entertainment and society as a whole.
— Mozilla Manifesto, Principle 1

My first instinct is to say that I want an immersive future that brings joy. Do AR apps that help me maintain my car bring me joy? Not really.

What I really want is an immersive future that respects individual creators and users. Platforms and applications that thoughtfully approach issues of autonomy, privacy, bias, and accessibility in a complex environment. How do we get there? First, we need to understand the broader context of augmented and virtual reality in ethics, identifying overlap with both other technologies (e.g. artificial intelligence) and other fields (e.g. medicine and education). Then, we can identify the unique challenges presented by spatial and immersive technologies. Given the current climate of ethics and privacy, we can anticipate potential problems, identify the core issues, and evaluate different approaches.

From there, we have an origin for discussion and a path for taking practical steps that enable legitimate uses of MR while discouraging abuse and empowering individuals to make choices that are right for them.

For details and an extended discussion on these topics, see this paper.

The immersive web

Whether you have a $30 or $3000 headset, you should be able to participate in the same immersive universe. No person should be excluded due to their skin color, hairstyle, disability, class, location, or any other reason.

The internet is a global public resource that must remain open and accessible.
Mozilla Manifesto, Principle 2

The immersive web represents an evolution of the internet. Immersive technologies are already deployed in education and healthcare. It's unethical to limit their benefits to a privileged few, particularly when MR devices can improve access to limited resources. For example, Americans living in rural areas are underserved by healthcare, particularly specialist care. In an immersive world, location is no longer an obstacle. Specialists can be virtually present anywhere, just as if they were in the room with the patient. Trained nurses and assistants would still be required for physical manipulations and interactions, but this could dramatically improve health coverage and reduce burdens on both patients and providers.

While we can build accessibility into browsers and websites, the devices themselves need to be created with appropriate accommodations, like settings that indicate a user is in a wheelchair. When we design devices and experiences, we need to consider how they'll work for people with disabilities. It's imperative to build inclusive MR devices and experiences, both because it's unethical to exclude users due to disability, and because there are so many opportunities to use MR as an assistive technology, including:

  • Real time subtitles
  • Gaze-based navigation
  • Navigation with vehicle and obstacle detection and warning

The immersive web is for everyone.

Representation and safety

Mixed reality offers new ways to connect with each other, enabling us to be virtually present anywhere in the world instantaneously. Like most technologies, this is both a good and a bad thing. While it transforms how we can communicate, it also offers new vectors for abuse and harassment.

All social VR platforms need to have simple and obvious ways to report abusive behavior and block the perpetrators. All social platforms, whether 2D or 3D, should have this, but the embodiment that VR enables intensifies the impact of harassment. Behavior that would once have required physical presence is no longer geographically limited, and identities can be more easily obfuscated. Safety is not a 'nice to have' feature — it's a requirement. Safety is a key component of inclusion and freedom of expression, as well as being a human right.

Freedom of expression in this paradigm includes both choosing how to present yourself and having the ability to maintain multiple virtual identities. Immersive social experiences allow participants to literally build their identities via avatars. Human identity is infinitely complex (and not always very human — personally, I would choose a cat avatar). Thoughtfully approaching diversity and representation in avatars isn't easy, but it is worthwhile.

Individuals must have the ability to shape the internet and their own experiences on it.
Mozilla Manifesto, Principle 5

Suppose Banksy, a graffiti artist known both for their art and their anonymity, is an accountant by day who uses an HMD to conduct virtual meetings. Outside of work, Banksy is a virtual graffiti artist. However, biometric data could tie the two identities together, stripping Banksy of their anonymity. Anonymity enables free speech; it removes the threats of economic retaliation and social ostracism and allows consumers to process ideas free of prejudices about the creators. There's a long history of women who wrote under assumed names to avoid being dismissed for their gender, including JK Rowling and George Sand.

Unique considerations in mixed reality

Immersive technologies differ from others in their ability to affect our physical bodies. To achieve embodiment and properly interact with virtual elements, devices use a wide range of data derived from user biometrics, the surrounding physical world, and device orientation. As the technology advances, the data sources will expand.

The sheer amount of data required for MR experiences to function requires that we rethink privacy. Earlier, I mentioned that gaze-based navigation can be used to allow mobility impaired users to participate more fully on the immersive web. Unfortunately, gaze tracking data also exposes large amounts of nonverbal data that can be used to infer characteristics and mental states, including ADHD and sexual arousal.

Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
Mozilla Manifesto, Principle 4

While there may be a technological solution to this problem, it highlights a wider social and legal issue: we've become too accustomed to companies monetizing our personal data. It would be easy to conclude that, although users report privacy concerns, they don't really care, because they 'consent' to disclosing personal information for small rewards. The reality is that privacy is hard. It's hard to define, and it's harder to defend. Processing privacy policies feels like it requires a law degree, and quantifying the risks and tradeoffs is anything but straightforward. In the US, we've framed privacy as an individual's responsibility, while Europe (with the General Data Protection Regulation) shows that it's society's problem and should be tackled comprehensively.

Concrete steps for ethical decision making

Ethical principles aren't enough. We also need to take action — while some solutions will be technical, there are also legal, regulatory, and societal challenges that need to be addressed.

  1. Educate and assist lawmakers
  2. Establish a regulatory authority for flexible and responsive oversight
  3. Engage engineers and designers to incorporate privacy by design
  4. Empower users to understand the risks and benefits of immersive technology
  5. Incorporate experts from other fields who have addressed similar problems

Tech needs to take responsibility. We've built technology that has incredible positive potential, but also serious risks for abuse and unethical behavior. Mixed reality technologies are still emerging, so there's time to shape a more respectful and empowering immersive world for everyone.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Empowering User Privacy and Decentralizing IoT with Mozilla WebThings

Mozilla planet - wo, 15/05/2019 - 19:47

Smart home devices can make busy lives a little easier, but they also require you to give up control of your usage data to companies for the devices to function. In a recent article from the New York Times’ Privacy Project about protecting privacy online, the author recommended that people not buy Internet of Things (IoT) devices unless they’re “willing to give up a little privacy for whatever convenience they provide.”

This is sound advice, since smart home companies can not only know if you’re at home when you say you are, but will soon be able to listen for your sniffles through their always-listening microphones and recommend sponsored cold medicine from affiliated vendors. Moreover, by both requiring that users’ data go through their servers and by limiting interoperability between platforms, leading smart home companies are chipping away at people’s ability to make real, nuanced technology choices as consumers.

At Mozilla, we believe that you should have control over your devices and the data that smart home devices create about you. You should own your data, you should have control over how it’s shared with others, and you should be able to contest when a data profile about you is inaccurate.

Mozilla WebThings follows the privacy by design framework, a set of principles developed by Dr. Ann Cavoukian that takes users’ data privacy into account throughout a product’s whole design and engineering lifecycle. Prioritizing people over profits, we offer an alternative approach to the Internet of Things, one that’s private by design and gives control back to you, the user.

User Research Findings on Privacy and IoT Devices

Before we look at the design of Mozilla WebThings, let’s talk briefly about how people think about their privacy when they use smart home devices and why we think it’s essential that we empower people to take charge.

Today, when you buy a smart home device, you are buying the convenience of being able to control and monitor your home via the Internet. You can turn a light off from the office. You can see if you’ve left your garage door open. Prior research has shown that users are passively, and sometimes actively, willing to trade their privacy for the convenience of a smart home device. When it seems like there’s no alternative between having a potentially useful device or losing their privacy, people often uncomfortably choose the former.

Still, although people are buying and using smart home devices, it does not mean they’re comfortable with this status quo. In one of our recent user research surveys, we found that almost half (45%) of the 188 smart home owners we surveyed were concerned about the privacy or security of their smart home devices.

Bar graph showing about 45% of the 188 current smart home owners we surveyed were concerned about their privacy or security at least a few times per month.

User Research Survey Results

In fall 2018, our user research team conducted a diary study with eleven participants across the United States and the United Kingdom. We wanted to know how usable and useful people found our WebThings software. So we gave each of our research participants some Raspberry Pis (loaded with the Things 0.5 image) and a few smart home devices.

User research participants were given a Raspberry Pi, a smart light, a motion sensor, a smart plug, and a door sensor.

Smart Home Devices Given to Participants for User Research Study

We watched, either in-person or through video chat, as each individual walked through the set up of their new smart home system. We then asked participants to write a ‘diary entry’ every day to document how they were using the devices and what issues they came across. After two weeks, we sat down with them to ask about their experience. While a couple of participants who were new to smart home technology were ecstatic about how IoT could help them in their lives, a few others were disappointed with the lack of reliability of some of the devices. The rest fell somewhere in between, wanting improvements such as more sophisticated rules functionality or a phone app to receive notifications on their iPhones.

We also learned more about people’s attitudes and perceptions around the data they thought we were collecting about them. Surprisingly, all eleven of our participants expected data to be collected about them. They had learned to expect data collection, as this has become the prevailing model for other platforms and online products. A few thought we would be collecting data to help improve the product or for research purposes. However, upon learning that no data had been collected about their use, a couple of participants were relieved that they would have one less thing, data, to worry about being misused or abused in the future.

By contrast, others said they weren’t concerned about data collection; they did not think companies could make use of what they believed was menial data, such as when they were turning a light on or off. They did not see the implications of how collected data could be used against them. This showed us that we can improve on how we demonstrate to users what others can learn from their smart home data. For example, one can find out when you’re not home based on when your door has opened and closed.


Door Sensor Logs can Reveal When Someone is Not Home
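As a toy illustration of the inference in the caption above (the event log and the eight-hour threshold are both made up), a few lines of Python are enough to flag long quiet stretches in a door sensor's history:

```python
def away_windows(events, min_gap_hours=8.0):
    """Flag long gaps between door events, which an observer could
    read as 'nobody home'. `events` is a time-sorted list of
    (hour, action) tuples."""
    gaps = []
    for (t0, _), (t1, _) in zip(events, events[1:]):
        if t1 - t0 >= min_gap_hours:
            gaps.append((t0, t1))
    return gaps

# Hypothetical log: door used in the morning, then silence until evening.
log = [(7.5, "open"), (7.6, "close"), (18.0, "open"), (18.1, "close")]
print(away_windows(log))  # → [(7.6, 18.0)]
```

Nothing here is sophisticated, which is exactly the point: even trivially simple analysis of "menial" event data reveals occupancy patterns.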

From our user research, we’ve learned that people are concerned about the privacy of their smart home data. And yet, when there’s no alternative, they feel the need to trade away their privacy for convenience. Others aren’t as concerned because they don’t see the long-term implications of collected smart home data. We believe privacy should be a right for everyone regardless of their socioeconomic or technical background. Let’s talk about how we’re doing that.

Decentralizing Data Management Gives Users Privacy

Vendors of smart home devices have architected their products to be of more service to themselves than to their customers. Using the typical IoT stack, in which devices don’t easily interoperate, they can build a robust picture of user behavior, preferences, and activities from data they have collected on their servers.

Take the simple example of a smart light bulb. You buy the bulb, and you download a smartphone app. You might have to set up a second box to bridge data from the bulb to the Internet and perhaps a “user cloud subscription account” with the vendor so that you can control the bulb whether you’re home or away. Now imagine five years into the future when you have installed tens to hundreds of smart devices including appliances, energy/resource management devices, and security monitoring devices. How many apps and how many user accounts will you have by then?

The current operating model requires you to give your data to vendor companies for your devices to work properly. This, in turn, requires you to work with or around companies and their walled gardens.

Mozilla’s solution puts the data back in the hands of users. In Mozilla WebThings, there are no company cloud servers storing data from millions of users. User data is stored in the user’s home. Backups can be stored anywhere. Remote access to devices occurs from within one user interface. Users don’t need to download and manage multiple apps on their phones, and data is tunneled through a private, HTTPS-encrypted subdomain that the user creates.

The only data Mozilla receives is the ping sent when a subdomain checks our server for updates to the WebThings software. And if a user only wants to control their devices locally and not have anything go through the Internet, they can choose that option too.

Decentralized distribution of WebThings Gateways in each home means that each user has their own private “data center”. The gateway acts as the central nervous system of their smart home. With smart home data distributed across individual homes, it becomes much harder for attackers to compromise millions of users at once. This decentralized data storage and management approach offers a double advantage: it keeps user data private, and it stores that data securely behind a firewall, protected in transit by HTTPS encryption.

The figure below compares Mozilla’s approach to that of today’s typical smart home vendor.

Comparison of Mozilla’s Approach to Typical Smart Home Vendor

Mozilla’s approach gives users an alternative to current offerings, providing them with data privacy and the convenience that IoT devices can provide.

Ongoing Efforts to Decentralize

In designing Mozilla WebThings, we have consciously insulated users from servers that could harvest their data, including our own Mozilla servers, by offering an interoperable, decentralized IoT solution. Our decision to not collect data is integral to our mission and additionally feeds into our Emerging Technology organization’s long-term interest in decentralization as a means of increasing user agency.

WebThings embodies our mission to treat personal security and privacy on the Internet as a fundamental right, giving power back to users. From Mozilla’s perspective, decentralized technology has the ability to disrupt centralized authorities and provide more user agency at the edges, to the people.

Decentralization can be an outcome of social, political, and technological efforts to redistribute the power of the few and hand it back to the many. We can achieve this by rethinking and redesigning network architecture. By enabling IoT devices to work on a local network without the need to hand data to connecting servers, we decentralize the current IoT power structure.

With Mozilla WebThings, we offer one example of how a decentralized, distributed system over web protocols can impact the IoT ecosystem. Concurrently, our team has an unofficial draft Web Thing API specification to support standardized use of the web for other IoT device and gateway creators.
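As a hedged sketch of what that standardized use looks like in practice: in the draft Web Thing API, a device property is exposed as a plain web resource that returns a small JSON object keyed by the property name. The gateway address and thing name below are hypothetical.

```python
import json
from urllib.request import urlopen  # used only by the commented-out live call

GATEWAY = "http://gateway.local:8080"  # hypothetical local gateway address

def property_url(thing_id, prop):
    # Properties are plain REST resources under the thing they belong to.
    return f"{GATEWAY}/things/{thing_id}/properties/{prop}"

def parse_property(payload, prop):
    """A property GET returns JSON keyed by the property name,
    e.g. {"on": true} for a smart bulb's on/off state."""
    return json.loads(payload)[prop]

# Live call against a local gateway (requires a real device):
# is_on = parse_property(urlopen(property_url("lamp", "on")).read(), "on")
```

Note that the request never has to leave the local network: the gateway answers directly, which is the decentralization the post describes.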

While this is one way we are making strides to decentralize, there are complementary projects, ranging from conceptual to developmental stages, with similar aims to put power back into the hands of users. Signals from other players, such as FreedomBox Foundation, Daplie, and Douglass, indicate that individuals, households, and communities are seeking the means to govern their own data.

By focusing on people first, Mozilla WebThings gives people back their choice: whether it’s about how private they want their data to be or which devices they want to use with their system.

This project is an ongoing effort. If you want to learn more or get involved, check out the Mozilla WebThings Documentation. You can contribute to our documentation, or get started building your own web things or Gateway.

If you live in the Bay Area, you can find us this weekend at Maker Faire Bay Area (May 17-19). Stop by our table. Or follow @mozillaiot to learn about upcoming workshops and demos.

The post Empowering User Privacy and Decentralizing IoT with Mozilla WebThings appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: TLS 1.0 and 1.1 Removal Update

Mozilla planet - wo, 15/05/2019 - 16:01
tl;dr Enable support for Transport Layer Security (TLS) 1.2 today!

 

As you may have read last year in the original announcement posts, Safari, Firefox, Edge and Chrome are removing support for TLS 1.0 and 1.1 in March of 2020. If you manage websites, this means there’s less than a year to enable TLS 1.2 (and, ideally, 1.3) on your servers, otherwise all major browsers will display error pages, rather than the content your users were expecting to find.

Screenshot of a Secure Connection Failed error page

In this article we provide some resources to check your sites’ readiness, and start planning for a TLS 1.2+ world in 2020.

Check the TLS “Carnage” list

Once a week, the Mozilla Security team runs a scan on the Tranco list (a research-focused top sites list) and generates a list of sites still speaking TLS 1.0 or 1.1, without supporting TLS ≥ 1.2.

Tranco list top sites with TLS <= 1.1

As of this week, there are just over 8,000 affected sites from the one million listed by Tranco.

There are a few potential gotchas to be aware of, if you do find your site on this list:

  • 4% of the sites are using TLS ≤ 1.1 to redirect from a bare domain (https://example.com) to www (https://www.example.com) on TLS ≥ 1.2 (or vice versa). If you were to only check your site post-redirect, you might miss a potential footgun.
  • 2% of the sites don’t redirect from bare to www (or vice versa), but do support TLS ≥ 1.2 on one of them.

The vast majority (94%), however, are just bad—it’s TLS ≤ 1.1 everywhere.

If you find that a site you work on is in the TLS “Carnage” list, you need to come up with a plan for enabling TLS 1.2 (and 1.3, if possible). However, this list only covers 1 million sites. Depending on how popular your site is, you might have some work to do even if you’re not listed by Tranco.
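If you'd rather check a host yourself before turning to an online scanner, a minimal sketch using Python's ssl module can tell you whether a server completes a modern handshake (the hostname in the comment is a placeholder):

```python
import socket
import ssl

def supports_modern_tls(host, port=443, timeout=5):
    """Return True if `host` completes a handshake at TLS 1.2 or newer."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse anything older
    ctx.load_default_certs()
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() in ("TLSv1.2", "TLSv1.3")
    except (ssl.SSLError, OSError):
        return False

# supports_modern_tls("example.com")  # placeholder host
```

Remember the redirect gotchas above: to be thorough, run the check against both the bare domain and the www variant.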

Run an online test

Even if you’re not on the “Carnage” list, it’s a good idea to test your servers all the same. There are a number of online services that will do some form of TLS version testing for you, but only a few will flag not supporting modern TLS versions in an obvious way. We recommend using one or more of the following:

Check developer tools

Another way to do this is to open up the Firefox (versions 68+) or Chrome (versions 72+) DevTools, and look for the following warnings in the console as you navigate around your site.

Firefox DevTools console warning

Chrome DevTools console warning

What’s Next?

This October, we plan on disabling old TLS in Firefox Nightly, and you can expect the same for the Chrome and Edge Canaries. We hope this will give sites enough time to upgrade before the change reaches users on the release channels.

The post TLS 1.0 and 1.1 Removal Update appeared first on Mozilla Hacks - the Web developer blog.

Categories: Mozilla-nl planet

Cameron Kaiser: ZombieLoad doesn't affect Power Macs

Mozilla planet - wo, 15/05/2019 - 03:18
The latest in the continued death march of speculative execution attacks is ZombieLoad (see our previous analysis of Spectre and Meltdown on Power Macs). ZombieLoad uses the same types of observable speculation flaws to exfiltrate data but bases it on a new class of Intel-specific side-channel attacks utilizing a technique the investigators termed MDS, or microarchitectural data sampling. While Spectre and Meltdown attack at the cache level, ZombieLoad targets Intel HyperThreading (HT), the company's implementation of symmetric multithreading, by trying to snoop on the processor's line fill buffers (LFBs) used to load the L1 cache itself. In this case, side-channel leakages of data are possible if the malicious process triggers certain specific and ultimately invalid loads from memory -- hence the nickname -- that require microcode assistance from the CPU; these have side-effects on the LFBs which can be observed by methods similar to Spectre by other processes sharing the same CPU core. (Related attacks against other microarchitectural structures are analogously implemented.)

The attackers don't have control over the observed address, so they can't easily read arbitrary memory, but careful scanning for the type of data you're targeting can still make the attack effective even against the OS kernel. For example, since URLs can be picked out of memory, this apparent proof of concept shows a separate process running on the same CPU victimizing Firefox to extract the URL as the user types it in. This works because as the user types, the values of the individual keystrokes go through the LFB to the L1 cache, allowing the malicious process to observe the changes and extract characters. There is much less data available to the attacking process but that also means there is less to scan, making real-time attacks like this more feasible.

That said, because the attack is specific to architectural details of HT (and the authors of the attack say they even tried on other SMT CPUs without success), this particular exploit wouldn't work even against modern high-SMT count Power CPUs like POWER9. It certainly won't work against a Power Mac because no Power Mac CPU ever implemented SMT, not even the G5. While Mozilla is deploying a macOS-specific fix, we don't need it in TenFourFox, nor do we need other mitigations. It's especially bad news for Intel because nearly every Intel chip since 2011 is apparently vulnerable and the performance impact of fixing ZombieLoad varies anywhere from Intel's Pollyanna estimate of 3-9% to up to 40% if HT must be disabled completely.

Is this a major concern for users? Not as such: although the attacks appear to be practical and feasible, they require you to run dodgy software and that's a bad idea on any platform because dodgy software has any number of better ways of pwning your computer. So don't run dodgy programs!

Meanwhile, TenFourFox FPR14 final should be available for testing this weekend.

Categories: Mozilla-nl planet

The Rust Programming Language Blog: 4 years of Rust

Mozilla planet - wo, 15/05/2019 - 02:00

On May 15th, 2015, Rust was released to the world! After 5 years of open development (and a couple of years of sketching before that), we finally hit the button on making the attempt to create a new systems programming language a serious effort!

It’s easy to look back on the pre-1.0 times and cherish them for being the wild times of language development and fun research. Features were added and cut, syntax and keywords were tried, and before 1.0, there was a big clean-up that removed a lot of the standard library. For fun, you can check Niko’s blog post on how Rust's object system works, Marijn Haverbeke’s talk on features that never made it close to 1.0 or even the introductory slides about Servo, which present a language looking very different from today.

Releasing Rust with stability guarantees also meant putting a stop to large visible changes. The face of Rust is still very similar to Rust 1.0. Even with the changes from last year’s 2018 Edition, Rust is still very recognizable as what it was in 2015. That steadiness hides that the time of Rust’s fastest development and growth is now. With the stability of the language and easy upgrades as a base, a ton of new features have been built. We’ve seen a bunch of achievements in the last year:

This list could go on and on. While the time before and after the release was a period where language changes had a huge impact on how Rust is perceived, what people build in and around the language is becoming more and more important. This includes projects like whole game engines, but also many small, helpful libraries, meetup formats, tutorials, and other educational material. Birthdays are a great time to take a look back over the last year and see the happy parts!

Rust would be nothing, and especially not winning prizes, without its community. Community happens everywhere! We would like to thank everyone for being along on this ride, from team members to small-scale contributors to people just checking the language out and finding interest in it. Your interest and curiosity are what make the Rust community an enjoyable place to be. Some meetups are running birthday parties today to which everyone is invited. If you are not attending one, you can take the chance to celebrate in any other fashion: maybe show us a picture of what you are currently working on or talk about what excites you. If you want to take it to social media, consider tagging our Twitter account or using the hashtag #rustbirthday.

Categories: Mozilla-nl planet

Mike Hoye: The Next Part Of The Process

Mozilla planet - di, 14/05/2019 - 19:05


I’ve announced this upcoming change and the requirements we’ve laid out for a replacement service for IRC, but I haven’t widely discussed the evaluation process in any detail, including what you can expect it to look like, how you can participate, and what you can expect from me. I apologize for that, and really should have done so sooner.

Briefly, I’ll be drafting a template doc based on our stated requirements, and once that’s in good, markdowny shape we’ll be putting it on GitHub with preliminary information for each of the stacks we’re considering and opening it up to community discussion and participation.

From there, we’re going to be taking pull requests and assembling our formal understanding of each of the candidates. As well, we’ll be soliciting more general feedback and community impressions of the candidate stacks on Mozilla’s Community Discourse forum.

I’ll be making an effort to ferry any useful information on Discourse back to GitHub, which unfortunately presents some barriers to some members of our community.

While this won’t be quite the same as a typical RFC/RFP process – I expect the various vendors as well as members of the Mozilla community to be involved – we’ll be taking a lot of cues from the Rust community’s hard-won knowledge about how to effectively run a public consultation process.

In particular, it’s critical to me that this process to be as open and transparent as possible, explicitly time-boxed, and respectful of the Mozilla Community Participation Guidelines (CPG). As I’ve mentioned before, accessibility and developer productivity will both weigh heavily on our evaluation process, and the Rust community’s “no new rationale” guidelines will be respected when it comes time to make the final decision.

When it kicks off, this step will be widely announced both inside and outside Mozilla.

As part of that process, our IT team will be standing up instances of each of the candidate stacks and putting them behind the Participation Systems team’s “Mozilla-IAM” auth system. We’ll be making them available to the Mozilla community at first, and expanding that to include Github and via-email login soon afterwards for broader community testing. Canonical links to these trial systems will be prominently displayed on the GitHub repository; as the line goes, accept no substitutes.

Some things to note: we will also be using this period to evaluate these tools from a community moderation and administration perspective as well, to make sure that we have the tools and process available to meaningfully uphold the CPG.

To put this somewhat more charitably than it might deserve, we expect that some degree of this testing will be a typical if unfortunate byproduct of the participative process. But we also have plans to automate some of that stress-testing, to test both platform API usability and the effectiveness of our moderation tools. Which I suppose is a long-winded way of saying: you’ll probably see some robots in there play-acting at being jerks, and we’re going to ask you to play along and figure out how to flag them as bad actors so we can mitigate the jerks of the future.

As well, we’re going to do what’s necessary to avoid the temporary-permanence trap, and at the end of the evaluation period all the instances of our various candidates will be shut down and deleted.

Our schedule is still being sorted out, and I’ll have more about that and our list of candidates shortly.

Categorieën: Mozilla-nl planet

Replacement of thousands of PKIoverheid certificates has begun - Security.nl

News collected via Google - Tue, 14/05/2019 - 09:00
Replacement of thousands of PKIoverheid certificates has begun  Security.nl

Over the coming months, thousands of PKIoverheid certificates will be replaced because they no longer meet the established requirements.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.34.2

Mozilla planet - di, 14/05/2019 - 02:00

The Rust team has published a new point release of Rust, 1.34.2. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.34.2 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.34.2 stable

Sean McArthur reported a security vulnerability affecting the standard library that caused the Error::downcast family of methods to perform unsound casts when a manual implementation of the Error::type_id method returned the wrong TypeId, leading to security issues such as out of bounds reads/writes/etc.

The Error::type_id method was recently stabilized as part of Rust 1.34.0. This point release destabilizes it, preventing any code on the stable and beta channels from implementing or using it, pending the future plans that will be discussed in issue #60784.

An in-depth explanation of this issue was posted in yesterday's security advisory. The assigned CVE for the vulnerability is CVE-2019-12083.

Categorieën: Mozilla-nl planet

Mozilla Reps Community: Rep of the Month – April 2019

Mozilla planet - ma, 13/05/2019 - 19:07

Please join us in congratulating Lidya Christina, Rep of the Month for April 2019!

Lidya Christina is from Jakarta, Indonesia. Her contribution to a SUMO event in 2016 turned her into a proud Mozillian and an active contributor to Mozilla Indonesia, and in March 2019 she joined the Reps program.


In addition to that, she’s a key member who oversees the day-to-day operational work in the Mozilla Community Space Jakarta, while regularly organizing localization events and actively participating in campaigns like the Firefox 66 Support Mozilla Sprint, Firefox Fights for You, Become a Dark Funnel Detective, and Common Voice sprints.

Congratulations and keep rocking the open web! :tada: :tada:

To congratulate Lidya, please head over to Discourse!

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Introducing Josh and Jeremy to the SUMO team

Mozilla planet - ma, 13/05/2019 - 18:55

Today the SUMO team would like to welcome Josh and Jeremy who will be joining our team from Boise, Idaho.

Josh and Jeremy will be joining our team to help out on Support for some of the new efforts Mozilla is working on toward creating a connected and integrated Firefox experience.

They will be helping out with new products, but also providing support on forums and social channels, as well as serving as an escalation point for hard-to-solve issues.

A bit about Josh:

Hey everyone! My name is Josh Wilson and I will be working as a contractor for Mozilla. I have been working in a variety of customer support and tech support jobs over the past ten years. I enjoy camping and hiking during the summers, and playing console RPGs in the winters. I recently started cooking Indian food, but this has been quite the learning curve for me. I am so happy to be a part of the Mozilla community and look forward to offering my support.

A bit about Jeremy:

Hello! My name is Jeremy Sanders and I’m a contractor for Mozilla through a small company named PartnerHero. I’ve been working in the field of Information Technology since 2015 with a variety of government, educational, and private entities. In my free time, I like to get out of the office and go fly fishing, camping, or hiking. I also play quite a few video games, such as Counter-Strike: Global Offensive and League of Legends. I am very excited to start my time here with Mozilla and begin working in conjunction with the community to provide support for users!

Please say hi to them when you see them!

Categorieën: Mozilla-nl planet

Mozilla looking for better Tor integration in Firefox - Techzine.be

News collected via Google - Mon, 13/05/2019 - 09:00
Mozilla looking for better Tor integration in Firefox  Techzine.be

Mozilla is looking for a more efficient way to integrate Tor into Firefox, since the current integration slows Firefox down. To achieve this, Mozilla ...

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Security advisory for the standard library

Mozilla planet - ma, 13/05/2019 - 02:00

This is a cross-post of the official security advisory. The official post contains a signed version with our PGP key, as well.

The CVE for this vulnerability is CVE-2019-12083.

The Rust team was recently notified of a security vulnerability affecting manual implementations of Error::type_id and their interaction with the Error::downcast family of functions in the standard library. If your code does not manually implement Error::type_id your code is not affected.

Overview

The Error::type_id function in the standard library was stabilized in the 1.34.0 release on 2019-04-11. This function allows acquiring the concrete TypeId for the underlying error type to downcast back to the original type. This function has a default implementation in the standard library, but it can also be overridden by downstream crates. For example, the following is currently allowed on Rust 1.34.0 and Rust 1.34.1:

struct MyType;

impl Error for MyType {
    fn type_id(&self) -> TypeId {
        // Enable safe casting to `String` by accident.
        TypeId::of::<String>()
    }
}

When combined with the Error::downcast* family of methods this can enable safe casting of a type to the wrong type, causing security issues such as out of bounds reads/writes/etc.

Prior to the 1.34.0 release this function was not stable and could not be either implemented or called in stable Rust.

Affected Versions

The Error::type_id function was first stabilized in Rust 1.34.0, released on 2019-04-11. The Rust 1.34.1 release, published 2019-04-25, is also affected. The Error::type_id function has been present, unstable, in all releases of Rust since 1.0.0, meaning code compiled with nightly may have been affected at any time.

Mitigations

Immediate mitigation of this bug requires removing manual implementations of Error::type_id, instead inheriting the default implementation which is correct from a safety perspective. It is not the intention to have Error::type_id return TypeId instances for other types.

For long term mitigation we are going to destabilize this function. This is unfortunately a breaking change for users calling Error::type_id and for users overriding it. For users overriding it, the code is likely memory-unsafe; users calling Error::type_id have only been able to do so on stable for a few weeks since the 1.34.0 release, so it's thought that the impact will not be too great to overcome.

We will be releasing a 1.34.2 point release on 2019-05-14 (tomorrow) which reverts #58048 and destabilizes the Error::type_id function. The upcoming 1.35.0 release along with the beta/nightly channels will also all be updated with a destabilization.

The final fate of the Error::type_id API isn't decided upon just yet and is the subject of #60784. No action beyond destabilization is currently planned so nightly code may continue to exhibit this issue. We hope to fully resolve this in the standard library soon.

Timeline of events
  • Thu, May 9, 2019 at 14:07 - Bug reported to security@rust-lang.org
  • Thu, May 9, 2019 at 15:10 - Alex responds, confirming the bug
  • Fri, May 10, 2019 - Plan for mitigation developed and implemented
  • Mon, May 13, 2019 - PRs posted to GitHub for stable/beta/master branches
  • Mon, May 13, 2019 - Security list informed of this issue
  • (planned) Tue, May 14, 2019 - Rust 1.34.2 is released with a fix for this issue
Acknowledgements

Thanks to Sean McArthur, who found this bug and reported it to us in accordance with our security policy.

Categorieën: Mozilla-nl planet

Daniel Stenberg: The curl user survey 2019

Mozilla planet - ma, 13/05/2019 - 00:25

the survey

For the 6th consecutive year, the curl project is running a “user survey” to learn more about what people are using curl for, what they think of curl, what they need from curl, and what they wish from curl going forward.

the survey

As in most projects, we love to learn more about our users and how to improve. For this, we need your input to guide us where to go next and what to work on going forward.

the survey

Please consider donating a few minutes of your precious time and tell me about your views on curl. How do you use it and what would you like to see us fix?

the survey

The survey will be up for 14 straight days and will be taken down at midnight (CEST) on May 26th. We’d appreciate it if you encourage your curl friends to participate in the survey.

Bonus: the analysis from the 2018 survey.

Categorieën: Mozilla-nl planet

Nick Desaulniers: f() vs f(void) in C vs C++

Mozilla planet - zo, 12/05/2019 - 22:42

TL;DR

Prefer f(void) in C to potentially save a 2B instruction per function call when targeting x86_64 as a micro-optimization. -Wstrict-prototypes can help. Doesn’t matter for C++.

The Problem

While messing around with some C code in godbolt Compiler Explorer, I kept noticing a particular funny case. It seemed with my small test cases that sometimes function calls would zero out the return register before calling a function that took no arguments, but other times not. Upon closer inspection, it seemed like a difference between function definitions, particularly f() vs f(void). For example, the following C code:

int foo();
int bar(void);

int baz() {
  foo();
  bar();
  return 0;
}

would generate the following assembly:

baz:
  pushq %rax        # realign stack for callq
  xorl %eax, %eax   # zero %al, non variadic
  callq foo
  callq bar         # why you no zero %al?
  xorl %eax, %eax
  popq %rcx
  retq

In particular, focus on the call to foo vs the call to bar. foo is preceded with xorl %eax, %eax (X ^ X == 0, and this is the shortest encoding for an instruction that zeroes a register on the variable length encoded x86_64, which is why it's used a lot, such as when setting the return value). (If you’re curious about the pushq/popq, see point #1.) Now I’ve seen zeroing before (see point #3 and remember that %al is the lowest byte of %eax and %rax), but if it was done for the call to foo, then why was it not additionally done for the call to bar? %eax, being x86_64’s return register for the C ABI, should be treated as call clobbered. So if you set it, then made a function call that may have clobbered it (and you can’t deduce otherwise), then wouldn’t you have to reset it to make an additional function call?

Let’s look at a few more cases and see if we can find the pattern. Let’s take a look at 2 sequential calls to foo vs 2 sequential calls to bar:

int foo();
int quux() {
  foo(); // notice %eax is always zeroed
  foo(); // notice %eax is always zeroed
  return 0;
}

quux:
  pushq %rax
  xorl %eax, %eax
  callq foo
  xorl %eax, %eax
  callq foo
  xorl %eax, %eax
  popq %rcx
  retq

int bar(void);
int quuz() {
  bar(); // notice %eax is not zeroed
  bar(); // notice %eax is not zeroed
  return 0;
}

quuz:
  pushq %rax
  callq bar
  callq bar
  xorl %eax, %eax
  popq %rcx
  retq

So it should be pretty clear now that the pattern is that f(void) does not generate the xorl %eax, %eax, while f() does. What gives? Aren’t they declaring f the same way: a function that takes no parameters? Unfortunately, in C the answer is no, and C and C++ differ here.

An explanation

f() is not necessarily “f takes no arguments” but more of “I’m not telling you what arguments f takes (but it’s not variadic).” Consider this perfectly legal C and C++ code:

int foo();
int foo(int x) { return 42; }

It seems that C++ inherited this from C, but only in C++ does f() have the semantics of “f takes no arguments,” as the previous examples no longer emit the xorl %eax, %eax. The same goes for f(void) in C or C++. That’s because foo() and foo(int) are two different functions in C++, thanks to function overloading (thanks, reddit user /u/OldWolf2). It also seems that C kept this difference for backwards compatibility with K&R C.

int bar(void);
int bar(int x) { return x + 42; }

is an error in C, but in C++, thanks to function overloading, these are two separate functions! (_Z3barv vs _Z3bari). (Thanks, HN user pdpi, for helping me understand this. Cunningham’s Law ftw.)

Needless to say, if you write code like that, where your function declarations and definitions do not match, you will be put in prison (do not pass go, do not collect $200). Control flow integrity analysis is particularly sensitive to these cases, manifesting in runtime crashes.

What could a sufficiently smart compiler do to help?

-Wall and -Wextra will just flag the unused parameter (-Wunused-parameter). We need the help of -Wmissing-prototypes to flag the mismatch between declaration and definition. (An aside: I had a hard time remembering which was the declaration and which was the definition when learning C++. The mnemonic I came up with and still use today is: think of definition as in muscle definition, where the meat of the function is. Declarations are just hot air.) It’s not until we get to -Wstrict-prototypes that we get a warning that we should use f(void). -Wstrict-prototypes is kind of a stylistic warning, so that’s why it’s not part of -Wall or -Wextra. Stylistic warnings are in bikeshed territory (*cough* -Wparentheses *cough*).

One issue with C and C++’s style of code sharing and encapsulation via headers is that declarations often aren’t enough for the powerful analysis techniques of production optimizing compilers (whether or not a pointer “escapes” is a big one that comes to mind). Let’s see if a “sufficiently smart compiler” could notice when we’ve declared f(), but via observation of the definition of f() noticed that we really only needed the semantics of f(void).

int puts(const char*);

int __attribute__((noinline)) foo2() {
  puts("hello");
  return 0;
}

int quacks() {
  foo2();
  foo2();
  return 0;
}

quacks:
  pushq %rax
  callq foo2
  callq foo2
  xorl %eax, %eax
  popq %rcx
  retq

Aha! So by having the full definition of foo2 in the same translation unit, Clang was able to deduce that foo2 didn’t actually need the semantics of f(), so it could skip the xorl %eax, %eax we’d seen for f()-style declarations earlier. If we change foo2 to a declaration (as would be the case if it were defined in an external translation unit, with its declaration included via header), then Clang can no longer observe whether foo2’s definition differs from its declaration.

So Clang can potentially save you a single instruction (xorl %eax, %eax), whose encoding is only 2B, per call to a function declared in the style f(), but only IF the definition is in the same translation unit, doesn’t differ from the declaration, and you happen to be targeting x86_64. *deflated whew* But usually it can’t, because it’s only been provided the declaration via header.

Conclusion

I certainly think f() is prettier than f(void) (so C++ got this right), but pretty code may not always be the fastest and it’s not always straightforward when to prefer one over the other.

So it seems that f() is ok for strictly C++ code. For C or mixed C and C++, f(void) might be better.

Categorieën: Mozilla-nl planet

Daniel Stenberg: tiny-curl

Mozilla planet - za, 11/05/2019 - 15:11

curl, or libcurl specifically, is probably the world’s most popular and widely used HTTP client side library counting more than six billion installs.

curl is a rock solid and feature-packed library that supports a huge amount of protocols and capabilities that surpass most competitors. But this comes at a cost: it is not the smallest library you can find.

Within a 100K

Instead of being happy with getting told that curl is “too big” for certain use cases, I set a goal for myself: make it possible to build a version of curl that can do HTTPS and fit in 100K (including the wolfSSL TLS library) on a typical 32 bit architecture.

As a comparison, the tiny-curl shared library, when built on x86-64 Linux, is less than 25% of the size of the default library shipped by Debian.

FreeRTOS

But let’s not stop there. Users with this kind of strict size requirements are rarely running a full Linux installation or similar OS. If you are sensitive about storage to the exact kilobyte level, you usually run a more slimmed down OS as well – so I decided that my initial tiny-curl effort should be done on FreeRTOS. That’s a fairly popular and free RTOS for the more resource constrained devices.

This port is still rough, and I expect us to publish follow-up releases soon that improve the FreeRTOS port and ideally also add support for other popular RTOSes. Which RTOS would you like us to support that isn’t already supported?

Offer the libcurl API for HTTPS on FreeRTOS, within 100 kilobytes.

Maintain API

I strongly believe that the power of having libcurl in your embedded devices is partly powered by the libcurl API. The API that you can use for libcurl on any platform, that’s been around for a very long time and for which you can find numerous examples for on the Internet and in libcurl’s extensive documentation. Maintaining support for the API was of the highest priority.

Patch it

My secondary goal was to keep the patch as clean as possible, so that we can upstream the changes that make sense and that aren’t disruptive to the general code base into the main curl source tree, and so that the work we can’t upstream can be rebased on top of the curl code base with as little friction as possible going forward.

Keep the HTTPS basics

I just want to do HTTPS GET

That’s the mantra here. My patch disables a lot of protocols and features:

  • No protocols except HTTP(S) are supported
  • HTTP/1 only
  • No cookie support
  • No date parsing
  • No alt-svc
  • No HTTP authentication
  • No DNS-over-HTTPS
  • No .netrc parsing
  • No HTTP multi-part formposts
  • No shuffled DNS support
  • No built-in progress meter

They’re all disabled individually, though, so it is still easy to enable one or more of them for specific builds.

Downloads and versions?

Tiny-curl 0.9 is the first shot at this and can be downloaded from wolfSSL. It is based on curl 7.64.1.

Most of the patches in tiny-curl are being upstreamed into curl in the #3844 pull request. I intend to upstream most, if not all, of the tiny-curl work over time.

License

The FreeRTOS port of tiny-curl is licensed GPLv3 and not MIT like the rest of curl. This is an experiment to see how we can do curl work like this in a sustainable way. If you want this under another license, we’re open for business over at wolfSSL!

Categorieën: Mozilla-nl planet

Mozilla deletes all telemetry data collected as part of add-ons fix - Tweakers

News collected via Google - Sat, 11/05/2019 - 09:00
Mozilla deletes all telemetry data collected as part of add-ons fix  Tweakers

Mozilla will delete all Firefox telemetry data from Saturday May 4 through Saturday May 11. The fix that, in the short term, addressed the problem ...

Categorieën: Mozilla-nl planet

The Mozilla Blog: Google’s Ad API is Better Than Facebook’s, But…

Mozilla planet - vr, 10/05/2019 - 18:00
… with a few important omissions. Google’s tool meets four of experts’ five minimum standards

 

Last month, Mozilla released an analysis of Facebook’s ad archive API, a tool that allows researchers to understand how political ads are being targeted to Facebook users. Our goal: To determine if Facebook had fulfilled its promise to make political advertising more transparent. (It did not.)

Today, we’re releasing an analysis of Google’s ad archive API. Google also promised the European Union it would release an ad transparency tool ahead of the 2019 EU Parliament elections.

Our finding: Google’s API is a lot better than Facebook’s, but is still incomplete. Google’s API meets four of experts’ five minimum standards. (Facebook met two.)

Google does much better than Facebook in providing access to the data in a format that allows for real research and analysis. That is a hugely important requirement; this is a baseline researchers need. But while the data is usable, it isn’t complete. Google doesn’t provide data on the targeting criteria advertisers use, making it more difficult to determine whom people are trying to influence or how information is really spreading across the platform.

Below are the specifics of our Google API analysis:

[1] ✅

Researchers’ guideline: A functional, open API should have comprehensive political advertising content.

Google’s API: The full list of ads, campaigns, and advertisers are available, and can be searched and filtered. The entire database can be downloaded in bulk and analyzed at scale. There are shortcomings, however: There is no data on the audience the ads reached, like their gender, age, or region. And Google has included fewer ads in their database than Facebook, perhaps due to a narrower definition of “political ads.”

[2] ❌

Researchers’ guideline: A functional, open API should provide the content of the advertisement and information about targeting criteria.

Google’s API: While Google’s API does provide the content of the advertisements, like Facebook, it provides no information on targeting criteria, nor does the API provide engagement data (e.g., clicks). Targeting and engagement data is critical for researchers because it lets them see what types of users an advertiser is trying to influence, and whether or not their attempts were successful.

[3] ✅

Researchers’ guideline: A functional, open API should have up-to-date and historical data access.

Google’s API: The API appears to be up to date.

[4] ✅

Researchers’ guideline: A functional, open API should be accessible to and shareable with the general public.

Google’s API: Public access to the API is available through the Google Cloud Public Datasets program.

[5] ✅

Researchers’ guideline: A functional, open API should empower, not limit, research and analysis.

Google’s API: The tool has components that facilitate research, like: bulk download capabilities; no problematic bandwidth limits; search filters; and unique URLs for ads.

 

Overall: While the company gets a passing grade, Google doesn’t sufficiently allow researchers to study disinformation on its platform. The company also significantly delayed the release of their API, unveiling it only weeks before the upcoming EU elections and nearly two months after the originally promised deadline.

With the EU elections fewer than two weeks away, we hope Google (and Facebook) take action swiftly to improve their ad APIs — action that should have been taken months ago.

The post Google’s Ad API is Better Than Facebook’s, But… appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet


Support.Mozilla.Org: SUMO/Firefox Accounts integration

Mozilla planet - vr, 10/05/2019 - 17:29

One of Mozilla’s goals is to deepen relationships with our users and better connect them with our products. For support, this means integrating Firefox Accounts (FxA) as the authentication layer on support.mozilla.org.

What does this mean?

Currently support.mozilla.org is using its own auth/login system where users are logging in using their username and password. We will replace this auth system with Firefox Accounts and both users and contributors will be asked to connect their existing profiles to FxA.

This will not just help align support.mozilla.org with other Mozilla products but also be a vehicle for users to discover FxA and its many benefits.

In order to achieve this we are looking at the following milestones (the dates are tentative):

Transition period (May-June)

We will start with a transition period where users can log in using both their old username/password as well as Firefox Accounts. During this period new users registering to the site will only be able to create an account through Firefox Accounts. Existing users will get a recommendation to connect their Firefox Account through their existing profile but they will still be able to use their old username/password auth method if they wish. Our intention is to have banners across the site that will let users know about the change and how to switch to Firefox Accounts. We will also send email communications to active users (logged in at least once in the last 3 years).

Switching to Firefox Accounts will also bring a small change to our AAQ (Ask a Question) flow. Currently, when users go through the Ask a Question flow they are prompted to log in/create an account in the middle of the flow (which is a bit of a frustrating experience). As we’re switching to Firefox Accounts and that login experience will no longer work, we will be moving the login/sign up step to the beginning of the flow – meaning users will have to log in first before they can go through the AAQ. During the transition period non-authenticated users will not be able to use the AAQ flow. This will get back to normal during the Soft Launch period.

Soft Launch (end of June)

After the transition period we will enter a so-called “Soft Launch” period where we integrate the full new log in/sign up experiences and do the fine tuning. By this time the AAQ flow should have been updated and non-authenticated users can use it again. We will also send more emails to active users who haven’t done the switch yet and continue having banners on the site to inform people of the change.

Full Launch (July-August)

If the testing periods above go well, we should be ready to do the full switch in July or August. This means that no old SUMO logins will be accepted and all users will be asked to switch over to Firefox Accounts. We will also do a final round of communications.

Please note: As we’re only changing the authentication mechanism we don’t expect there to be any changes to your existing profile, username and contribution history. If you do encounter an issue please reach out to Madalina or Tasos (or file a bug through Bugzilla).

We’re excited about this change, but are also aware that we might encounter a few bumps on the way. Thank you for your support in making this happen.

If you want to help out, as always you can follow our progress on Github and/or join our weekly calls.

SUMO staff team

Categorieën: Mozilla-nl planet
