
Hacks.Mozilla.Org: MDN localization in March — Tier 1 locales unfrozen, and future plans

Mozilla planet - do, 25/03/2021 - 17:05

Since we last talked about MDN localization, a lot of progress has been made. In this post we’ll talk you through the unfreezing of Tier 1 locales, and the next steps in our plans to stop displaying non-active and unmaintained locales.

Tier 1 locales unfrozen!

It has been a long time coming, but we’ve finally achieved our goal of unfreezing our Tier 1 locales. The fr, ja, ru, zh-CN, and zh-TW locales can now be edited, and we have active teams working on each of these locales. We added Russian (ru) to the list very recently, after great interest from the community helped us to rapidly assemble a team to maintain those docs — we are really excited about making progress here!

If you are interested in helping out with these locales, or asking questions, you can find all the information you need at our all-new translated-content README. This includes:

  • How to contribute
  • The policies in place to govern the work
  • Who is in the active localization teams
  • How the structure is kept in sync with the en-US version.

We’d like to thank everyone who helped us get to this stage, especially the localization team members who have stepped up to help us maintain our localized content:

Stopping the display of unmaintained locales on MDN

Previously we said that we were planning to stop the display of all locales except for en-US, and our Tier 1 locales.

We’ve revised this plan a little since then — we looked at the readership figures of each locale, as a percentage of the total MDN traffic, and decided that we should keep a few more than just the 5 we previously mentioned. Some of the viewing figures for non-active locales are quite high, so we thought it would be wise to keep them and try to encourage teams to start maintaining them.

In the end, we decided to keep the following locales:

  • en-US
  • es
  • ru (already unfrozen)
  • fr (already unfrozen)
  • zh-CN (already unfrozen)
  • ja (already unfrozen)
  • pt-BR
  • ko
  • de
  • pl
  • zh-TW (already unfrozen)

We are planning to stop displaying the other 21 locales. Many of them have very few pages, a high percentage of which are out-of-date or otherwise flawed, and we estimate that the total traffic we will lose by removing all these locales is less than 2%.

So what does this mean?

We intend to stop displaying all locales outside the top ten on a set date: April 30th.

We will remove all the source content for those locales from the translated-content repo, and put it in a new retired translated content repo, so that anyone who still wants to use this content in some way is welcome to do so. We highly respect the work that so many people have done on translating MDN content over the years, and want to preserve it in some way.

We will redirect the URLs for all removed articles to their en-US equivalents — this solves an often-mentioned issue whereby people would rather view the up-to-date English article than the low-quality or out-of-date version in their own language, but find it difficult to do so because of the way MDN works.

We are also intending to create a new tool whereby if you see a really outdated page, you can press a button saying "retire content" to open up a pull request that, when merged, will move it to the retired content repo.

After this point, we won’t revive anything — the journey to retirement is one way. This may sound harsh, but we are taking determined steps to clean up MDN and get rid of out-of-date and out-of-remit content that has been around for years in some cases.

The post MDN localization in March — Tier 1 locales unfrozen, and future plans appeared first on Mozilla Hacks - the Web developer blog.


Mozilla Addons Blog: Friend of Add-ons: Mélanie Chauvel

Mozilla planet - do, 25/03/2021 - 16:00

I’m pleased to announce our newest Friend of Add-ons, Mélanie Chauvel! After becoming interested in free and open source software in 2012, Mélanie started contributing code to Tab Center Redux, a Firefox extension that displays tabs vertically on the sidebar. When the developer stopped maintaining it, she forked a version and released it as Tab Center Reborn.

As she worked on Tab Center Reborn, Mélanie became thoroughly acquainted with the tabs API. After running into a number of issues where the API didn’t behave as expected, or didn’t provide the functionality her extension needed, she started filing bugs and proposing new features for the WebExtensions API.

Changing code in Firefox can be scary to new contributors because of the size and complexity of the codebase. As she started looking into her pain points, Mélanie realized that she could make some of the changes she wanted to see. “WebExtensions APIs are implemented in JavaScript and are relatively isolated from the rest of the codebase,” she says. “I saw that I could fix some of the issues that bothered me and took a stab at it.”

Mélanie added two new APIs: sidebarAction.toggle, which can toggle the visibility of the sidebar if it belongs to an extension, and tabs.warmup, which can reduce the amount of time it takes for an inactive tab to load. She also made several improvements to the tabs.duplicate API. Thanks to her contributions, new duplicated tabs are activated as soon as they are opened, extensions can choose where a duplicate tab should be opened, and duplicating a pinned tab no longer causes unexpected visual glitches.

Mélanie is also excited to see and help others contribute to open source projects. One of her most meaningful experiences at Mozilla has been filing an issue and seeing a new contributor fix it a few weeks later. “It made me happy to be part of the path of someone else contributing to important projects like Firefox. We often feel powerless in our lives, and I’m glad I was able to help others participate in something bigger than them,” Mélanie says.

These days, Mélanie is working on translating Tab Center Reborn into French and Esperanto and contributing code to other open-source projects including Mastodon, Tusky, Rust, Exa, and KDE. She also enjoys playing puzzle games, exploring vegan cooking and baking, and watching TV shows and movies with friends.

Thank you for all of your contributions, Mélanie! If you’re a fan of Mélanie’s work and wish to offer support, you can buy her a coffee or contribute on Liberapay.

If you are interested in contributing to the add-ons ecosystem, please visit our Contribution wiki.

The post Friend of Add-ons: Mélanie Chauvel appeared first on Mozilla Add-ons Blog.


Mozilla Thunderbird: Mailfence Encrypted Email Suite in Thunderbird

Mozilla planet - do, 25/03/2021 - 12:33
Mailfence Encrypted Email Suite in Thunderbird

Today, the Thunderbird team is happy to announce that we have partnered with Mailfence to offer their encrypted email service in Thunderbird’s account setup. To check this out, you click on “Get a new email address…” when you are setting up an account. We are excited that those using Thunderbird will have this easily accessible option to get a new email address from a privacy-focused provider with just a few clicks.

Why partner with Mailfence?

It comes down to two important shared values: a commitment to privacy and open standards. Mailfence has built a private and secure email experience, whilst using open standards that ensure its users can use clients like Thunderbird with no extra hoops to jump through – which respects their freedom. Also, Mailfence has been doing this for longer than most providers have been around and this shows real commitment to their cause.

We’ve known we wanted to work with the Mailfence team for well over a year, and this is just the beginning of our collaboration. We’ve made it easy to get an email address from Mailfence, and their team has created many great guides on how to get the most out of their service in Thunderbird. The goal is that, in the near future, Mailfence users will benefit from the automatic sync of their contacts and calendars – as well as their email.

Why is this important?

If we’ve learned anything about the tech landscape these last few years it’s that big tech doesn’t always have your best interests in mind. Big tech has based its business model on the harvesting and exploitation of data. Your data that the companies gobble up is used for discrimination and manipulation – not to mention the damage done when this data is sold to or stolen by really bad actors.

We wanted to give our users an alternative, and we want to continue to show our users that you can communicate online and leverage the power of the Internet without giving up your right to privacy. Mailfence is a great service that we want to share with our community and users, to show there are good options out there.

Patrick De-Schutter, Co-Founder of Mailfence, makes an excellent case for why this partnership is important:

“Thunderbird’s mission and values completely align with ours. We live in times of ever growing Internet domination by big tech companies. These have repeatedly shown a total disrespect of online privacy and oblige their users to sign away their privacy through unreadable Terms of Service. We believe this is wrong and dangerous. Privacy is a fundamental human right. With this partnership, we create a user-friendly privacy-respecting alternative to the Big Tech offerings that are centered around the commodification of personal data.”

How to try out Mailfence

If you want to give Mailfence a try right now (and are already using Thunderbird), just open Thunderbird account settings, click “Account Actions” and then “Add Mail Account”; there you will see the option to “Get a new email address”. Select Mailfence as your provider and choose your desired username, and you will be prompted to set up your account. Once you have done this, your account will be set up in Thunderbird and you will be able to start your Mailfence trial.

It is our sincere hope that our users will give Mailfence a try because using services that respect your freedom and privacy is better for you, and better for society at large. We look forward to deepening our relationship with Mailfence and working hand-in-hand with them to improve the Thunderbird experience for those using their service.

We’ll share more about our partnership with Mailfence, as well as our other efforts to promote privacy and open standards as the year progresses. We’re so grateful to get to work with great people who share our values, and to then share that work with the world.


Niko Matsakis: Async Vision Doc Writing Sessions II

Mozilla planet - do, 25/03/2021 - 05:00

I’m scheduling two more public drafting sessions for tomorrow, March 26th:

If you’re available and have interest in one of those issues, please join us! Just ping me on Discord or Zulip and I’ll send you the Zoom link.

I also plan to schedule more sessions next week, so stay tuned!

The vision…what?

Never heard of the async vision doc? It’s a new thing we’re trying as part of the Async Foundations Working Group:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

Read the full blog post for more.


The Rust Programming Language Blog: Announcing Rust 1.51.0

Mozilla planet - do, 25/03/2021 - 01:00

The Rust team is happy to announce a new version of Rust, 1.51.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.51.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.51.0 on GitHub.

What's in 1.51.0 stable

This release represents one of the largest additions to the Rust language and Cargo in quite a while, stabilizing an MVP of const generics and a new feature resolver for Cargo. Let's dive right into it!

Const Generics MVP

Before this release, Rust allowed you to have your types be parameterized over lifetimes or types. For example if we wanted to have a struct that is generic over the element type of an array, we'd write the following:

struct FixedArray<T> {
// ^^^ Type generic definition
    list: [T; 32]
//        ^ Where we're using it.
}

If we then use FixedArray<u8>, the compiler will make a monomorphic version of FixedArray that looks like:

struct FixedArray<u8> { list: [u8; 32] }

This is a powerful feature that allows you to write reusable code with no runtime overhead. However, until this release it hasn't been possible to easily be generic over the values of those types. This was most notable in arrays which include their length in their type definition ([T; N]), which previously you could not be generic over. Now with 1.51.0 you can write code that is generic over the values of any integer, bool, or char type! (Using struct or enum values is still unstable.)

This change now lets us have our own array struct that's generic over its type and its length. Let's look at an example definition, and how it can be used.

struct Array<T, const LENGTH: usize> {
//              ^^^^^^^^^^^^^^^^^^^ Const generic definition.
    list: [T; LENGTH]
//        ^^^^^^ We use it here.
}

Now if we then used Array<u8, 32>, the compiler will make a monomorphic version of Array that looks like:

struct Array<u8, 32> { list: [u8; 32] }

Const generics adds an important new tool for library designers in creating new, powerful compile-time safe APIs. If you'd like to learn more about const generics you can also check out the "Const Generics MVP Hits Beta" blog post for more information about the feature and its current restrictions. We can't wait to see what new libraries and APIs you create!

array::IntoIter Stabilisation

As part of const generics stabilising, we're also stabilising a new API that uses it, std::array::IntoIter. IntoIter allows you to create a by value iterator over any array. Previously there wasn't a convenient way to iterate over owned values of an array, only references to them.

fn main() {
    let array = [1, 2, 3, 4, 5];

    // Previously
    for item in array.iter().copied() {
        println!("{}", item);
    }

    // Now
    for item in std::array::IntoIter::new(array) {
        println!("{}", item);
    }
}

Note that this is added as a separate method instead of .into_iter() on arrays, as that currently introduces some amount of breakage; currently .into_iter() refers to the slice by-reference iterator. We're exploring ways to make this more ergonomic in the future.

Cargo's New Feature Resolver

Dependency management is a hard problem, and one of the hardest parts of it is just picking what version of a dependency to use when it's depended on by two different packages. This doesn't just include its version number, but also what features are or aren't enabled for the package. Cargo's default behaviour is to merge features for a single package when it's referred to multiple times in the dependency graph.

For example, let's say you had a dependency called foo with features A and B, which was being used by packages bar and baz, but bar depends on foo+A and baz depends on foo+B. Cargo will merge both of those features and compile foo as foo+AB. This has a benefit that you only have to compile foo once, and then it can be reused for both bar and baz.

However, this also comes with a downside. What if a feature enabled in a build-dependency is not compatible with the target you are building for?

A common example of this in the ecosystem is the optional std feature included in many #![no_std] crates, that allows crates to provide added functionality when std is available. Now imagine you want to use the #![no_std] version of foo in your #![no_std] binary, and use the foo at build time in your build.rs. If your build time dependency depends on foo+std, your binary now also depends on foo+std, which means it will no longer compile because std is not available for your target platform.
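As a rough sketch of that situation (the crate name foo and the feature names are made up for illustration, they are not from a real project), the manifest could look like this:

[package]
name = "my-no-std-binary"
version = "0.1.0"

[dependencies]
# The binary itself wants the #![no_std] flavour of foo.
foo = { version = "1", default-features = false }

[build-dependencies]
# build.rs wants foo with std enabled; with the old resolver the two
# feature sets are merged, so the binary also ends up with foo+std.
foo = { version = "1", features = ["std"] }

With the new resolver described below, the build-dependency's std feature no longer leaks into the normal dependency.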

This has been a long-standing issue in cargo, and with this release there's a new resolver option in your Cargo.toml, where you can set resolver="2" to tell cargo to try a new approach to resolving features. You can check out RFC 2957 for a detailed description of the behaviour, which can be summarised as follows.

  • Dev dependencies — When a package is shared as a normal dependency and a dev-dependency, the dev-dependency features are only enabled if the current build is including dev-dependencies.
  • Host Dependencies — When a package is shared as a normal dependency and a build-dependency or proc-macro, the features for the normal dependency are kept independent of the build-dependency or proc-macro.
  • Target dependencies — When a package appears multiple times in the build graph, and one of those instances is a target-specific dependency, then the features of the target-specific dependency are only enabled if the target is currently being built.

While this can lead to some crates compiling more than once, this should provide a much more intuitive development experience when using features with cargo. If you'd like to know more, you can also read the "Feature Resolver" section in the Cargo Book for more information. We'd like to thank the cargo team and everyone involved for all their hard work in designing and implementing the new resolver!

[package]
resolver = "2"

# Or if you're using a workspace
[workspace]
resolver = "2"

Splitting Debug Information

While not often highlighted in the release, the Rust teams are constantly working on improving Rust's compile times, and this release marks one of the largest improvements in a long time for Rust on macOS. Debug information maps the binary code back to your source code, so that the program can give you more information about what went wrong at runtime. In macOS, debug info was previously collected into a single .dSYM folder using a tool called dsymutil, which can take some time and use up quite a bit of disk space.

Collecting all of the debuginfo into this directory helps in finding it at runtime, particularly if the binary is being moved. However, it does have the drawback that even when you make a small change to your program, dsymutil will need to run over the entire final binary to produce the final .dSYM folder. This can sometimes add a lot to the build time, especially for larger projects, as all dependencies always get recollected, but this has been a necessary step as without it Rust's standard library didn't know how to load the debug info on macOS.

Recently, Rust backtraces switched to using a different backend which supports loading debuginfo without needing to run dsymutil, and we've stabilized support for skipping the dsymutil run. This can significantly speed up builds that include debuginfo and significantly reduce the amount of disk space used. We haven't run extensive benchmarks, but have seen a lot of reports of people's builds being a lot faster on macOS with this behavior.

You can enable this new behaviour by setting the -Csplit-debuginfo=unpacked flag when running rustc, or by setting the split-debuginfo [profile] option to unpacked in Cargo. The "unpacked" option instructs rustc to leave the .o object files in the build output directory instead of deleting them, and skips the step of running dsymutil. Rust's backtrace support is smart enough to know how to find these .o files. Tools such as lldb also know how to do this. This should work as long as you don't need to move the binary to a different location while retaining the debug information.

[profile.dev]
split-debuginfo = "unpacked"

Stabilized APIs

In total, this release saw the stabilisation of 18 new methods for various types like slice and Peekable. One notable addition is the stabilisation of ptr::addr_of! and ptr::addr_of_mut!, which allow you to create raw pointers to unaligned fields. Previously this wasn't possible because Rust requires &/&mut to be aligned and point to initialized data, so taking a reference to an unaligned field (e.g. &packed.f2 as *const _) would cause undefined behaviour. These two macros now let you safely create unaligned pointers.

use std::ptr;

#[repr(packed)]
struct Packed {
    f1: u8,
    f2: u16,
}

let packed = Packed { f1: 1, f2: 2 };

// `&packed.f2` would create an unaligned reference, and thus be Undefined Behavior!
let raw_f2 = ptr::addr_of!(packed.f2);

assert_eq!(unsafe { raw_f2.read_unaligned() }, 2);

The following methods were stabilised.

Other changes

There are other changes in the Rust 1.51.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.51.0

Many people came together to create Rust 1.51.0. We couldn't have done it without all of you. Thanks!


The Firefox Frontier: How two women are taking on the digital ad industry one brand at a time

Mozilla planet - wo, 24/03/2021 - 22:31

In the fall of 2016, Nandini Jammi co-founded Sleeping Giants to expose to brands how their digital advertisements were showing up on websites that they didn’t intend their marketing efforts …

The post How two women are taking on the digital ad industry one brand at a time appeared first on The Firefox Frontier.


The Firefox Frontier: Mozilla Explains: What is an IP address?

Mozilla planet - wo, 24/03/2021 - 19:34

Every time you are on the internet, IP addresses play an essential role in the information exchange that helps you see the sites you are requesting. Yet, there is …

The post Mozilla Explains: What is an IP address? appeared first on The Firefox Frontier.


Support.Mozilla.Org: Play Store Support Program Updates

Mozilla planet - wo, 24/03/2021 - 15:32

TL;DR: By the end of March, 2021, the Play Store Support program will be moving from the Respond Tool to Conversocial. If you want to keep helping Firefox for Android users by responding to their reviews in the Google Play Store, please fill out this form to request a Conversocial account. You can learn more about the program here.

 

In late August last year, to support the transition of Firefox for Android from the old engine (fennec) to the new one (fenix), we officially introduced a tool that we built in-house, called the Respond Tool, to support the Play Store Support campaign. The Respond Tool lets contributors and staff provide answers to reviews under 3 stars on the Google Play Store. That program was known as Play Store Support.

We learned a lot from the campaign and identified a number of improvements to functionality and user experience that were necessary. In the end, we decided to migrate the program from the Respond Tool to Conversocial, a third-party tool that we are already using with our community to support users on Twitter. This change will enable us to:

  • Segment reviews and set priorities.
  • Filter out reviews with profanity.
  • See when users change their ratings.
  • Track trends with a powerful reporting dashboard.
  • Save costs and engineering resources.

As a consequence of this change, we’re going to decommission the Respond Tool by March 31, 2021. You’re encouraged to request an account in Conversocial if you want to keep supporting Firefox for Android users. You can read more about the decommission plan in the Contributor Forum.

We have also updated the guidelines to reflect this change that you can learn more from the following article: Getting started with Play Store Support.

This will not be possible without your help

All of this would not have been possible without contributors like you, who have been helping us provide great support for Firefox for Android users through the Respond Tool. From the Play Store Support campaign last year until today, 99 contributors have helped reply to a total of 14,484 reviews on the Google Play Store.

I’d like to extend my gratitude to Paul W, Christophe V, Andrew Truong, Danny Colin, and Ankit Kumar who have been very supportive and accommodating by giving us feedback throughout the transition process.

We’re excited about this change and hope that you can help us to spread the word and share this announcement to your fellow contributors.

Let’s keep on rocking the helpful web!

 

On behalf of the SUMO team,

Kiki


Giorgio Maone: Welcome SmartBlock: Script Surrogates for the masses!

Mozilla planet - di, 23/03/2021 - 19:29

Today Mozilla released Firefox 87, introducing SmartBlock, a new feature which "intelligently fixes up web pages that are broken by our tracking protections, without compromising user privacy [...] by providing local stand-ins for blocked third-party tracking scripts. These stand-in scripts behave just enough like the original ones to make sure that the website works properly. They allow broken sites relying on the original scripts to load with their functionality intact."

As long time NoScript users may recall, this is exactly the concept behind "Script Surrogates", which I developed more than ten years ago as a NoScript "Classic" module.

In fact, in its launch post Mozilla kindly wants "to acknowledge the NoScript and uBlock Origin teams for helping to pioneer this approach."

It's not the first time that concepts pioneered by NoScript percolate into mainstream browsers: from content blocking to XSS filters, I must admit it gets me emotional every time :)

Script Surrogates unfortunately could not be initially ported to NoScript Quantum, due to the radically different browser extensions technology it was forced into. Since then, many people using NoScript and other content blockers have been repeatedly asking for this feature to come back because it "fixed" many sites without requiring unwanted scripts (such as Google Analytics, for instance) to be enabled or ad-blocking / anti-tracking extensions to be disabled.

Script Surrogates were significantly more powerful, flexible and user-hackable than SmartBlock, and I find myself missing them in several circumstances.

I'm actually planning (i.e. trying to secure time and funds) to bring back Script Surrogates as a stand-alone extension for Firefox-based and Chromium-based browsers, both on desktop and mobile devices. This tool would complement and enhance the whole class of content blockers (including but not limited to NoScript), without requiring the specific installation of NoScript itself. Furthermore, its core functionality (on-demand script injection/replacement, native object wrapping/emulation...) would be implemented as NoScript Commons Library modules, ready to be reused by other browser extensions, as is already happening with FSF's in-progress project JS-Shield.

In the meantime, we can all enjoy Script Surrogates' "light", mainstream young sibling, built into Firefox (and therefore coming soon to the Tor Browser too). Yay Mozilla!


Daniel Stenberg: Github steel

Mozilla planet - di, 23/03/2021 - 17:02

I honestly don’t know what particular thing I did to get this, but GitHub gave me a 3D-printed steel version of my 2020 GitHub contribution “matrix”. You know that thing on your GitHub profile that normally looks something like this:

The gift package included this friendly note:

Hi @bagder,

As we welcome 2021, we want to thank and congratulate you on what you brought to 2020. Amidst the year’s challenges, you found time to continue giving back and contributing to the community.

Your hard work, care, and attention haven’t gone unnoticed.

Enclosed is your 2020 GitHub contribution graph, 3D printed in steel. You can also view it by pointing your browser to https://github.co/skyline. It tells a personal story only you can truly interpret.

Please accept this small gift as a token of appreciation on behalf of all of us here at GitHub, and everyone who benefits from your work.

Thank you and all the best for the year ahead!

With <3, from GitHub

I think I’ll put it under one of my screens here on my desk for now. The size is 145 mm x 30 mm x 30 mm. 438 grams.

Thanks GitHub!

Update: the print is done by shapeways.com


Hacks.Mozilla.Org: In March, we see Firefox 87

Mozilla planet - di, 23/03/2021 - 16:56

Nearing the end of March now, and we have a new version of Firefox ready to deliver some interesting new features to your door. This month, we’ve got some rather nice DevTools additions in the form of prefers-color-scheme media query emulation and toggling :target pseudo-classes, some very useful additions to editable DOM elements: the beforeinput event and getTargetRanges() method, and some nice security, privacy, and macOS screenreader support updates.

This blog post provides merely a set of highlights; for all the details, check out the following:

Developer tools

In developer tools this time around, we’ve first of all updated the Page Inspector to allow simulation of prefers-color-scheme media queries, without having to change the operating system to trigger light or dark mode.

Open the DevTools, and you’ll see a new set of buttons in the top right corner:

Two buttons marked with sun and moon icons

When pressed, these enable the light and dark preference, respectively. Selecting either button deselects the other. If neither button is selected then the simulator does not set a preference, and the browser renders using the default feature value set by the operating system.

And another nice addition to mention is that the Page Inspector’s CSS pane can now be used to toggle the :target pseudo-class for the currently selected element, in addition to a number of others that were already available (:hover, :active, etc.)

Firefox devtools CSS rules pane, showing a body selector with a number of following declarations, and a bar up the top with several pseudo classes written inside it

Find more out about this at Viewing common pseudo-classes.

Better control over user input: beforeinput and getTargetRanges()

The beforeinput event and getTargetRanges() method are now enabled by default. They allow web apps to override text edit behavior before the browser modifies the DOM tree, providing more control over text input to improve performance.

The global beforeinput event is sent to an <input> element — or any element whose contenteditable attribute is set to true — immediately before the element’s value changes. The getTargetRanges() method of the InputEvent interface returns an array of static ranges that will be affected by a change to the DOM if the input event is not canceled.

As an example, say we have a simple comment system where users are able to edit their comments live using a contenteditable container, but we don’t want them to edit the commenter’s name or other valuable meta data? Some sample markup might look like so:

<p contenteditable>
  <span>Mr Bungle:</span>
  This is my comment; isn't it good!
  <em>-- 09/16/21, 09.24</em>
</p>

Using beforeinput and getTargetRanges(), this is now really simple:

const editable = document.querySelector('[contenteditable]');

editable.addEventListener('beforeinput', e => {
  const targetRanges = e.getTargetRanges();
  if (targetRanges[0].startContainer.parentElement.tagName === 'SPAN' ||
      targetRanges[0].startContainer.parentElement.tagName === 'EM') {
    e.preventDefault();
  }
});

Here we respond to the beforeinput event so that each time a change to the text is attempted, we get the target range that would be affected by the change, find out if it is inside a <span> or <em> element, and if so, run preventDefault() to stop the edit happening. Voila — non-editable text regions inside editable text. Granted, this could be handled in other ways, but think beyond this trivial example — there is a lot of power to unlock here in terms of the control you’ve now got over text input.

Security and privacy

Firefox 87 sees some valuable security and privacy changes.

Referrer-Policy changes

First of all, the default Referrer-Policy has been changed to strict-origin-when-cross-origin (from no-referrer-when-downgrade), reducing the risk of leaking sensitive information in cross-origin requests. Essentially this means that by default, path and query string information are no longer included in HTTP Referrers.
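As a hypothetical illustration (example.com is a made-up site), a cross-origin request issued from https://example.com/account/orders?id=123 carries a different Referer header under the new default:

Referer: https://example.com/account/orders?id=123    (old default: no-referrer-when-downgrade)
Referer: https://example.com/                          (new default: strict-origin-when-cross-origin)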

You can find out more about this change at Firefox 87 trims HTTP Referrers by default to protect user privacy.

SmartBlock

We also wanted to bring our new SmartBlock feature to the attention of our readers. SmartBlock provides stand-ins for tracking scripts blocked by Firefox (e.g. when in private browsing mode), getting round the often-experienced problem of sites failing to load or not working properly when those tracking scripts are blocked and therefore not present.

The provided stand-in scripts behave close enough to the original ones that they allow sites that rely on them to load and behave normally. And best of all, these stand-ins are bundled with Firefox. No communication needs to happen with the third-party at all, so the potential for any tracking to occur is greatly diminished, and the affected sites may even load quicker than before.

Learn more about SmartBlock at Introducing SmartBlock

VoiceOver support on macOS

Firefox 87 sees us shipping our VoiceOver screen reader support on macOS! No longer will you have to switch over to Chrome or Safari to do significant parts of your accessibility testing.

Check it out now, and let us know what you think.

The post In March, we see Firefox 87 appeared first on Mozilla Hacks - the Web developer blog.


Andrew Halberstadt: Understanding Mach Try

Mozilla planet - di, 23/03/2021 - 15:51

There is a lot of confusion around mach try. People frequently ask “How do I get task X in mach try fuzzy?” or “How can I avoid getting backed out?”. This post is not so much a tip as an explanation of how mach try works and its relationship to the CI system (taskgraph). Armed with this knowledge, I hope you’ll be able to use mach try a little more effectively.


Mozilla Accessibility: VoiceOver Support for macOS in Firefox 87

Mozilla planet - di, 23/03/2021 - 14:35

Screen readers, an assistive technology that allows people to engage with computers through synthesized speech or a braille display, are available on all of the platforms where Firefox runs. However, until today we’ve had a gap in our support for this important technology. Firefox for Windows, Linux, Android, and iOS all work with the popular and included screen readers on those platforms, but macOS screen reader support has been absent.

For over a year the Firefox accessibility team has worked to bring high quality VoiceOver support to Firefox on macOS. Last August we delivered a developer preview of Firefox working with VoiceOver and in December we expanded that preview to all Firefox consumers. With Firefox 87, we think it’s complete enough for everyday use. Firefox 87 supports all the most common VoiceOver features and with plenty of performance. Users should be able to easily navigate through web content and all of the browser’s primary interface without problems.

If you’re a Mac user, and you rely on a screen reader, now’s the time to give Firefox another try. We think you’ll enjoy the experience and look forward to your feedback. You can learn more about Firefox 87 and download a copy at the Firefox release notes.

The post VoiceOver Support for macOS in Firefox 87 appeared first on Mozilla Accessibility.


Mozilla Security Blog: Firefox 87 introduces SmartBlock for Private Browsing

Mozilla planet - di, 23/03/2021 - 13:55

Today, with the launch of Firefox 87, we are excited to introduce SmartBlock, a new intelligent tracker blocking mechanism for Firefox Private Browsing and Strict Mode. SmartBlock ensures that strong privacy protections in Firefox are accompanied by a great web browsing experience.

Privacy is hard

At Mozilla, we believe that privacy is a fundamental right and that everyone deserves to have their privacy protected while they browse the web. Since 2015, as part of the effort to provide a strong privacy option, Firefox has included the built-in Content Blocking feature that operates in Private Browsing windows and Strict Tracking Protection Mode. This feature automatically blocks third-party scripts, images, and other content from being loaded from cross-site tracking companies reported by Disconnect. By blocking these tracking components, Firefox Private Browsing windows prevent them from watching you as you browse.

In building these extra-strong privacy protections in Private Browsing windows and Strict Mode, we have been confronted with a fundamental problem: introducing a policy that outright blocks trackers on the web inevitably risks blocking components that are essential for some websites to function properly. This can result in images not appearing, features not working, poor performance, or even the entire page not loading at all.

New Feature: SmartBlock

To reduce this breakage, Firefox 87 is now introducing a new privacy feature we are calling SmartBlock. SmartBlock intelligently fixes up web pages that are broken by our tracking protections, without compromising user privacy.

SmartBlock does this by providing local stand-ins for blocked third-party tracking scripts. These stand-in scripts behave just enough like the original ones to make sure that the website works properly. They allow broken sites relying on the original scripts to load with their functionality intact.

The SmartBlock stand-ins are bundled with Firefox: no actual third-party content from the trackers is loaded at all, so there is no chance for them to track you this way. And, of course, the stand-ins themselves do not contain any code that would support tracking functionality.
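Firefox's actual stand-in scripts are not reproduced here, but conceptually a stand-in for a blocked analytics script might look like this minimal sketch (the ga global is just an assumed example of something site code expects to exist):

// Conceptual sketch only; not Firefox's actual stand-in code.
(function () {
  // Define the global the site expects so its code doesn't throw,
  // but never contact the third-party server.
  window.ga = window.ga || function () {
    // Calls like ga('send', 'pageview') succeed silently.
  };
  window.ga.q = window.ga.q || [];
  window.ga.loaded = true; // some sites poll this before continuing
})();

The page sees an object that behaves just enough like the real script to keep running, while no request ever leaves the browser.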

In Firefox 87, SmartBlock will silently stand in for a number of common scripts classified as trackers on the Disconnect Tracking Protection List. Here’s an example of a performance improvement:


An example of SmartBlock in action. Previously (left), the website tiny.cloud had poor loading performance in Private Browsing windows in Firefox because of an incompatibility with strong Tracking Protection. With SmartBlock (right), the website loads properly again, while you are still fully protected from trackers found on the page.

We believe the SmartBlock approach provides the best of both worlds: strong protection of your privacy with a great browsing experience as well.

These new protections in Firefox 87 are just the start! Stay tuned for more SmartBlock innovations in upcoming versions of Firefox.

The team

This work was carried out in a collaboration between the Firefox webcompat and anti-tracking teams, including Thomas Wisniewski, Paul Zühlcke and Dimi Lee with support from many Mozillians including Johann Hofmann, Rob Wu, Wennie Leung, Mikal Lewis, Tim Huang, Ethan Tseng, Selena Deckelmann, Prangya Basu, Arturo Marmol, Tanvi Vyas, Karl Dubost, Oana Arbuzov, Sergiu Logigan, Cipriani Ciocan, Mike Taylor, Arthur Edelstein, and Steven Englehardt.

We also want to acknowledge the NoScript and uBlock Origin teams for helping to pioneer this approach.

 

The post Firefox 87 introduces SmartBlock for Private Browsing appeared first on Mozilla Security Blog.


About:Community: Contributors To Firefox 87

Mozilla planet - ma, 22/03/2021 - 23:25

With the release of Firefox 87 we are delighted to introduce the contributors who’ve shipped their first code changes to Firefox in this release, all of whom were brand new volunteers! Please join us in thanking each of these diligent, committed individuals, and take a look at their contributions:


Hacks.Mozilla.Org: How MDN’s site-search works

Mozilla planet - ma, 22/03/2021 - 18:02

tl;dr: Periodically, the whole of MDN is built, by our Node code, in a GitHub Action. A Python script bulk-publishes this to Elasticsearch. Our Django server queries the same Elasticsearch via /api/v1/search. The site-search page is a static single-page app that sends XHR requests to the /api/v1/search endpoint. Search results’ sort-order is determined by match and “popularity”.

Jamstack’ing

The challenge with “Jamstack” websites is data that is so vast and dynamic that it doesn’t make sense to build it statically. Search is one of those things. For the record, as of Feb 2021, MDN consists of 11,619 documents (aka. articles) in English, plus roughly another 40,000 translated documents. In English alone, there are 5.3 million words. So to build a good search experience we need to, as a side-effect of the static site build, index all of this in a full-text search database. Elasticsearch is one such database, and it’s good. In particular, Elasticsearch is something MDN is already quite familiar with, because it’s what was used from within the Django app when MDN was a wiki.

Note: MDN gets about 20k site-searches per day from within the site.

Build

When we build the whole site, it’s a script that basically loops over all the raw content, applies macros and fixes, dumps one index.html (via React server-side rendering) and one index.json. The index.json contains all the fully rendered text (as HTML!) in blocks of “prose”. It looks something like this:

{
  "doc": {
    "title": "DOCUMENT TITLE",
    "summary": "DOCUMENT SUMMARY",
    "body": [
      {
        "type": "prose",
        "value": {
          "id": "introduction",
          "title": "INTRODUCTION",
          "content": "<p>FIRST BLOCK OF TEXTS</p>"
        }
      },
      ...
    ],
    "popularity": 0.12345,
    ...
  }
}

You can see one here: /en-US/docs/Web/index.json

Indexing

Next, after all the index.json files have been produced, a Python script takes over: it traverses all the index.json files and, based on that structure, figures out the title, summary, and the whole body (as HTML).

Next up, before sending this into the bulk-publisher in Elasticsearch it strips the HTML. It’s a bit more than just turning <p>Some <em>cool</em> text.</p> to Some cool text. because it also cleans up things like <div class="hidden"> and certain <div class="notecard warning"> blocks.
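The real cleanup lives in the Python indexing script; purely to illustrate the idea, here is a minimal DOM-based sketch of the same kind of stripping (the function name and selectors mirror the classes mentioned above, not the actual implementation):

// Illustration only; MDN's actual stripping is done in Python during indexing.
function htmlToIndexableText(html) {
  const doc = new DOMParser().parseFromString(html, "text/html");
  // Remove blocks that shouldn't end up in the search index.
  for (const el of doc.querySelectorAll("div.hidden, div.notecard.warning")) {
    el.remove();
  }
  // "<p>Some <em>cool</em> text.</p>" becomes "Some cool text."
  return doc.body.textContent.replace(/\s+/g, " ").trim();
}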

One thing worth noting is that this whole thing runs roughly every 24 hours and then it builds everything. But what if, between two runs, a certain page has been removed (or moved), how do you remove what was previously added to Elasticsearch? The solution is simple: it deletes and re-creates the index from scratch every day. The whole bulk-publish takes a while so right after the index has been deleted, the searches won’t be that great. Someone could be unlucky in that they’re searching MDN a couple of seconds after the index was deleted and now waiting for it to build up again.
It’s an unfortunate reality, but it’s a risk worth taking for the sake of simplicity. Also, most people are searching for things in English, and specifically within the Web/ tree, so the bulk-publishing is done in such a way that the most popular content is published first and the rest afterwards. Here’s what the build output logs:

Found 50,461 (potential) documents to index
Deleting any possible existing index and creating a new one called mdn_docs
Took 3m 35s to index 50,362 documents. Approximately 234.1 docs/second
Counts per priority prefixes:
    en-us/docs/web    9,056
    *rest*           41,306

So, yes, for 3m 35s there’s stuff missing from the index and some unlucky few will get fewer search results than they should. But we can optimize this in the future.

Searching

The way you connect to Elasticsearch is simply by a URL it looks something like this:

https://USER:PASSWD@HASH.us-west-2.aws.found.io:9243

It’s an Elasticsearch cluster managed by Elastic running inside AWS. Our job is to make sure that we put the exact same URL in our GitHub Action (“the writer”) as we put it into our Django server (“the reader”).
In fact, we have 3 Elastic clusters: Prod, Stage, Dev.
And we have 2 Django servers: Prod, Stage.
So we just need to carefully make sure the secrets are set correctly to match the right environment.

Now, in the Django server, we just need to convert a request like GET /api/v1/search?q=foo&locale=fr (for example) to a query to send to Elasticsearch. We have a simple Django view function that validates the query string parameters, does some rate-limiting, creates a query (using elasticsearch-dsl) and packages the Elasticsearch results back to JSON.

How we make that query is important. In here lies the most important feature of the search; how it sorts results.

In one simple explanation, the sort order is a combination of popularity and “matchness”. The assumption is that most people want the popular content. I.e. they search for foreach and mean to go to /en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach, not /en-US/docs/Web/API/NodeList/forEach, both of which contain forEach in the title. The “popularity” is based on Google Analytics pageviews, which we download periodically and normalize into a floating-point number between 0 and 1. At the time of writing the scoring function does something like this:

rank = doc.popularity * 10 + search.score
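For example (with made-up numbers): a popular page with popularity 0.9 and a match score of 4 gets rank 0.9 × 10 + 4 = 13, while an obscure page with popularity 0.05 and a better match score of 9 only gets 0.05 × 10 + 9 = 9.5, so the popular page still wins.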

This seems to produce pretty reasonable results.

But there’s more to the “matchness” too. Elasticsearch has its own API for defining boosting, and the way we apply it is:

  • match phrase in the title: Boost = 10.0
  • match phrase in the body: Boost = 5.0
  • match in title: Boost = 2.0
  • match in body: Boost = 1.0

This is then applied on top of whatever else Elasticsearch does, such as “Term Frequency” and “Inverse Document Frequency” (tf and idf). This article is a helpful introduction.
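Conceptually, the boosts correspond to an Elasticsearch bool query along these lines (a simplified sketch, not the exact query the Django view builds with elasticsearch-dsl); searching for "foreach" would look something like:

{
  "query": {
    "bool": {
      "should": [
        { "match_phrase": { "title": { "query": "foreach", "boost": 10.0 } } },
        { "match_phrase": { "body":  { "query": "foreach", "boost": 5.0 } } },
        { "match": { "title": { "query": "foreach", "boost": 2.0 } } },
        { "match": { "body":  { "query": "foreach", "boost": 1.0 } } }
      ]
    }
  }
}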

We’re most likely not done with this. There’s probably a lot more we can do to tune this myriad of knobs and sliders to get the best possible ranking of documents that match.

Web UI

The last piece of the puzzle is how we display all of this to the user. The way it works is that developer.mozilla.org/$locale/search returns a static page that is blank. As soon as the page has loaded, it lazy-loads JavaScript that can actually issue the XHR request to get and display search results. The code looks something like this:

function SearchResults() {
  const [searchParams] = useSearchParams();
  const sp = createSearchParams(searchParams);
  // add defaults and stuff here
  const fetchURL = `/api/v1/search?${sp.toString()}`;
  const { data, error } = useSWR(fetchURL, async (url) => {
    const response = await fetch(url);
    // various checks on the response.statusCode here
    return await response.json();
  });
  // render 'data' or 'error' accordingly here
}

A lot of interesting details are omitted from this code snippet. You have to check it out for yourself to get a more up-to-date insight into how it actually works. But basically, the window.location (and pushState) query string drives the fetch() call and then all the component has to do is display the search results with some highlighting.

The /api/v1/search endpoint also runs a suggestion query as part of the main search query. This extracts interesting alternative search queries. These are filtered and scored, and we issue “sub-queries” just to get a count for each. Now we can do one of those “Did you mean…” suggestions. For example: search for intersections.

In conclusion

There are a lot of interesting, important, and careful details that are glossed over here in this blog post. It’s a constantly evolving system and we’re constantly trying to improve and perfect the system in a way that it fits what users expect.

A lot of people reach MDN via a Google search (e.g. mdn array foreach), but despite that, nearly 5% of all traffic on MDN is the site-search functionality. The /$locale/search?... endpoint is the most frequently viewed page of all of MDN, so having a good, reliable search engine is important. Owning and controlling the whole pipeline allows us to do specific things that are unique to MDN and that other websites don’t need. For example, we index a lot of raw HTML (e.g. <video>) and we have code snippets that need to be searchable.

Hopefully, the MDN site-search will go from being known as very limited to something that can genuinely help people get to the exact page better than Google can. Yes, it’s worth aiming high!

(Originally posted on personal blog)

The post How MDN’s site-search works appeared first on Mozilla Hacks - the Web developer blog.


Wladimir Palant: Follow-up on Amazon Assistant’s data collection

Mozilla planet - ma, 22/03/2021 - 17:16

In my previous article on Amazon Assistant, one sentence caused considerable irritation:

Mind you, I’m not saying that Amazon is currently doing any of this.

Yes, when I wrote that article I didn’t actually know how Amazon was using the power they’ve given themselves. The mere potential here, what they could do with a minimal and undetectable change on one of their servers, that was scary enough for me. I can see that other people might prefer something more tangible however.

So this article now analyzes what data Amazon actually collects. Not the kind of data that necessarily flows to Amazon servers to make the product work. No, we’ll look at a component dedicated exclusively to “analytics,” collecting data without providing any functionality to the user.

Amazon Assistant logo with a borg eye. Image credits: Amazon, nicubunu, OpenClipart

The logic explained here applies to Amazon Assistant browser extension for Mozilla Firefox, Google Chrome and Microsoft Edge. It is also used by Amazon Assistant for Android, to a slightly limited extent however: Amazon Assistant can only access information from the Google Chrome browser here, and it has less information available to it. Since this logic resides on an Amazon web server, I can only show what is happening for me right now. It could change any time in either direction, for all Amazon Assistant users or only a selected few.

Summary of the findings

The “TitanClient” process in Amazon Assistant is its data collection component. While it’s hard to determine which websites it is active on, it’s definitely active on Google search pages as well as shopping websites such as eBay, AliExpress, Zalando, Apple, Best Buy, Barnes & Noble. And not just the big US or international brands, German building supplies stores like Hornbach and Hagebau are on its list as well, just like the Italian book shop IBS. You can get a rough idea of Amazon’s interests here. While belonging to a different Amazon Assistant feature, this list appears to be a subset of all affected websites.

When active on a website, the TitanClient process transmits the following data for each page loaded:

  • The page address (the path part is hashed but can usually be recovered)
  • The referring page if any (again, the path part is hashed but can usually be recovered)
  • Tab identifier, allowing to distinguish different tabs in your browsing session
  • Time of the visit
  • A token linked to the user’s Amazon account, despite the privacy policy claiming that no connection to your account is being established

In addition, the following data is dependent on website configuration. Any or all of these data pieces can be present:

  • Page type
  • Canonical address
  • Product identifier
  • Product title
  • Product price
  • Product availability
  • Search query (this can be hashed, but usually isn’t)
  • Number of the current search page
  • Addresses of search results (sometimes hashed but can usually be recovered)
  • Links to advertised products

This is sufficient to get a very thorough look at your browsing behavior on the targeted websites. In particular, Amazon knows what you search for, what articles you look at and how much competition wants to have for these.

How do we know that TitanClient isn’t essential extension functionality?

As mentioned in the previous article, Amazon Assistant loads eight remote “processes” and gives them considerable privileges. The code driving these processes is very complicated, and at that point I couldn’t quite tell what these are responsible for. So why am I now singling out the TitanClient process as the one responsible for analytics? Couldn’t it be implementing some required extension functionality?

The consumed APIs of the process as currently defined in FeatureManifest.js file are a good hint:

"consumedAPIs" : { "Platform" : [ "getPageDimensionData", "getPageLocationData", "getPagePerformanceTimingData", "getPageReferrer", "scrape", "getPlatformInfo", "getStorageValue", "putStorageValue", "deleteStorageValue", "publish" ], "Reporter" : [ "appendMetricData" ], "Storage" : [ "get", "put", "delete" ], "Dossier" : [ "buildURLs" ], "Identity" : [ "getCohortToken", "getPseudoIdToken", "getAllWeblabTreatments", "getRTBFStatus", "confirmRTBFExecution" ] },

If you ignore extension storage access and event publishing, it’s all data retrieval functionality such as the scrape function. There are other processes also using the scrape API, for example one named PComp. This one also needs various website manipulation functions such as createSandbox however: PComp is the component actually implementing functionality on third-party websites, so it needs to display overlays with Amazon suggestions there. TitanClient does not need that, it is limited to data extraction.

So while processes like PComp and AAWishlistProcess collect data as a side-effect of doing their job, with TitanClient it isn’t a side-effect but the only purpose. The data collected here shows what Amazon is really interested in. So let’s take a closer look at its inner workings.

When is TitanClient enabled?

Luckily, Amazon made this job easier by providing an unminified version of TitanClient code. A comment in function BITTitanProcess.prototype._handlePageTurnEvent explains when a tab change notification (called “page turn” in Amazon Assistant) is ignored:

/**
 * Ignore page turn event if any of the following conditions:
 * 1. Page state is not {@link PageState.Loading} or {@link PageState.Loaded} then
 * 2. Data collection is disabled i.e. All comparison toggles are turned off in AA
 *    settings.
 * 3. Location is not supported by titan client.
 */

The first one is obvious: TitanClient will wait for a page to be ready. For the second one we have to take a look at TitanDataCollectionToggles.prototype.isTitanDataCollectionDisabled function:

return !(this._isPCompEnabled || this._isRSCompEnabled || this._isSCompEnabled);

This refers to extension settings that can be found in the “Comparison Settings” section: “Product,” “Retail Searches” and “Search engines” respectively. If all of these are switched off, the data collection will be disabled. Is the data collection related to these settings in any way? No, these settings normally apply to the PComp process which is a completely separate component. The logic is rather: if Amazon Assistant is allowed to mess with third-party websites in some way, it will collect data there.

Finally, there is a third point: which locations are supported by TitanClient? When it starts up, it will make a request to aascraperservice.prod.us-east-1.scraper.assistant.a2z.com. The response contains a spaceReferenceMap value: an address pointing to aa-scraper-supported-prod-us-east-1.s3.amazonaws.com, which serves some binary data. This binary data is a Bloom filter, a data structure telling TitanClient which websites it should be active on. Obfuscation bonus: it’s impossible to tell which websites this data structure contains; one can only try some guesses.
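A Bloom filter only supports membership tests: you hash a candidate value with a few hash functions and check whether all the corresponding bits are set. The sketch below shows the general idea; the hash scheme and filter layout are assumptions, since the actual format Amazon uses is not documented:

// Generic Bloom-filter membership test; illustrative only, not Amazon's code.
// filterBytes is assumed to be a Uint8Array of the downloaded filter data.
function mightContain(filterBytes, numBits, numHashes, hostname) {
  for (let i = 0; i < numHashes; i++) {
    const bit = fnv1a(i + ":" + hostname) % numBits;
    if ((filterBytes[bit >> 3] & (1 << (bit & 7))) === 0) {
      return false; // definitely not on the list
    }
  }
  return true; // probably on the list (false positives are possible)
}

// Small FNV-1a hash so the sketch is self-contained.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  return hash;
}

Because only set bits are stored, you cannot list the websites the filter contains; you can only test candidates such as "www.google.com" one by one, which is exactly why it works as obfuscation.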

The instructions for “supported” websites

What happens when you visit a “supported” website such as www.google.com? First, aascraperservice.prod.us-east-1.scraper.assistant.a2z.com will be contacted again for instructions:

POST / HTTP/1.1
Host: aascraperservice.prod.us-east-1.scraper.assistant.a2z.com
Content-Type: application/json; charset=UTF-8
Content-Length: 73

{"originURL":"https://www.google.com:443","isolationZones":["ANALYTICS"]}

It’s exactly the same request that PComp process is sending, except that the latter sets isolationZones value to "FEDERATION". The response contains lots of JSON data with scraping instructions. I’ll quote some interesting parts only, e.g. the instructions for extracting the search query:

{
  "cleanUpRules": [],
  "constraint": [{ "type": "None" }],
  "contentType": "SearchQuery",
  "expression": ".*[?#&]q=([^&]+).*\n$1",
  "expressionType": "UrlJsRegex",
  "isolationZones": ["ANALYTICS"],
  "scraperSource": "Alexa",
  "signature": "E8F21AE75595619F581DA3589B92CD2B"
}

The extracted value will sometimes be passed through the MD5 hash function before being sent. This isn’t a reason to relax, however. While technically speaking a hash function cannot be reversed, some web services have huge databases of pre-calculated MD5 hashes, so the MD5 hashes of typical search queries can all be found there. Even worse: an additional result with the type FreudSearchQuery will be sent where the query is never hashed. A comment in the source code explains:

// TODO: Temporary experiment to collect search query only blessed by Freud filter.

Any bets on how long this “temporary” experiment has been around? There are comments in the codebase referring to the Freud filter that are dated 2019.
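To illustrate how little protection the MD5 hashing provides, here is a minimal sketch of a dictionary lookup: hash a list of candidate queries once, then map any observed hash back to the original query. The candidate list here is mine; anyone with a large enough list of common queries can do the same at scale:

// Sketch: reversing MD5-"obfuscated" values via a precomputed lookup table.
const crypto = require("crypto");

const md5 = value => crypto.createHash("md5").update(value).digest("hex");

// Candidate queries (illustration only; a real table would hold millions).
const candidates = ["test", "weather", "amazon prime", "firefox download"];
const lookup = new Map(candidates.map(query => [md5(query), query]));

// An "obfuscated" search query as sent by TitanClient:
const observed = "098f6bcd4621d373cade4e832627b4f6";
console.log(lookup.get(observed)); // "test"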

The following will extract links to search results:

{ "attributeSource": "href", "cleanUpRules": [], "constraint": [{ "type": "None" }], "contentType": "SearchResult", "expression": "//div[@class='g' and (not(ancestor::div/@class = 'g kno-kp mnr-c g-blk') and not(ancestor::div/@class = 'dfiEbb'))] // div[@class='yuRUbf'] /a", "expressionType": "Xpath", "isolationZones": ["ANALYTICS"], "scraperSource": "Alexa", "signature": "88719EAF6FD7BE959B447CDF39BCCA5D" }

These will also sometimes be hashed using MD5. Again, in theory MD5 cannot be reversed. However, you can probably guess that Amazon wouldn’t collect useless data. So they certainly have a huge database with pre-calculated MD5 hashes of all the various links they are interested in, watching these pop up in your search results.
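For completeness, this is roughly how an Xpath-type rule with attributeSource set to href would be applied inside a page; the wrapper function is my own sketch, not Amazon’s implementation:

// Sketch: applying an Xpath-type scraping rule to the current document.
// Evaluates the rule's XPath expression and collects the requested
// attribute (here "href") from every matching element.
function applyXpathRule(rule, doc = document) {
  const result = doc.evaluate(
    rule.expression, doc, null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null
  );
  const values = [];
  for (let i = 0; i < result.snapshotLength; i++) {
    const node = result.snapshotItem(i);
    values.push(rule.attributeSource ? node.getAttribute(rule.attributeSource)
                                     : node.textContent);
  }
  return values;
}

// applyXpathRule(searchResultRule) yields the links of the search results,
// which can then be hashed before being sent off.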

Another interesting instruction is extracting advertised products:

{ "attributeSource": "href", "cleanUpRules": [], "constraint": [{ "type": "None" }], "contentType": "ProductLevelAdvertising", "expression": "#tvcap .commercial-unit ._PD div.pla-unit-title a", "expressionType": "Css", "isolationZones": ["ANALYTICS"], "scraperSource": "Alexa", "signature": "E796BF66B6D2BDC3B5F48429E065FE6F" }

No hashing here; this is sent as plain text.

Data sent back

Once the data is extracted from a page, TitanClient generates an event and adds it to a queue. You likely won’t see it send out data immediately: the queue is flushed only every 15 minutes. When this happens, you will typically see three requests to titan.service.amazonbrowserapp.com with data like:

{ "clientToken": "gQGAA3ikWuk…", "isolationZoneId": "FARADAY", "clientContext": { "marketplace": "US", "region": "NA", "partnerTag": "amz-mkt-chr-us-20|1ba00-01000-org00-linux-other-nomod-de000-tclnt", "aaVersion": "10.2102.26.11554", "cohortToken": { "value": "30656463…" }, "pseudoIdToken": { "value": "018003…" } }, "events": [{ "sequenceNumber": 43736904, "eventTime": 1616413248927, "eventType": "View", "location": "https://www.google.com:443/06a943c59f33a34bb5924aaf72cd2995", "content": [{ "contentListenerId": "D61A4C…", "contentType": "SearchResult", "scraperSignature": "88719EAF6FD7BE959B447CDF39BCCA5D", "properties": { "searchResult": "[\"391ed66ea64ce5f38304130d483da00f\",…]" } }, { "contentListenerId": "D61A4C…", "contentType": "PageType", "scraperSignature": "E732516A4317117BCF139DE1D4A89E20", "properties": { "pageType": "Search" } }, { "contentListenerId": "D61A4C…", "contentType": "SearchQuery", "scraperSignature": "E8F21AE75595619F581DA3589B92CD2B", "properties": { "searchQuery": "098f6bcd4621d373cade4e832627b4f6", "isObfuscated": "true" } }, { "contentListenerId": "D61A4C…", "contentType": "FreudSearchQuery", "scraperSignature": "E8F21AE75595619F581DA3589B92CD2B", "properties": { "searchQuery": "test", "isObfuscated": "false" } }], "listenerId": "D61A4C…", "context": "59", "properties": { "referrer": "https://www.google.com:443/d41d8cd98f00b204e9800998ecf8427e" }, "userTrustLevel": "Unknown", "customerProperties": {} }], "clientTimeStamp": 1616413302828, "oldClientTimeStamp": 1616413302887 }

The three requests differ by isolationZoneId: the values are ANALYTICS, HERMES and FARADAY. Judging by the configuration, browser extensions always send data to all three, with different clientToken values. Amazon Assistant for Android, however, only messages ANALYTICS. Code comments give slight hints about the differences between these zones, e.g. for ANALYTICS:

 * {@link IsolationZoneId#ANALYTICS} is tied to a Titan Isolation Zone used
 * for association with business analytics data
 * Such data include off-Amazon prices, domains, search queries, etc.

HERMES is harder to understand:

 * {@link IsolationZoneId#HERMES} is tied to a Titan Isolation Zone used for
 * P&C purpose.

If anybody can guess what P&C means: let me know. Should it mean “Privacy & Compliance,” this seems to be the wrong way to approach it. As for FARADAY, the comment is self-referential:

 * {@link IsolationZoneId#FARADAY} is tied to a Titan Isolation Zone used for
 * collect data for Titan Faraday integration.

An important note: FARADAY is the only zone where pseudoIdToken is sent along. This one is generated by the Identity service for the given Amazon account and session identifier. So here Amazon can easily say “Hello” to you personally.

The remaining tokens are fairly unspectacular. The cohortToken appears to be a user-independent value used for A/B testing. When decoded, it contains some UUIDs, cryptographic keys and encrypted data. partnerTag contains information about this specific Amazon Assistant build and the platform it is running on.

As to the actual event data, location has the path part of the address “obfuscated,” yet it’s easy to find out that 06a943c59f33a34bb5924aaf72cd2995 is the MD5 hash of the word search. So the location is actually https://www.google.com:443/search. At least query parameters and the anchor are stripped here. referrer is similarly “obfuscated”: d41d8cd98f00b204e9800998ecf8427e is the MD5 hash of an empty string. So I came here from https://www.google.com:443/. And context indicates that this is all about tab 59, allowing Amazon to distinguish actions performed in different tabs.

The values under content are results of scraping the page according to the rules mentioned above. SearchResult lists ten MD5 hashes representing the results of my search, and it is fairly easy to find out what they represent. For example, 391ed66ea64ce5f38304130d483da00f is the MD5 hash of https://www.test.de/.

Page type has been recognized as Search, so there are two more results indicating my search query. Here, the “regular” SearchQuery result contains yet another MD5 hash: a quick search will tell you that 098f6bcd4621d373cade4e832627b4f6 means test. But in case anybody still has doubts, the “experimental” FreudSearchQuery result confirms that this is indeed what I searched for: the same query string appears here as plain text.

Who is Freud?

You might have wondered why Amazon would invoke the name of Sigmund Freud. As it turns out, Freud has the deciding power over which searches should stay private and which can simply be shared with Amazon without any obfuscation.

TitanClient will break each search query up into words, removing English stop words like “each” or “but.” The remaining words are hashed individually using SHA-256, and the hashes are sent to aafreudservice.prod.us-east-1.freud.titan.assistant.a2z.com. As with MD5, SHA-256 cannot technically be reversed, but one can easily build a database of hashes for every English word. The Freud service uses this database to decide for each word whether it is “blessed” or not.
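The client-side part of this check amounts to something like the following sketch; the stop-word list and function name are my own, only the tokenize-then-SHA-256 flow is taken from the behaviour described above:

// Sketch: preparing a search query for the Freud service.
// Each remaining word is hashed individually with SHA-256; the service
// answers, per hash, whether the word is "blessed".
const crypto = require("crypto");

const STOP_WORDS = new Set(["each", "but", "the", "and", "of"]); // illustration

function freudHashes(query) {
  return query
    .toLowerCase()
    .split(/\s+/)
    .filter(word => word && !STOP_WORDS.has(word))
    .map(word => crypto.createHash("sha256").update(word).digest("hex"));
}

// freudHashes("test each browser") produces the SHA-256 hashes of "test"
// and "browser", which are then sent to the Freud service.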

And if TitanClient receives Freud’s blessing for a particular search query, it considers it fine to send that query in plain text. And no: Freud does not seem to object to sex of any kind. He does appear to object to any word when it is used together with “test,” however.

That might be the reason why Amazon doesn’t quite seem to trust Freud at this point. Most of the decisions are made by a simpler classifier which works like this:

 * We say page is blessed if
 * 1. At least one PLA is present in scrapped content. OR
 * 2. If amazon url is there in organic search results.

For reference: PLA means “Product-Level Advertising.” So if your Google search displays product ads or if there is a link to Amazon in the results, all moderately effective MD5-based obfuscation will be switched off. The search query, search results and everything else will be sent as plain text.
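In code, this classifier amounts to roughly the following sketch; the function and field names are mine, only the two conditions are the ones quoted above:

// Sketch of the "blessed page" check: plain-text reporting kicks in if the
// scraped content contains at least one product ad (PLA) or if any organic
// search result links to an Amazon domain.
function isPageBlessed(scrapedContent) {
  const hasProductAd = scrapedContent.some(
    item => item.contentType === "ProductLevelAdvertising"
  );
  const hasAmazonResult = scrapedContent.some(
    item => item.contentType === "SearchResult" &&
            /(^|\.)amazon\./.test(new URL(item.value).hostname)
  );
  return hasProductAd || hasAmazonResult;
}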

What about the privacy policy?

The privacy policy for Amazon Assistant currently says:

Information We Collect Automatically. Amazon Assistant automatically collects information about websites you view where we may have relevant product or service recommendations when you are not interacting with Amazon Assistant. … You can also control collection of “Information We Collect Automatically” by disabling the Configure Comparison Settings.

This explains why TitanClient is only enabled on search sites and web shops: these are websites where Amazon Assistant might recommend something. It also explains why TitanClient is disabled if all features under “Comparison Settings” are disabled. The feature appears designed to fit this privacy policy without adding anything too suspicious to it. It doesn’t quite succeed, however:

We do not connect this information to your Amazon account, except when you interact with Amazon Assistant

As we’ve seen above, this isn’t true for data going to the FARADAY isolation zone. The pseudoIdToken value sent here is definitely connected to the user’s Amazon account.

For example, we collect and process the URL, page metadata, and limited page content of the website you are visiting to find a comparable Amazon product or service for you

This formulation carefully avoids mentioning search queries, even though it is vague enough that it doesn’t really exclude them either. And it seems to imply that the purpose is only suggesting Amazon products, even though that’s clearly not the only purpose. As the previous sentence admits:

This information is used to operate, provide, and improve … Amazon’s marketing, products, and services (including for business analytics and fraud detection).

I’m not a lawyer, so I cannot tell whether sending conflicting messages like that is legit. But Amazon clearly goes for “we use this for anything we like.” Now does the data at least stay within Amazon?

Amazon shares this information with Amazon.com, Inc. and subsidiaries that Amazon.com, Inc. controls

This sounds like P&C above doesn’t mean “Peek & Cloppenburg,” since sharing data with this company (clearly not controlled by Amazon) would violate this privacy policy. Let’s hope that this is true and the data indeed stays within Amazon. It’s not like I have a way of verifying that.

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Firefox 87 trims HTTP Referrers by default to protect user privacy

Mozilla planet - ma, 22/03/2021 - 11:00


We are pleased to announce that Firefox 87 will introduce a stricter, more privacy-preserving default Referrer Policy. From now on, by default, Firefox will trim path and query string information from referrer headers to prevent sites from accidentally leaking sensitive user data.


Referrer headers and Referrer Policy

Browsers send the HTTP Referrer header (note: the original specification name is ‘HTTP Referer’) to signal to a website which location “referred” the user to that website’s server. More precisely, browsers have traditionally sent the full URL of the referring document (typically the URL in the address bar) in the HTTP Referrer header with virtually every navigation or subresource (image, style, script) request. Websites can use referrer information for many fairly innocent purposes, including analytics, logging, and optimizing caching.

Unfortunately, the HTTP Referrer header often contains private user data: it can reveal which articles a user is reading on the referring website, or even include information on a user’s account on a website.

The introduction of the Referrer Policy in browsers in 2016-2018 allowed websites to gain more control over the referrer values on their site, and hence provided a mechanism to protect the privacy of their users. However, if a website does not set any kind of referrer policy, then web browsers have traditionally defaulted to the policy ‘no-referrer-when-downgrade’, which trims the referrer when navigating to a less secure destination (e.g., navigating from https: to http:) but otherwise sends the full URL, including path and query information, of the originating document as the referrer.


A new Policy for an evolving Web

The ‘no-referrer-when-downgrade’ policy is a relic of the past web, when sensitive web browsing was thought to occur over HTTPS connections and as such should not leak information in HTTP requests. Today’s web looks much different: the web is on a path to becoming HTTPS-only, and browsers are taking steps to curtail information leakage across websites. It is time we change our default Referrer Policy in line with these new goals.


Illustration: Firefox 87’s new default Referrer Policy ‘strict-origin-when-cross-origin’ trims user-sensitive information such as the path and query string to protect privacy.


Starting with Firefox 87, we set the default Referrer Policy to ‘strict-origin-when-cross-origin’ which will trim user sensitive information accessible in the URL. As illustrated in the example above, this new stricter referrer policy will not only trim information for requests going from HTTPS to HTTP, but will also trim path and query information for all cross-origin requests. With that update Firefox will apply the new default Referrer Policy to all navigational requests, redirected requests, and subresource (image, style, script) requests, thereby providing a significantly more private browsing experience.
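As a concrete example of what this trimming means in practice (the URLs are made up for illustration), consider a page at https://www.example.com/account/orders?id=1234 making requests to another origin:

Old default (no-referrer-when-downgrade), HTTPS page -> HTTPS cross-origin request:
  Referer: https://www.example.com/account/orders?id=1234

New default (strict-origin-when-cross-origin), HTTPS page -> HTTPS cross-origin request:
  Referer: https://www.example.com/

Either default, HTTPS page -> HTTP request (downgrade):
  no Referer header is sent

Websites that need a different behaviour can still set an explicit policy themselves, for example via the Referrer-Policy response header, which overrides this default.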

If you are a Firefox user, you don’t have to do anything to benefit from this change. As soon as your Firefox auto-updates to version 87, the new default policy will be in effect for every website you visit. If you aren’t a Firefox user yet, you can download it here to start taking advantage of all the ways Firefox works to improve your privacy step by step with every new release.

The post Firefox 87 trims HTTP Referrers by default to protect user privacy appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Niko Matsakis: Async Vision Doc Writing Sessions

Mozilla planet - ma, 22/03/2021 - 05:00

Hey folks! As part of the Async Vision Doc effort, I’m planning on holding two public drafting sessions tomorrow, March 23rd:

During these sessions, we’ll be looking over the status quo issues and writing a story or two! If you’d like to join, ping me on Discord or Zulip and I’ll send you the Zoom link.

The vision…what?

Never heard of the async vision doc? It’s a new thing we’re trying as part of the Async Foundations Working Group:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

Read the full blog post for more.

Categorieën: Mozilla-nl planet

William Lachance: Blog moving back to wrla.ch

Mozilla planet - zo, 21/03/2021 - 08:12

Housekeeping news: I’m moving this blog back to the wrla.ch domain from wlach.github.io. This domain sorta kinda worked before (I set up a netlify deploy a couple of years ago), but the software used to generate this blog referenced github all over the place in its output, so it didn’t really work as you’d expect. Anyway, this will be the last entry published on wlach.github.io: my plan is to turn that domain into a set of redirects in the future.

I don’t know how many of you are out there who still use RSS, but if you do, please update your feeds. I have filed a bug to update my Planet Mozilla entry, so hopefully the change there will be seamless.

Why? Recent events have made me not want to tie my public web presence to a particular company (especially a larger one, like Microsoft). I don’t have any immediate plans to move this blog off of github, but this gives me that option in the future. For those wondering, the original rationale for moving to github is in this post. Looking back, the idea of moving away from a VPS and WordPress made sense, the move away from my own domain less so. I think it may have been harder to set up static hosting (esp. with HTTPS) at that time… or I might have just been ignorant.

In related news, I decided to reactivate my twitter account: you can once again find me there as @wrlach (my old username got taken in my absence). I’m not totally thrilled about this (I basically stand by what I wrote a few years ago, except maybe the concession I made to Facebook being “ok”), but Twitter seems to be where my industry peers are. As someone who doesn’t have a large organic following, I’ve come to really value forums where I can share my work. That said, I’m going to be very selective about what I engage with on that site: I appreciate your understanding.

Categorieën: Mozilla-nl planet
