
The Rust Programming Language Blog: Announcing Rust 1.75.0

Mozilla planet - Thu, 28/12/2023 - 01:00

The Rust team is happy to announce a new version of Rust, 1.75.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.75.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.75.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.75.0 stable

async fn and return-position impl Trait in traits

As announced last week, Rust 1.75 supports use of async fn and -> impl Trait in traits. However, this initial release comes with some limitations that are described in the announcement post.

It's expected that these limitations will be lifted in future releases.

Pointer byte offset APIs

Raw pointers (*const T and *mut T) used to primarily support operations in units of T. For example, <*const T>::add(1) would add size_of::<T>() bytes to the pointer's address. In some cases, working with byte offsets is more convenient, and these new APIs avoid requiring callers to cast to *const u8/*mut u8 first.
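As a quick illustration, here is a minimal sketch of the difference (the array and values are invented for the example; byte_add is one of the newly stabilized byte offset methods):

fn main() {
    let values: [u32; 4] = [10, 20, 30, 40];
    let p: *const u32 = values.as_ptr();

    unsafe {
        // `add` counts in elements: this advances by one u32,
        // i.e. size_of::<u32>() = 4 bytes.
        assert_eq!(*p.add(1), 20);

        // `byte_add` counts in raw bytes; advancing 4 bytes lands
        // on the same element, with no cast to *const u8 required.
        assert_eq!(*p.byte_add(4), 20);
    }
}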

Code layout optimizations for rustc

The Rust compiler continues to get faster, with this release including the application of BOLT to our binary releases, bringing a 2% mean wall time improvement on our benchmarks. This tool optimizes the layout of the librustc_driver.so library containing most of the rustc code, allowing for better cache utilization.

We are also now building rustc with -Ccodegen-units=1, which provides more opportunity for optimizations in LLVM. This optimization brought a separate 1.5% wall time mean win to our benchmarks.

In this release these optimizations are limited to x86_64-unknown-linux-gnu compilers, but we expect to expand that over time to include more platforms.

Stabilized APIs

See the detailed release notes for the full list of stabilized APIs, including those that are now stable in const contexts.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.75.0

Many people came together to create Rust 1.75.0. We couldn't have done it without all of you. Thanks!


Support.Mozilla.Org: 2023 in a nutshell

Mozilla planet - Fri, 22/12/2023 - 19:37

Hey SUMO nation,

As we’re inching closer towards 2024, I’d like to take a step back to reflect on what we’ve accomplished in 2023. It’s a lot, so let’s dive in! 

  • Overall pageviews

From January 1st to the end of November, we got a total of 255+ million pageviews on SUMO. Pageviews have dropped consistently since 2018, and this time around we’re down 7% from last year. That’s far from bad, though, as it’s our smallest yearly drop since 2018.

  • Forum

In the forum, we’ve seen an average of 2.8k questions per month this year. This is a 6.67% down turn from last year. We also see a downturn in our answer rate within 72 hours, 71% compared to 75% last year. We also see a drop in our solved rate, 10% this year compared to 14% last year. On a typical month, our average contributors on the forum excluding OP is around 200 (compared to 240 last year).

*See Support glossary
  • KB

We did see an increase across several KB contribution metrics this year, though. In total, we got 1,990 revisions (a 14% increase from last year) from 136 non-staff members. Our review rate this year is 80% and our approval rate is 96% (compared to 73% and 95% in 2022). In total, we had 29 non-staff reviewers this year.

  • Localization

On the localization side, the numbers are overall pretty normal. Total revisions are around 13K (same as last year) from 400 non-staff members, with a 93% review rate and 99% approval rate (compared to 90% and 99% last year) from a total of 118 non-staff reviewers.

  • Social Support

Year to date, the Social Support contributors have sent a total of 850 responses (compared to 908 last year) and interacted with 1,645 conversations. Our resolved rate has dropped to 40.74%, compared to 70% last year. We have made major improvements on other metrics, though. For example, this year our contributors were responsible for a larger share of our total responses (75% in total, compared to 39.6% last year). Our conversion rate also improved, from 20% in 2022 to 52% this year. This means our contributors have taken on more of the overall inbound volume and have replied more consistently than last year.

  • Mobile Store Support

On the Mobile Store Support side, our contributors this year have contributed 1,260 replies and interacted with 3,149 conversations in total. That puts our conversion rate at 36% this year, compared to 46% last year. Most of those contributions were to non-English reviews.

In addition to the regular contribution, here are some of the community highlights from 2023:

  • We did some internal assessment and external benchmarking in Q1, which informed our experiments in Q2. Learn the results of those experiments from this call.
  • We also updated our contributor guidelines, including article review guidelines, and created a new policy around the use of generative AI.
  • By the end of the year, the Spanish community had done something really amazing: they managed to translate and update 70% of in-product desktop articles (as opposed to 11% when we started the call for help).

We’d also like to take this opportunity to highlight some Customer Experience team’s projects that we’ve tackled this year (some with close involvement and help from the community).

We split one of these into two concurrent projects:

  • Phase 1 Navigation Improvements — initial phase aims to:
    • Surface the community forums in a clearer way
    • Streamline the Ask a Question user flow
    • Improve link text and calls-to-action to better match what users might expect when navigating on the site
    • Update the main navigation; small changes to additional site UI (like sidebar menus, page headers, etc.) can also be expected
  • Cross-system content structure and hierarchy — the goal of this project is to:
    • Improve our ability to gather data metrics across functional areas of SUMO (KB, ticketing, and forums)
    • Improve recommended “next steps” by linking related content across KB and Forums
    • Create opportunities for grouping and presenting content on SUMO by alternate categories and not just by product

Project Background:

    • This research was conducted between August 2023 and November 2023. The goal of the project was to provide actionable insights on how to improve the customer experience of SUMO.
    • Research approach:
      • Stakeholder engagement process
      • Surveyed 786 Mozilla Support users
      • Conducted three rounds of interviews recruited from survey respondents:
        • Sprint 1: Evaluated content and article structure
        • Sprint 2: Evaluated the overall SUMO customer experience
        • Sprint 3: Co-design of an improved SUMO experience
      • This research was conducted by PH1 Research, who have conducted similar research for Mozilla in 2022.
  • Please consider: Participants for this study were recruited via a banner ad in SUMO. As a result, these findings only reflect the experiences and needs of users who actively use SUMO. They do not reflect users who may not be aware of SUMO or have decided not to use it.

Executive Summary:

  • Users consider SUMO a trustworthy and content-rich resource. SUMO offers resources that can appropriately help users of different technical levels. The most common user flow is via Google search. Very few are logging in to SUMO directly.
  • The goal of SUMO should be to assist Mozilla users to improve their product experience. Content should be consolidated and optimized to show fewer, high quality results on Google search and SUMO search. The article experience should aim to boost relevance and task success. The SUMO website should aid users to diagnose systems, understand problems, find solutions, and discover additional resources when needed.

Recommendations:

  • Our recommendation is that SUMO’s strategy should be to provide a self-service experience that makes users feel that Mozilla cares about their problems and offers a range of solutions appealing to various persona types (technical/non-technical).
  • The pillars for making SUMO valuable to users should be:
    • Confidence: As a user, I need to be confident that the resource provided will resolve my problem.
    • Guidance: As a user, I need to feel guided through the experience of finding a solution, even when I don’t understand the problem or solutions available.
    • Trust: As a user, I need to trust that the resources have been provided by a trustworthy authority on the subject (SUMO scores well here because of Mozilla).
  • CMS research and migration
    • Modernizing our CMS can provide significant benefits in terms of user experience, performance, security, flexibility, collaboration, and analytics.
    • This resulted in a decision to move forward with the plan to migrate our CMS to Wagtail — a modern, open-source content management system focused on flexibility and user experience.
    • We are currently in the process of planning the next phases for implementation.
  • Pocket migration to SUMO
    • We successfully migrated and published 100% of previously identified Pocket help center content from HelpScout’s CMS to SUMO’s CMS, with proper redirects in place to ensure a seamless transition for the user.
    • The localization community began efforts to help us localize the content, which had previously only been available in en-US.
  • Firefox account to Mozilla account rebrand in early November.
  • Officially supporting account users and a login-less support flow (read more about that here).
  • Database migration from MySQL to Postgres
    • This was a very challenging project, not only because we had to migrate our large codebase and very large data set from MySQL, but also because of the challenge of performing the actual data migration within a reasonable period of time, on the order of a few hours at most, so that we could minimize the disruption to users and contributors. In the end, it was a multi-month project comprising coordinated research, planning and effort between our engineering team and our SRE (Site Reliability Engineering) team. We’re now on a much better database foundation for the future, because:
      • Postgres is better suited for enterprise-level applications like ours, with very large datasets, frequent write operations and complex queries.
      • We can also take advantage of connection pooling via PgBouncer, which will improve our resilience under huge and often malicious traffic spikes (which have been occurring much more frequently during the past year).
      • Last but not least, our database now supports the full Unicode character set, which means it can fully handle all characters, including emoji, in all languages. Our MySQL database had only limited Unicode support, due to its initial configuration, and rather than invest in resolving that, which would have meant a significant chunk of work, we decided to invest instead in Postgres.

This year, you all continued to impress us with the persistence and dedication you show to Mozilla by contributing to our platform, despite the current state of our world. To every single one of you who contributed in one way or another to SUMO, I’d like to express my sincere gratitude, because without you all, our platform is just an empty shell. To celebrate this, we’ve prepared this simple dashboard with contribution data that you can filter by username, so you can see how much you’ve accomplished this year (we talked about this in our last community call of the year).

Let’s be proud of what we’ve accomplished to keep the internet as a global & public resource for everybody, and let’s keep on rocking the helpful web through 2024 and beyond!

If you’re a looker and interested in contributing to Mozilla Support, please head over to our Contribute page to learn more about our programs!


Mozilla Privacy Blog: Mozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy

Mozilla planet - Fri, 22/12/2023 - 15:44

[Read our full submission here]

Net neutrality – the concept that your internet provider should not be able to block, throttle, or prioritize elements of your internet service, such as to favor their own products or business partners – is on the docket again in the United States. With the FCC putting out a notice of proposed rulemaking (NPRM) to reinstate net neutrality, Mozilla weighed in last week with a clear message: the FCC should reestablish these common sense rules as soon as possible.

We have been fighting for net neutrality around the world for the better part of a decade and a half. Most notably, this included Mozilla’s challenge to the Trump FCC’s dismantling of net neutrality in 2018.

American internet users are on the cusp of renewed protections for the open internet. Our recently submitted comment to the FCC’s NPRM took a step back to remind the FCC and the public of the real benefits of net neutrality: Competition, Grassroots Innovation, Privacy, and Transparency and Accountability.

Simply put, if the FCC moves forward with reclassification of broadband as a Title II service, it will protect innovation in edge services; unlock vital privacy safeguards; and prevent ISPs from leveraging their market power to control people’s experiences online. With vast increases in our dependence on the internet since the COVID-19 pandemic, these protections are more important than ever.

We encourage others who are passionate about the open internet to file reply comments on the proceeding, which are due January 17, 2024.

You can read our full comment here.

The post Mozilla’s Comments to FCC: Net Neutrality Essential for Competition, Innovation, Privacy appeared first on Open Policy & Advocacy.


The Mozilla Blog: CAPTCHA successor Privacy Pass has no easy answers for online abuse

Mozilla planet - Thu, 21/12/2023 - 22:00

As much as the Web continues to inspire us, we know that sites put up with an awful lot of abuse in order to stay online. Denial of service attacks, fraud and other flavors of abusive behavior are a constant pressure on website operators.

One way that sites protect themselves is to find some way to sort “good” visitors from “bad.” CAPTCHAs are a widely loathed and unreliable means of distinguishing human visitors from automated solvers. Even worse, beneath this sometimes infuriating facade is a system that depends extensively on invasive tracking and profiling.

(You can find a fun overview of the current state of CAPTCHA here.)

Finding a technical solution to this problem that does not involve such privacy violations is an appealing challenge, but a difficult one. Well-meaning attempts can easily fail without giving due consideration to other factors. For instance, Google’s Web Environment Integrity proposal fell flat because of its potential to be used to unduly constrain personal choice in how to engage online (see our position for details).

Privacy Pass is a framework published by the IETF that is seen as having the potential to help address this difficult problem. It is a generalization of a system originally developed by Cloudflare to reduce their dependence on CAPTCHAs and tracking. For the Web, the central idea is that Privacy Pass might provide websites with a clean indication that a visitor is OK, separate from the details of their browsing history.

The way Privacy Pass works is that one website hands out special tokens to people the site thinks are OK. Other sites can ask people to give them a token. The second site then knows that a visitor with a token is considered OK by the first site, but they don’t learn anything else. If the second site trusts the first, they might treat people with tokens more favorably than those without.

The cryptography that backs Privacy Pass provides two interlocked guarantees: 

  • authenticity: the recipient of a token can guarantee that it came from the issuer
  • privacy: the recipient of the token cannot trace the token to its issuance, which prevents them from learning who was issued each token

The central promise of Privacy Pass is that the privacy guarantee would allow the exchange of tokens to be largely automated, with your browser forwarding tokens between sites that trust you to sites that are uncertain. This would happen without your participation. Sites could use these tokens to reduce their dependence on annoying and ineffective CAPTCHAs.
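To make the shape of that exchange concrete, here is a deliberately simplified sketch of the two roles (all names are invented; real Privacy Pass deployments use blinded cryptography such as VOPRFs or blind RSA so the issuer never sees the token value it signs):

// Schematic sketch only, not a real implementation.

struct Token {
    value: [u8; 32],    // random value chosen by the client
    signature: Vec<u8>, // issuer's signature over a *blinded* value
}

trait Issuer {
    // Signs a blinded value; the issuer cannot later recognize it.
    fn issue(&self, blinded_value: &[u8]) -> Vec<u8>;
}

trait Redeemer {
    // Authenticity: the signature proves a trusted issuer vouched for
    // this visitor. Privacy: nothing in the token identifies them.
    fn verify(&self, token: &Token) -> bool;
}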

Our analysis of Privacy Pass shows that while the technology is sound, applying that technology to an open system like the Web comes with a host of non-technical hazards.

We examine the privacy properties of Privacy Pass, how useful it might be, whether it could improve equity of access, and whether it might bias toward centralization. We find problems that aren’t technical in nature and hard to reconcile. 

In considering how Privacy Pass might be deployed, there is a direct tension between privacy and open participation. The system requires token providers to be widely trusted to respect privacy, but our vision of an open Web means that restrictions on participation cannot be imposed lightly. Resolving this tension is necessary when deciding who can provide tokens.

The analysis concludes that the problem of abuse is not one that will yield to a technical solution like Privacy Pass. For a problem this challenging, technical options might not provide a comprehensive solution, but they need to do more than shift problems around. Technical solutions need to complement other measures. Privacy Pass does allow us to focus on the central problem of identifying abusive visitors, but there is a need to have safeguards in place that prevent a number of serious secondary problems.

Our analysis does not ultimately identify a path to building the non-technical safeguards necessary for a successful deployment of Privacy Pass on the Web.

Finally, we look at the deployments of Privacy Pass in Safari and Chrome browsers. We conclude that these deployments have inadequate safeguards for the problems we identify.

The post CAPTCHA successor Privacy Pass has no easy answers for online abuse appeared first on The Mozilla Blog.


The Rust Programming Language Blog: Announcing `async fn` and return-position `impl Trait` in traits

Mozilla planet - Thu, 21/12/2023 - 01:00

The Rust Async Working Group is excited to announce major progress towards our goal of enabling the use of async fn in traits. Rust 1.75, which hits stable next week, will include support for both -> impl Trait notation and async fn in traits.

This is a big milestone, and we know many users will be itching to try these out in their own code. However, we are still missing some important features that many users need. Read on for recommendations on when and how to use the stabilized features.

What's stabilizing

Ever since the stabilization of RFC #1522 in Rust 1.26, Rust has allowed users to write impl Trait as the return type of functions (often called "RPIT"). This means that the function returns "some type that implements Trait". This is commonly used to return closures, iterators, and other types that are complex or impossible to write explicitly.

/// Given a list of players, return an iterator
/// over their names.
fn player_names(
    players: &[Player]
) -> impl Iterator<Item = &String> {
    players
        .iter()
        .map(|p| &p.name)
}

Starting in Rust 1.75, you can use return-position impl Trait in trait (RPITIT) definitions and in trait impls. For example, you could use this to write a trait method that returns an iterator:

trait Container {
    fn items(&self) -> impl Iterator<Item = Widget>;
}

impl Container for MyContainer {
    fn items(&self) -> impl Iterator<Item = Widget> {
        self.items.iter().cloned()
    }
}

So what does all of this have to do with async functions? Well, async functions are "just sugar" for functions that return -> impl Future. Since these are now permitted in traits, we also permit you to write traits that use async fn.

trait HttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
    // ^^^^^^^^ desugars to:
    // fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody>;
}

Where the gaps lie

-> impl Trait in public traits

The use of -> impl Trait is still discouraged for general use in public traits and APIs for the reason that users can't put additional bounds on the return type. For example, there is no way to write this function in a way that is generic over the Container trait:

fn print_in_reverse(container: impl Container) {
    for item in container.items().rev() {
        // ERROR:                 ^^^
        // the trait `DoubleEndedIterator`
        // is not implemented for
        // `impl Iterator<Item = Widget>`
        eprintln!("{item}");
    }
}

Even though some implementations might return an iterator that implements DoubleEndedIterator, there is no way for generic code to take advantage of this without defining another trait. In the future we plan to add a solution for this. For now, -> impl Trait is best used in internal traits or when you're confident your users won't need additional bounds. Otherwise you should consider using an associated type.1
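For illustration, here is a minimal sketch of the associated type alternative for the Container example (assuming Widget implements Display, as the earlier examples do; note that this simple form, unlike -> impl Trait, cannot return an iterator that borrows from self without a generic associated type):

trait Container {
    type Iter: Iterator<Item = Widget>;

    fn items(&self) -> Self::Iter;
}

// Generic code can now demand extra capabilities of the iterator:
fn print_in_reverse<C>(container: C)
where
    C: Container,
    C::Iter: DoubleEndedIterator,
{
    for item in container.items().rev() {
        eprintln!("{item}");
    }
}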

async fn in public traits

Since async fn desugars to -> impl Future, the same limitations apply. In fact, if you use bare async fn in a public trait today, you'll see a warning.

warning: use of `async fn` in public traits is discouraged as auto trait bounds cannot be specified
 --> src/lib.rs:7:5
  |
7 |     async fn fetch(&self, url: Url) -> HtmlBody;
  |     ^^^^^
  |
help: you can desugar to a normal `fn` that returns `impl Future` and add any desired bounds such as `Send`, but these cannot be relaxed without a breaking API change
  |
7 -     async fn fetch(&self, url: Url) -> HtmlBody;
7 +     fn fetch(&self, url: Url) -> impl std::future::Future<Output = HtmlBody> + Send;
  |

Of particular interest to users of async are Send bounds on the returned future. Since users cannot add bounds later, the error message is saying that you as a trait author need to make a choice: Do you want your trait to work with multithreaded, work-stealing executors?

Thankfully, we have a solution that allows using async fn in public traits today! We recommend using the trait_variant::make proc macro to let your users choose. This proc macro is part of the trait-variant crate, published by the rust-lang org. Add it to your project with cargo add trait-variant, then use it like so:

#[trait_variant::make(HttpService: Send)]
pub trait LocalHttpService {
    async fn fetch(&self, url: Url) -> HtmlBody;
}

This creates two versions of your trait: LocalHttpService for single-threaded executors and HttpService for multithreaded work-stealing executors. Since we expect the latter to be used more commonly, it has the shorter name in this example. It has additional Send bounds:

pub trait HttpService: Send {
    fn fetch(
        &self,
        url: Url,
    ) -> impl Future<Output = HtmlBody> + Send;
}

This macro works for async because impl Future rarely requires additional bounds other than Send, so we can set our users up for success. See the FAQ below for an example of where this is needed.

Dynamic dispatch

Traits that use -> impl Trait and async fn are not object-safe, which means they lack support for dynamic dispatch. We plan to provide utilities that enable dynamic dispatch in an upcoming version of the trait-variant crate.

How we hope to improve in the future

In the future we would like to allow users to add their own bounds to impl Trait return types, which would make them more generally useful. It would also enable more advanced uses of async fn. The syntax might look something like this:

trait HttpService = LocalHttpService<fetch(): Send> + Send;

Since these aliases won't require any support on the part of the trait author, it will technically make the Send variants of async traits unnecessary. However, those variants will still be a nice convenience for users, so we expect that most crates will continue to provide them.

Of course, the goals of the Async Working Group don't stop with async fn in traits. We want to continue building features on top of it that enable more reliable and sophisticated use of async Rust, and we intend to publish a more extensive roadmap in the new year.

Frequently asked questions

Is it okay to use -> impl Trait in traits?

For private traits you can use -> impl Trait freely. For public traits, it's best to avoid them for now unless you can anticipate all the bounds your users might want (in which case you can use #[trait_variant::make], as we do for async). We expect to lift this restriction in the future.

Should I still use the #[async_trait] macro?

There are a couple of reasons you might need to continue using async-trait:

  • You want to support Rust versions older than 1.75.
  • You want dynamic dispatch.

As stated above, we hope to enable dynamic dispatch in a future version of the trait-variant crate.

Is it okay to use async fn in traits? What are the limitations?

Assuming you don't need to use #[async_trait] for one of the reasons stated above, it's totally fine to use regular async fn in traits. Just remember to use #[trait_variant::make] if you want to support multithreaded runtimes.

The biggest limitation is that a type must always decide if it implements the Send or non-Send version of a trait. It cannot implement the Send version conditionally on one of its generics. This can come up in the middleware pattern, for example, a RequestLimitingService<T> that is HttpService if T: HttpService.
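As a rough sketch of that limitation, using the types from the examples above (rate limiting elided; Url and HtmlBody assumed to be Send): the wrapper can forward the Send variant, but it cannot choose a variant based on T.

struct RequestLimitingService<T> {
    inner: T,
}

// Forwarding the Send variant works when `T` implements it (the
// `Sync` bound is needed because the returned future captures `&self`):
impl<T: HttpService + Sync> HttpService for RequestLimitingService<T> {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // Rate-limiting logic would go here.
        self.inner.fetch(url).await
    }
}

// There is no way to additionally implement the non-Send
// `LocalHttpService` only for the `T`s that don't qualify; the
// wrapper must commit to one variant of the trait.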

Why do I need #[trait_variant::make] and Send bounds?

In simple cases you may find that your trait appears to work fine with a multithreaded executor. There are some patterns that just won't work, however. Consider the following:

fn spawn_task(service: impl HttpService + 'static) {
    tokio::spawn(async move {
        let url = Url::from("https://rust-lang.org");
        let _body = service.fetch(url).await;
    });
}

Without Send bounds on our trait, this would fail to compile with the error: "future cannot be sent between threads safely". By creating a variant of your trait with Send bounds, you avoid sending your users into this trap.

Note that you won't see a warning if your trait is not public, because if you run into this problem you can always add the Send bounds yourself later.

For a more thorough explanation of the problem, see this blog post.2

Can I mix async fn and impl trait?

Yes, you can freely move between the async fn and -> impl Future spelling in your traits and impls. This is true even when one form has a Send bound.3 This makes the traits created by trait_variant nicer to use.

trait HttpService: Send {
    fn fetch(&self, url: Url) -> impl Future<Output = HtmlBody> + Send;
}

impl HttpService for MyService {
    async fn fetch(&self, url: Url) -> HtmlBody {
        // This works, as long as `do_fetch(): Send`!
        self.client.do_fetch(url).await.into_body()
    }
}

Why don't these signatures use impl Future + '_?

For -> impl Trait in traits we adopted the 2024 Capture Rules early. This means that the + '_ you often see today is unnecessary in traits, because the return type is already assumed to capture input lifetimes. In the 2024 edition this rule will apply to all function signatures. See the linked RFC for more.

Why am I getting a "refine" warning when I implement a trait with -> impl Trait?

If your impl signature includes more detailed information than the trait itself, you'll get a warning:

pub trait Foo {
    fn foo(self) -> impl Debug;
}

impl Foo for u32 {
    fn foo(self) -> String {
        //          ^^^^^^
        // warning: impl trait in impl method signature does not match trait method signature
        self.to_string()
    }
}

The reason is that you may be leaking more details of your implementation than you meant to. For instance, should the following code compile?

fn main() {
    // Did the implementer mean to allow
    // use of `Display`, or only `Debug` as
    // the trait says?
    println!("{}", 32.foo());
}

Thanks to refined trait implementations it does compile, but the compiler asks you to confirm your intent to refine the trait interface with #[allow(refining_impl_trait)] on the impl.
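If the refinement is intentional, you can opt in with that attribute; a minimal sketch:

#[allow(refining_impl_trait)]
impl Foo for u32 {
    fn foo(self) -> String {
        self.to_string()
    }
}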

Conclusion

The Async Working Group is excited to end 2023 by announcing the completion of our primary goal for the year! Thank you to everyone who helpfully participated in design, implementation, and stabilization discussions. Thanks also to the users of async Rust who have given great feedback over the years. We're looking forward to seeing what you build, and to delivering continued improvements in the years to come.

  1. Note that associated types can only be used in cases where the type is nameable. This restriction will be lifted once impl_trait_in_assoc_type is stabilized.

  2. Note that in that blog post we originally said we would solve the Send bound problem before shipping async fn in traits, but we decided to cut that from the scope and ship the trait-variant crate instead.

  3. This works because of auto-trait leakage, which allows knowledge of auto traits to "leak" from an item whose signature does not specify them.


Mozilla Localization (L10N): 2024 Pontoon survey results

Mozilla planet - Wed, 20/12/2023 - 19:13

The results from the 2024 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to edit Translation Memory entries (611 votes).
  2. Improve performance of Pontoon translation workspace and dashboards (603 votes).
  3. Add ability to propose new Terminology entries (595 votes).

The remaining features ranked as follows:

  4. Add ability to preview Fluent strings in the editor (572 votes).
  5. Link project names in Concordance search results to corresponding strings (540 votes).
  6. Add “Copy translation from another locale as suggestion” batch action (523 votes).
  7. Add ability to receive automated notifications via email (521 votes).
  8. Add Timeline tab with activity to Project, Locale, ProjectLocale dashboards (501 votes).
  9. Add ability to read notifications one by one, or mark notifications as unread (495 votes).
  10. Add virtual keyboard with special characters to the editor (469 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

A total of 365 Pontoon users participated in the survey, 169 of whom voted on all features. Each user could give each feature 1 to 5 votes. Check out the full report.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!


Firefox Developer Experience: Firefox WebDriver Newsletter — 121

Mozilla planet - Tue, 19/12/2023 - 16:34

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 121 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

New: “browsingContext.contextDestroyed” event

browsingContext.contextDestroyed is a new event that allows clients to be notified when a context is discarded. This event will be emitted for instance when a tab is closed or when a frame is removed from the DOM. The event’s payload contains the context which was destroyed, the url of the context and the parent context id (for child contexts). Note that when closing a tab containing iframes, only a single event will be emitted for the top-level context to avoid unnecessary protocol traffic.

Support for “userActivation” parameter in script.callFunction and script.evaluate

The userActivation parameter is a boolean which allows the script.callFunction and script.evaluate commands to execute JavaScript while simulating that the user is currently interacting with the page. This can be useful to use features which are only available on user activation, such as interacting with the clipboard. The default value for this parameter is false.

Support for “defaultValue” field in browsingContext.userPromptOpened event

The browsingContext.userPromptOpened event will now provide a defaultValue field set to the default value of user prompts of type “prompt“. If the default value was not provided (or was an empty string), the defaultValue field is omitted.

Here is an example payload for a window.prompt usage:

{ "type": "event", "method": "browsingContext.userPromptOpened", "params": { "context": "67b77507-0728-496f-b951-72650ead8c8a", "type": "prompt", "message": "What is your favorite automation protocol", "defaultValue": "WebDriver BiDi" } } Prompt example on a webpage.<figcaption class="wp-element-caption">Prompt example on a webpage.</figcaption> Updates for the browsingContext.captureScreenshot command

The browsingContext.captureScreenshot command received several updates, some of which are non backwards-compatible.

First, the scrollIntoView parameter was removed. The parameter could lead to confusing results as it does not ensure the scrolled element becomes fully visible. If needed, it is easy to scroll into view using script.evaluate.

The clip parameter value BoxClipRectangle renamed its type property from “viewport” to “box“.

Finally, a new origin parameter was added with two possible values: “document” or “viewport” (defaults to “viewport“). This argument allows clients to define the origin and bounds of the screenshot. Typically, in order to take “full page” screenshots, using the “document” value will allow the screenshot to expand beyond the viewport, without having to scroll manually. In combination with the clip parameter, this should allow more flexibility to take page, viewport or element screenshots.

Typically, you can use the origin set to “document” and the clip type “element” to take screenshots of elements without worrying about the scroll position or the viewport size:

{ "context": "67b77507-0728-496f-b951-72650ead8c8a", "origin": "document", "clip": { "type": "element", "element": { "sharedId": "67b77507-0728-496f-b951-72650ead8c8a" } } }  screenshot of the page footer, which was scrolled-out and taller than the viewport, using origin "document" and clip type "element".<figcaption class="wp-element-caption">Left: an example page scrolled to the top. Right: screenshot of the page footer, which was scrolled-out and taller than the viewport, using origin “document” and clip type “element”.</figcaption> Added context property for Window serialization

Serialized Window or Frame objects now contain a context property which contains the corresponding context id. This id can then be used to send commands to this Window/Frame and can also be exchanged with WebDriver Classic (Marionette).

Bug Fixes

Marionette (WebDriver classic)

Added support for Window and Frame serialization

Marionette now supports serialization and deserialization of Window and Frame objects.


Mozilla Thunderbird: When Will Thunderbird For Android Be Released?

Mozilla planet - Mon, 18/12/2023 - 19:01

When will Thunderbird for Android be released? This is a question that comes up quite a lot, and we appreciate that you’re all excited to finally put Thunderbird in your pocket. It’s not a simple answer, but we’ll do our best to explain why things are taking longer than expected.

We have always been a bit vague on when we were going to release Thunderbird for Android. At first this was because we still had to figure out what features we wanted to add to K-9 Mail before we were comfortable calling it Thunderbird. Once we had a list, we estimated how long it would take to add those features to the app. Then something happened that always happens in software projects – things took longer than expected. So we cut down on features and aimed for a release at the end of 2023. As we got closer to the end of the year, it became clear that even with the reduced set of features, the release date would have almost certainly slipped into early 2024.

We then sat together and reevaluated the situation. In the end we decided that there’s no rush. We’ll work on the features we wanted in the app in the first place, because you deserve the best mobile experience we can give you. Once those features have been added, we’ll release the app as Thunderbird for Android.

Why Wait? Try K-9 Mail Now

But of course you don’t have to wait until then. All our development happens out in the open. The stable version of K-9 Mail contains all of the features we have already completed. The beta version of K-9 Mail contains the feature(s) we’re currently working on.

Both stable and beta versions can be installed via F-Droid or Google Play.

K-9 Mail’s Future

Side note: Quite a few people seem to love K-9 Mail and have asked us to keep the robot dog around. We believe it should take relatively little effort to build two apps from one code base. The apps would be virtually identical, differing only in app name, app icon, and color scheme. So our current plan is to keep K-9 Mail around.

Whether you prefer metal dogs or mythical birds, we’ve got you covered.

The post When Will Thunderbird For Android Be Released? appeared first on The Thunderbird Blog.


Mozilla Thunderbird: Thunderbird for Android / K-9 Mail: November/December 2023 Progress Report

Mozilla planet - Mon, 18/12/2023 - 19:01


In February 2023 we started publishing monthly reports on the progress of transforming K-9 Mail into Thunderbird for Android. Somewhat to my surprise, we managed to keep this up throughout the entire year. 

But since the end-of-year company shutdown is coming up and both Wolf and I have some vacation days left, this will be the last progress report of the year, covering both November and December. If you need a refresher on where we left off previously, know that the progress report for October is only one click away.

New Home On Google Play

If you’ve recently visited K-9 Mail’s page on Google Play you might have noticed that the developer name changed from “K-9 Dog Walkers” to “Mozilla Thunderbird”. That’s because we finally got around to moving the app to a developer account owned by Thunderbird.

I’d like to use this opportunity to thank Jesse Vincent, who not only founded the K-9 Mail project, but also managed the Google Play developer account for all these years. Thank you ♥

Asking For Android permissions

Previously, the app asked the user to grant the permission to access contacts when the message list or compose screens were displayed. 

Screenshots: the permission prompt in the message list screen and in the compose screen.

The app asked for the contacts permission every time one of these screens was opened. That’s not as bad as it sounds. Android automatically ignores such a request after the user has selected the “deny” option twice. Unfortunately, dismissing the dialog, e.g. by using the back button, doesn’t count as denying the permission request. So users who chose that option to get rid of the dialog were asked again and again. Clearly not a great experience.

So we changed it. Now, the app no longer asks for the contacts permission in those screens. Instead, asking the user to grant permissions is now part of the onboarding flow. After adding the first account, users will see the following screen:

The keen observer will have noticed that the app is now also asking for the permission to create notifications. Since the introduction of notification categories in Android 8, users have always had the option to disable some or all notifications created by an app. But starting with Android 13, users now have to explicitly grant the permission to create notifications.

While the app will work without the notification permission, you should still grant it to the app, at least for now. Currently, some errors (e.g. when sending an email has failed) are only communicated via a notification. 

And don’t worry, granting the permission doesn’t mean you’ll be bombarded with notifications. You can still configure whether you want to get notifications for new messages on a per account basis.

Improved Account Setup

This section has been a fixture in the last couple of progress reports. The new account setup code has been a lot of work. And we’re still not quite done yet. However, it already is in a state where it’s a vast improvement over what we had previously.

Bug fixes

Thanks to feedback from beta testers, we identified and fixed a couple of bugs.

  • The app was crashing when trying to display an error message after the user had entered an invalid or unsupported email address.
  • While fixing the bug above, we also noticed that some placeholder code to validate email addresses was still used. We replaced that code and improved error messages, e.g. when encountering a syntactically valid, but deliberately unsupported email address like test@[127.0.0.1].
  • A user reported a crash when trying to set up an account with a particular email domain. We tracked this down to an MX DNS record containing an underscore. That’s not a valid character for a hostname. The app already checked for that, but the error wasn’t caught and so crashed the app.
User experience improvements

Thanks to feedback from people who went through the manual setup flow multiple times, we identified a couple of usability issues. We made some changes like disabling auto-correct in the server name text field and copying the password entered in the incoming server settings screen to the outgoing server settings screen.

Hopefully, automatic account setup will just work for you. But if you have to use the manual setup route, at least now it should be a tiny bit less annoying.

Edit server settings

Editing incoming or outgoing server settings is not strictly part of setting up an account. However, the same screens used in the manual account setup flow are also used when editing server settings of an existing account (e.g. by going to Settings → [Account] → Fetching mail → Incoming server). 

Screenshots: the incoming server settings screen during manual account setup, and when editing an existing account.

The screens don’t behave exactly the same in both instances, so some changes were necessary. In November we finally got around to adapting the screens. And now the new UI is also used when editing server settings.

Targeting Android 13

Every year Google requires Android developers to change their apps to support the new (security) features and restrictions of the Android version that was released the prior year. This is automatically enforced by only allowing developers to publish app updates on Google Play when they “target” the required Android version. This year’s deadline was August 31.

There was only one change in Android 13 that affected K-9 Mail. Once an app targets this Android version, it has to ask the user for permission before being able to create notifications. Since our plans already included adding a new screen to ask for permissions during onboarding, we didn’t spend too much time worrying about the deadline.

But due to us being busy working on other features, we only got around to adding the permission screen in November. We requested an extension to the deadline, which (to my surprise) seems to have been granted automatically. Still, there was a brief period of time where we weren’t able to publish new beta versions because we missed the extended deadline by a couple of days.

We’ll prioritize updating the app to target the latest Android version in the future.

Push Not Working On Android 14

When Push is enabled, K-9 Mail uses what the developer documentation calls “exact alarms” to periodically refresh its Push connection to the server. Starting with Android 12, apps need to request a separate permission to use exact alarms. But the permission itself was granted automatically.

In Android 14 (released in October 2023) Google changed the behavior and Android no longer pre-grants this permission to newly installed apps. However, instead of limiting this to apps targeting Android 14, for some reason they decided to extend this behavior change to apps targeting Android 13.

This unfortunate choice by the creator of Android means that Push is currently not working for users who perform a fresh install of K-9 Mail 6.712 or newer on Android 14. Upgrading from a previous version of K-9 Mail should be fine since the permission was then granted automatically in the past.

At the beginning of next year we’ll be working on adding a screen to guide the user to grant the necessary permission when enabling Push on Android 14. Until then, you can manually grant the permission by opening Android’s App info screen for the app, then enable Allow setting alarms and reminders under Alarms & reminders.

Community Contributions

In November and December the following contributions by community members were merged into K-9 Mail:

Thanks for the contributions! ❤

Releases

If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: November/December 2023 Progress Report appeared first on The Thunderbird Blog.


The Talospace Project: Firefox 121

Mozilla planet - Mon, 18/12/2023 - 07:56
We're still in the process of finding a place to live at the new job and alternating back and forth to the tune of 400 miles each way. Still, this weekend I updated Firefox on the Talos II to Fx121, which fortunately also builds fine with the WebRTC patch from Fx116 (or --disable-webrtc in your .mozconfig), the PGO-LTO patch from Fx117 and the .mozconfigs from Firefox 105.

Unfortunately I had intended to also sit down with the Blackbird and do a test upgrade to Fedora 39 before doing so on the Talos II, but the Blackbird BMC's persistent storage seems to be hosed, the BMC password is whacked and the clock is permanently stuck in June 2022, causing signature checks on the upgrade to fail (even with --nopgpcheck). This is going to require a little work with a serial console and I just didn't have enough spare cycles over the weekend, so I'll do that over the Christmas holiday when we have a few free days. Hopefully I can also get some more work done on upstreaming the JIT at the same time.


The Servo Blog: This year in Servo: over 1000 pull requests and beyond

Mozilla planet - Mon, 18/12/2023 - 01:00

Servo is well and truly back.

Contributors to servo/servo in 2023: 453 pull requests (44%) by Igalia, 195 (19%) by non-Igalia contributors, and 389 (37%) by bots.

This year, to date, we’ve had 53 unique contributors (+140% over 22 last year), landing 1037 pull requests (+382% over 215) and 2485 commits (+375% over 523), and that’s just in our main repo!

Individual contributors are especially important for the health of the project, and of the pull requests made by humans (rather than our friendly bots), 30% were by people outside Igalia, and 18% were by non-reviewers.

Servo has been featured in six conference talks this year, including at RustNL, Web Engines Hackfest, LF Europe Member Summit, Open Source Summit Europe, GOSIM Workshop, and GOSIM Conference.

Servo now has a usable “minibrowser” UI, supports offscreen rendering, has updated experimental WebGPU support (--pref dom.webgpu.enabled), and is listed on wpt.fyi again (click Edit to add Servo).

Our new layout engine is now proving its strengths, with support for iframes, floats, stacking context improvements, inline layout improvements, margin collapsing, ‘position: sticky’, ‘min-width’ and ‘min-height’, ‘max-width’ and ‘max-height’, ‘align-content’, ‘justify-content’, ‘white-space’, ‘text-indent’, ‘text-align: justify’, ‘outline’ and ‘outline-offset’, and ‘filter: drop-shadow()’.

Pass rates in parts of the Web Platform Tests with our new layout engine, showing the improvement we’ve made since the start of our data in April 2023: floats up 64pp from 17%, floats-clear up 55pp from 18%, key CSS2 tests up 15pp from 63%, abspos up 14pp from 80%, CSS position module up 14pp from 34%, margin-padding-clear up 13pp from 67%, CSSOM up 13pp from 49%, all CSS tests up 10pp from 51%, and all WPT tests up 6pp from 49%.

Floats are notoriously tricky, to the point we found them impossible to implement correctly in our legacy layout engine, but thanks to the move from eager to opportunistic parallelism, they are now supported fairly well. Whereas legacy layout was only ever able to reach 53.9% in the floats tests and 68.2% in floats-clear, we’re now at 82.2% in floats (+28.3pp over legacy) and 73.3% in floats-clear (+5.1pp over legacy).

Acid1 now passes in the new layout engine, and we’ve also surpassed legacy layout in the CSS2 abspos (by 50.0pp), CSS2 positioning (by 6.5pp), and CSS Position (by 4.4pp) test suites, while making big strides in others, like the CSSOM tests (+13.1pp) and key parts of the CSS2 test suite (+15.8pp).

Next year, our funding will go towards maintaining Servo, releasing nightlies on Android, finishing our integration with Tauri (thanks to NLNet), and implementing tables and better support for floats and non-Latin text (thanks to NLNet).

Servo will also be at FOSDEM 2024, with Rakhi Sharma speaking about embedding Servo in Rust projects on 3 February at 16:45 local time (15:45 UTC). See you there!

There’s a lot more we would like to do, so if you or a company you know are interested in sponsoring the development of an embeddable, independent, memory-safe, modular, parallel web rendering engine, we want to hear from you! Head over to our sponsorship page, or email join@servo.org for enquiries.

In a decade that many people feared would become the nadir of browser engine diversity, we hope we can help change that with Servo.


The Rust Programming Language Blog: Launching the 2023 State of Rust Survey

Mozilla planet - Mon, 18/12/2023 - 01:00

It’s time for the 2023 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.


Patrick Cloke: Matrix Intentional Mentions explained

Mozilla planet - Fri, 15/12/2023 - 21:41

Previously I have written about how push rules generate notifications and how read receipts mark notifications as read in the Matrix protocol. This article is about a change that I instigated to improve when a “mention” (or “ping”) notification is created. (This is a “highlight” notification in the Matrix specification.)

This was part of the work I did at Element to reduce unintentional pings. I preferred thinking of it in the positive — that we should only generate a mention on purpose, hence “intentional” mentions. MSC3952 details the technical protocol changes, but this serves as a bit of a higher-level overview (some of this content is copied from the MSC).

Note

This blog post assumes that default push rules are enabled, these can be heavily modified, disabled, etc. but that is ignored in this post.

Legacy mentions

The legacy mention system searches for the current user’s display name or the localpart of the Matrix ID [1] in the text content of an event. For example, an event like the following would generate a mention for me:

{
  // Additional fields ignored.
  "content": {
    "body": "Hello @clokep:matrix.org!"
  }
}

A body content field [2] containing clokep or Patrick Cloke would cause a “highlight” notification (displayed as red in Element). This isn’t uncommon in chat protocols and is how IRC and XMPP handle mentions.

Some of the issues with this are:

There were some prior attempts to fix this, but I would summarize them as attempting to reduce edge-cases instead of attempting to rethink how mentions are done.

Intentional mentions

I chose to call this “intentional” mentions since the protocol now requires explicitly referring to the Matrix IDs to mention in a dedicated field, instead of implicit references in the text content.

The overall change is simple: include a list of mentioned users in a new content field, e.g.:

{
  // Additional fields ignored.
  "content": {
    "body": "Hello @clokep:matrix.org!",
    "m.mentions": {
      "user_ids": ["@clokep:matrix.org"]
    }
  }
}

Only the m.mentions field is used to generate mentions, the body field is no longer involved. Not only does this remove a whole class of potential bugs, but also allows for “hidden” mentions and paves the way for mentions in extensible events (see MSC4053).
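For illustration, a minimal sketch of the new check in Rust (using serde_json; the helper name is invented):

use serde_json::Value;

// Only `m.mentions` is consulted; the message text is never searched.
fn mentions_user(content: &Value, my_user_id: &str) -> bool {
    content["m.mentions"]["user_ids"]
        .as_array()
        .map_or(false, |ids| {
            ids.iter().any(|id| id.as_str() == Some(my_user_id))
        })
}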

That’s the gist of the change, although the MSC goes deeper into backwards compatibility, and interacting with replies or edits.

Comparison to other protocols

The m.mentions field is similar to how Twitter, Mastodon, Discord, and Microsoft Teams handle mentioning users. The main downside of this approach is that it is not obvious where in the text the user’s mention is (and allows for hidden mentions).

The other seriously considered approach was searching for “pills” in the HTML content of the event. This is similar to how Slack handles mentions, where the user ID is encoded with some markup [3]. This has a major downside of requiring HTML parsing on a hotpath of processing notifications (and it is unclear how this would work for non-HTML clients).

Can I use this?

You can! The MSC was approved and included in Matrix 1.7, and Synapse has had support since v1.86.0; it is pretty much up to clients to implement it!

Element Web has handled (and sent intentional mentions) since v1.11.37, although I’m not aware of other clients which do (Element X might now). Hopefully it will become used throughout the ecosystem since many of the above issues are still common complaints I see with Matrix.

[1] This post ignores room-mentions, but they’re handled very similarly.

[2] Note that the plaintext content of the event is searched, not the “formatted” content (which is usually HTML).

[3] This solution should also reduce the number of unintentional mentions, but doesn’t allow for hidden mentions.
Categorieën: Mozilla-nl planet

Patrick Cloke: Matrix Presence

Mozilla planet - fr, 15/12/2023 - 17:24

I put together some notes on presence when implementing multi-device support for presence in Synapse, maybe this is helpful to others! This is a combination of information from the specification, as well as some information about how Synapse works.

Note

These notes are true as of v1.9 of the Matrix spec and also cover some Matrix spec changes which may or may not have been merged since.

Presence in Matrix

Matrix includes basic presence support, which is explained decently by the specification:

Each user has the concept of presence information. This encodes:

  • Whether the user is currently online
  • How recently the user was last active (as seen by the server)
  • Whether a given client considers the user to be currently idle
  • Arbitrary information about the user’s current status (e.g. “in a meeting”).

This information is collated from both per-device (online, idle, last_active) and per-user (status) data, aggregated by the user’s homeserver and transmitted as an m.presence event. Presence events are sent to interested parties where users share a room membership.

A user’s presence state is represented by the presence key, which is an enum of one of the following:

  • online : The default state when the user is connected to an event stream.
  • unavailable : The user is not reachable at this time e.g. they are idle. [1]
  • offline : The user is not connected to an event stream or is explicitly suppressing their profile information from being sent.

MSC3026 defines a busy presence state:

the user is online and active but is performing an activity that would prevent them from giving their full attention to an external solicitation, i.e. the user is online and active but not available.

Presence information is returned to clients in the presence key of the sync response as a m.presence EDU which contains:

  • currently_active: Whether the user is currently active (boolean)
  • last_active_ago: How long ago this user performed some action, in milliseconds.
  • presence: online, unavailable, or offline (or busy)
  • status_msg: An optional description to accompany the presence.
Updating presence

Clients can call PUT /_matrix/client/v3/presence/{userId}/status to update the presence state & status message, or they can set the presence state via the set_presence parameter on a /sync request.

Note that when using the set_presence parameter, offline is equivalent to “do not make a change”.
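For illustration, here is a minimal sketch of calling that endpoint from Rust using reqwest and tokio (my own example, not from the spec; the homeserver URL, user ID, and access token are placeholders, and a real client would also percent-encode the user ID):

use reqwest::Client;
use serde_json::json;

// Requires reqwest with the "json" feature, plus tokio with "macros"/"rt".
#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Placeholders: substitute a real homeserver, user ID, and access token.
    let homeserver = "https://matrix.example.org";
    let user_id = "@alice:example.org";
    let access_token = "syt_placeholder_token";

    Client::new()
        .put(format!(
            "{homeserver}/_matrix/client/v3/presence/{user_id}/status"
        ))
        .bearer_auth(access_token)
        // Sets both the presence state and the optional status message.
        .json(&json!({
            "presence": "online",
            "status_msg": "in a meeting"
        }))
        .send()
        .await?
        .error_for_status()?;
    Ok(())
}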

User activity

From the Matrix spec on last active ago:

The server maintains a timestamp of the last time it saw a pro-active event from the user. A pro-active event may be sending a message to a room or changing presence state to online. This timestamp is presented via a key called last_active_ago which gives the relative number of milliseconds since the pro-active event.

If the presence is set to online then last_active_ago is not part of the /sync response and currently_active is returned instead.

Idle timeout

From the Matrix spec on automatically idling users:

The server will automatically set a user’s presence to unavailable if their last active time was over a threshold value (e.g. 5 minutes). Clients can manually set a user’s presence to unavailable. Any activity that bumps the last active time on any of the user’s clients will cause the server to automatically set their presence to online.

MSC3026 also recommends:

If a user’s presence is set to busy, it is strongly recommended for implementations to not implement a timer that would trigger an update to the unavailable state (like most implementations do when the user is in the online state).

Presence in Synapse

Note

This describes Synapse’s behavior after v1.93.0. Before that version Synapse did not account for multiple devices, essentially meaning that the latest device update won.

This also only applies to local users; per-device information for remote users is not available, only the combined per-user state.

A user’s devices can each set a per-device presence state, as well as the user’s status message. A user’s device knows better than the server whether they’re online and should send that state as part of /sync calls (e.g. sending online or unavailable or offline).

Thus a device is only ever able to set the “minimum” presence state for the user. Presence states are coalesced across devices as busy > online > unavailable > offline. You can build simple truth tables of how these combine with multiple devices:

Device 1      Device 2      User state
online        unavailable   online
busy          online        busy
unavailable   offline       unavailable
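As a concrete sketch of that coalescing rule (my own illustration in Rust; Synapse itself is written in Python), relying on an ordering where busy > online > unavailable > offline:

/// Per-device presence states, declared so that the derived ordering gives
/// Offline < Unavailable < Online < Busy, and `max` picks the "most present".
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Presence {
    Offline,
    Unavailable,
    Online,
    Busy,
}

/// Coalesce the states reported by each of a user's devices.
fn user_presence(devices: &[Presence]) -> Presence {
    devices.iter().copied().max().unwrap_or(Presence::Offline)
}

fn main() {
    use Presence::*;
    // These mirror the truth table above.
    assert_eq!(user_presence(&[Online, Unavailable]), Online);
    assert_eq!(user_presence(&[Busy, Online]), Busy);
    assert_eq!(user_presence(&[Unavailable, Offline]), Unavailable);
    assert_eq!(user_presence(&[]), Offline); // no devices: offline
}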

Additionally, users expect to see the latest activity time across all devices. (And therefore if any device is online and the latest activity is recent then the user is currently active).

The status message is global and setting it should always override any previous state (and never be cleared automatically).

Automatic state transitions

Note

Note that the below only describes the logic for local users. Data received over federation is handled differently.

If a device is unavailable or offline it should transition to online if a “pro-active event” occurs. This includes sending a receipt or event, or syncing without set_presence or set_presence=online.

If a device is offline it should transition to unavailable if it is syncing with set_presence=unavailable.

If a device is online (either directly or implicitly via user actions) it should transition to unavailable (idle) after a period of time [2] if the device is continuing to sync. (Note that this implies the sync is occurring with set_presence=unavailable as otherwise the device is continuing to report as online). [3]

If a device is online or unavailable it should transition to offline after a period of time if it is not syncing and not making other actions which would transition the device to online. [4]

Note if a device is busy it should not transition to other states. [5]
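Putting those rules together, here is a rough sketch of the per-device state machine (my own condensed summary in Rust, not Synapse’s actual code, which is Python; the boolean flags stand in for the timers and sync parameters described above):

#[derive(Clone, Copy, PartialEq, Debug)]
enum DeviceState {
    Offline,
    Unavailable,
    Online,
    Busy,
}

/// Rough summary of the per-device transition rules described above.
fn next_state(
    current: DeviceState,
    pro_active: bool,          // sent an event/receipt, or synced as online
    syncing_unavailable: bool, // syncing with set_presence=unavailable
    idle_too_long: bool,       // the idle timeout expired
    silent_too_long: bool,     // not syncing and no other actions
) -> DeviceState {
    use DeviceState::*;
    match current {
        Busy => Busy, // busy never transitions automatically
        Unavailable | Offline if pro_active => Online,
        Offline if syncing_unavailable => Unavailable,
        Online if idle_too_long => Unavailable,
        Online | Unavailable if silent_too_long => Offline,
        other => other,
    }
}

fn main() {
    use DeviceState::*;
    assert_eq!(next_state(Offline, true, false, false, false), Online);
    assert_eq!(next_state(Online, false, false, true, false), Unavailable);
    assert_eq!(next_state(Online, false, false, false, true), Offline);
    assert_eq!(next_state(Busy, false, false, false, true), Busy);
}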

There’s a huge testcase which checks all these transitions.

Examples
  1. Two devices continually syncing, one online and one unavailable. The end result should be online. [6]
  2. One device syncing with set_presence=unavailable but had a “pro-active” action, after a period of time the user should be unavailable if no additional “pro-active” actions occurred.
  3. One device that stops syncing (and no other “pro-active” actions are occurring), after a period of time the user should be offline.
  4. Two devices continually syncing, one online and one unavailable. The online device stops syncing, after a period of time the user should be unavailable.
[1] This should be called idle.

[2] The period of time is implementation specific.

[3] Note that syncing with set_presence=offline does not transition to offline; it is equivalent to not syncing. (It is mostly for mobile applications to process push notifications.)

[4] The spec doesn’t seem to ever say that devices can transition to offline.

[5] See the open thread on MSC3026.

[6] This is essentially the bug illustrated by the change in Element Web’s behavior.
Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: A Call for Proposals for the Rust 2024 Edition

Mozilla planet - fr, 15/12/2023 - 01:00

The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!

What is an Edition?

You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.

But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:

  1. Editions are opt-in; crates only receive breaking changes if their authors explicitly ask for them.

  2. Crates that use older editions never get left behind; a crate written for the original Rust 2015 Edition is still supported by every Rust release, and can still make use of all the new goodies that accompany each new version, e.g. new library APIs, compiler optimizations, etc.

  3. An Edition never splits the library ecosystem; crates using new Editions can depend on crates using old Editions (and vice-versa!), so nobody ever has to worry about Edition-related incompatibility.

In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!

A call for proposals for the Rust 2024 Edition

We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.

Please keep in mind that the following criteria determine the sort of changes we're looking for:

  1. A change must be possible to implement without violating the strict properties listed in the prior section. Specifically, the ability of crates to have cross-Edition dependencies imposes restrictions on changes that would take effect across crate boundaries, e.g. the signatures of public APIs. However, we will occasionally discover that an Edition-related change that was once thought to be impossible actually turns out to be feasible, so hope is not lost if you're not sure if your idea meets this standard; propose it just to be safe!

  2. We strive to ensure that nearly all Edition-related changes can be applied to existing codebases automatically (via tools like cargo fix), in order to make upgrading to a new Edition as painless as possible.

  3. Even if an Edition could make any given change, that doesn't mean that it should. We're not looking for hugely-invasive changes or things that would fundamentally alter the character of the language. Please focus your proposals on things like fixing obvious bugs, changing annoying behavior, unblocking future feature development, and making the language easier and more consistent.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.
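To make this concrete, here is a small sketch of the difference (the variable names are mine; the behavior is as documented for the 2021 Edition):

fn main() {
    let arr = [String::from("a"), String::from("b")];

    // Editions 2015/2018: method resolution picked the slice impl, so this
    // yielded &String references. The 2021 Edition fixed it to yield owned
    // String values, consuming the array.
    for s in arr.into_iter() {
        let _owned: String = s; // owned value on edition 2021
    }

    // On every edition, iterating over a reference to an array still
    // yields references:
    let arr2 = [1, 2, 3];
    for n in (&arr2).into_iter() {
        let _r: &i32 = n;
    }
}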

How to contribute

Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)

Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.

We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: A new world of open extensions on Firefox for Android has arrived

Mozilla planet - to, 14/12/2023 - 19:20

Woo-hoo you did it! Hundreds of add-on developers heeded the call to make their desktop extensions compatible for today’s debut of a new open ecosystem of Firefox for Android extensions. More than 450 Firefox for Android extensions are now discoverable on the addons.mozilla.org (AMO) Android homepage. It’s a strong start to an exciting new frontier of mobile browser customization. Let’s see where this goes.

Are you a developer who hasn’t migrated your desktop extension to Firefox for Android yet? Here’s a good starting point for developing extensions for Firefox for Android.

If you’ve already embarked on the mobile extension journey and have questions/insights/feedback to offer as we continue to optimize the mobile development experience, we invite you to join the discussion about top APIs missing on Firefox for Android.

Have you found any Firefox for Android bugs? Do tell!

The post A new world of open extensions on Firefox for Android has arrived appeared first on Mozilla Add-ons Community Blog.

Categorieën: Mozilla-nl planet

Mozilla Performance Blog: New Sheriffing feature and significant updates to KPI reporting queries

Mozilla planet - wo, 13/12/2023 - 11:01

A year ago I was sharing how a Mozilla Performance Sheriff catches performance regressions, the entire Workflow they go through, and the incoming improvements. Since I joined the Performance Tools Team (formerly Performance Test), almost five years ago, a whole lot of improvements have been made, and features have been added.

In this article, I want to focus on a special set of features that give the Performance Sheriffs more control over the Sheriffing Workflow (from when an alert is triggered and triaged to when the regression bug is filed and linked to the alert). We call them time-to-triage (from alert to triage) and time-to-bug (from alert to bug). They are actually the object of our Sheriffing Team’s KPIs, the KPIs that measure the performance of the Performance Sheriffs team (I like puns).

The time-to-triage KPI measures the time from when an alert is triggered by a performance change to when it is triaged (basically the first-time analysis). It is at most 3 days, and at least 80% of the sheriffed alerts have to meet this deadline (or 20% are allowed not to). However, our team does not work weekends, so they have to be excluded. For example, if an alert was created on a Friday, the three-day triage window would end on Monday instead of Wednesday, when the three business days actually expire. This means we basically only get a single day to triage it. Every time something like this happened, we had to manually exclude those alerts from the old KPI report queries, which did not exclude weekends from those times. The new queries do this exclusion automatically.
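The core of the fix is a due-date calculation that skips Saturdays and Sundays. Here is a minimal sketch of the idea in Rust with the chrono crate (illustrative only; the function is my own and the actual report queries differ in detail):

use chrono::{Datelike, NaiveDate, Weekday};

/// Add `days` business days to `start`, skipping Saturdays and Sundays.
fn add_business_days(start: NaiveDate, days: u32) -> NaiveDate {
    let mut date = start;
    let mut remaining = days;
    while remaining > 0 {
        date = date.succ_opt().expect("date out of range");
        if !matches!(date.weekday(), Weekday::Sat | Weekday::Sun) {
            remaining -= 1;
        }
    }
    date
}

fn main() {
    // An alert created on Friday 2023-12-08 is due on Wednesday 2023-12-13,
    // not on Monday 2023-12-11 as a plain three-calendar-day deadline implies.
    let friday = NaiveDate::from_ymd_opt(2023, 12, 8).unwrap();
    assert_eq!(
        add_business_days(friday, 3),
        NaiveDate::from_ymd_opt(2023, 12, 13).unwrap()
    );
}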

 

[Chart: Triage Response Times (time-to-triage), Year To Date]

[Chart: Triage Response Times (New Query), Year To Date]

[Chart: Alerts Exceeding Triage Target, Year To Date]

The same thing is true for an alert created on a weekend, where part of the alert-to-triage time falls on the weekend. In fact, the only alerts that cannot capture weekends are the ones created on Monday and Tuesday.

The time-to-bug KPI measures the time from when an alert is triggered by a performance change to when a bug is linked to the alert. It is at most 5 days, and at least 80% of the valid regression alerts must meet this deadline (or 20% are allowed not to). The only alerts that cannot capture weekends within this KPI are the ones created in the first hour of Monday morning, whose deadline ends in the last hour of Friday.

[Chart: Regression Bug Response Times, Year To Date]

[Chart: Regression Bug Response Times (New Query), Year To Date]

[Chart: Regressions Exceeding Bug Target, Year To Date]

In the images above, you can see a difference in the percentages for time-to-triage (86.9% vs. 97.9%, old query vs. new query) and time-to-bug (75.7% vs. 97%, old query vs. new query). This is not because the Sheriffing Team is doing a better job; they were doing this the whole time. It is because the feature we developed helps measure the percentages accurately by excluding the weekends from the calculated times. Going strictly by the percentages, the impact of this feature is significant, taking us from an average – maybe struggling – performance to a really good one. Of course, the inclusion of weekends in the KPI reports was known a while ago, but having a bigger picture and concrete metrics is more revealing.

The development of these time-to-triage/time-to-bug features is full-stack and involved:

  • Helping our manager’s Sheriffing report calculate the times more accurately (I am grateful to him for supporting this initiative);
  • Modifying the performance_alert_summary database table to store due dates;
  • Implementing the accurate calculation in the backend as described above;
  • Showing in the UI a countdown until the alert goes overdue, which gives the Performance Sheriffs more control and the ability to better organize themselves throughout the Sheriffing Workflow.

I didn’t mention the countdown feature yet. It is shown in the image below, right next to the status dropdown of the alert summary (top-right corner). It displays:

  • The type of due date that is in effect (Triage in this case);
  • The amount of time left. When it goes under 24 hours, the timer switches to showing the hours left.

The alert will become triaged and the counter will switch from triage to bug when the first-time analysis is performed on it (star, assign, add tag, add note).

[Screenshot: Alert with Triage due date status]

 

Below is an example of a time-to-bug timer (the time left until linking the alert to a bug goes overdue). By default the timer counter is green, but when it goes under 24 hours, it turns orange.

[Screenshot: Alert with Bug due date status]

When the timer goes overdue, as shown in the image below, the counter icon becomes red and the “Overdue” status is shown.

[Screenshot: Alert with Overdue status (for demo purposes only; the alert wasn’t actually overdue)]

Lastly, after the alert is finally linked to a bug, the counter will turn into a green checkmark and the countdown status will be “Ready for acknowledge”.

[Screenshot: Alert with Ready for acknowledge status]

Now, instead of manually excluding the times inflated by the weekends, we have an automated feature to closely control the alert lifecycle and report the KPI percentages more accurately.

The development of this feature was a personal initiative, encouraged by our manager and by the whole team (without their support I couldn’t have done this). It is part of a wider initiative I support: improvements to the Performance Sheriffing Workflow. It improves the developer experience of working with performance regressions and helps the Performance Sheriffs be more efficient by improving their tools and automating their workflow as much as possible.

Categorieën: Mozilla-nl planet

Tiger Oakes: Takeaways from React Day Berlin & TestJS Summit 2023

Mozilla planet - wo, 13/12/2023 - 01:00
What I learned from a conference double feature.
Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Puppeteer Support for the Cross-Browser WebDriver BiDi Standard

Mozilla planet - ti, 12/12/2023 - 17:14

We are pleased to share that Puppeteer now supports the next-generation, cross-browser WebDriver BiDi standard. This new protocol makes it easy for web developers to write automated tests that work across multiple browser engines.

How Do I Use Puppeteer With Firefox?

The WebDriver BiDi protocol is supported starting with Puppeteer v21.6.0. When calling puppeteer.launch, pass in "firefox" as the product option and "webDriverBiDi" as the protocol option:

const browser = await puppeteer.launch({
  product: 'firefox',
  protocol: 'webDriverBiDi',
});

You can also use the "webDriverBiDi" protocol when testing in Chrome, reflecting the fact that WebDriver BiDi offers a single standard for modern cross-browser automation.

In the future we expect "webDriverBiDi" to become the default protocol when using Firefox in Puppeteer.

Doesn’t Puppeteer Already Support Firefox?

Puppeteer has had experimental support for Firefox based on a partial re-implementation of the proprietary Chrome DevTools Protocol (CDP). This approach had the advantage that it worked without significant changes to the existing Puppeteer code. However the CDP implementation in Firefox is incomplete and has significant technical limitations. In addition, the CDP protocol itself is not designed to be cross browser, and undergoes frequent breaking changes, making it unsuitable as a long-term solution for cross-browser automation.

To overcome these problems, we’ve worked with the WebDriver Working Group at the W3C to create a standard automation protocol that meets the needs of modern browser automation clients: this is WebDriver BiDi. For more details on the protocol design and how it compares to the classic HTTP-based WebDriver protocol, see our earlier posts.

As the standardization process has progressed, the Puppeteer team has added a WebDriver BiDi backend in Puppeteer, and provided feedback on the specification to ensure that it meets the needs of Puppeteer users, and that the protocol design enables existing CDP-based tooling to easily transition to WebDriver BiDi. The result is a single protocol based on open standards that can drive both Chrome and Firefox in Puppeteer.

Are All Puppeteer Features Supported?

Not yet; WebDriver BiDi is still a work in progress, and doesn’t yet cover the full feature set of Puppeteer.

Compared to the Chrome+CDP implementation, there are some feature gaps, including support for accessing the cookie store, network request interception, some emulation features, and permissions. These features are actively being standardized and will be integrated as soon as they become available. For Firefox, the only missing feature compared to the Firefox+CDP implementation is cookie access. In addition, WebDriver BiDi already offers improvements, including better support for multi-process Firefox, which is essential for testing some websites. More information on the complete set of supported APIs can be found in the Puppeteer documentation, and as new WebDriver-BiDi features are enabled in Gecko we’ll publish details on the Firefox Developer Experience blog.

Nevertheless, we believe that the WebDriver-based Firefox support in Puppeteer has reached a level of quality which makes it suitable for many real automation scenarios. For example at Mozilla we have successfully ported our Puppeteer tests for pdf.js from Firefox+CDP to Firefox+WebDriver BiDi.

Is Firefox’s CDP Support Going Away?

We currently don’t have a specific timeline for removing CDP support. However, maintaining multiple protocols is not a good use of our resources, and we expect WebDriver BiDi to be the future of remote automation in Firefox. If you are using the CDP support outside of the context of Puppeteer, we’d love to hear from you (see below), so that we can understand your use cases, and help transition to WebDriver BiDi.

Where Can I Provide Feedback?

For any issues you experience when porting Puppeteer tests to BiDi, please open issues in the Puppeteer issue tracker, unless you can verify the bug is in the Firefox implementation, in which case please file a bug on Bugzilla.

If you are currently using CDP with Firefox, please join the #webdriver matrix channel so that we can discuss your use case and requirements, and help you solve any problems you encounter porting your code to WebDriver BiDi.

Update: The Puppeteer team have published “Harness the Power of WebDriver BiDi: Chrome and Firefox Automation with Puppeteer“.

The post Puppeteer Support for the Cross-Browser WebDriver BiDi Standard appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Cargo cache cleaning

Mozilla planet - mo, 11/12/2023 - 01:00

Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory. This post describes the feature, what to watch out for, and a request for feedback, along with some of the design considerations and implementation details.

In short, we are asking people who use the nightly channel to enable this feature and report any issues you encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml or %USERPROFILE%\.cargo\config.toml for Windows):

[unstable]
gc = true

Or set the CARGO_UNSTABLE_GC=true environment variable or use the -Zgc CLI flag to turn it on for individual commands.

We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.

What is this feature?

Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.

This cache includes:

  • Registry index data, such as package dependency metadata from crates.io.
  • Compressed .crate files downloaded from a registry.
  • The uncompressed contents of those .crate files, which rustc uses to read the source and compile dependencies.
  • Clones of git repositories used by git dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.
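To make the tracking model concrete, here is a toy sketch in Rust with rusqlite; the table name, schema, and retention arithmetic are invented for illustration and are not cargo's actual schema:

use rusqlite::{params, Connection};

fn main() -> rusqlite::Result<()> {
    // Toy stand-in for the tracker database; schema invented for illustration.
    let conn = Connection::open("global-cache-demo.sqlite")?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS crate_last_use (
             name      TEXT PRIMARY KEY,
             last_use  INTEGER NOT NULL  -- unix timestamp
         )",
        [],
    )?;

    // On every invocation that touches a cache entry, bump its timestamp.
    let now = std::time::SystemTime::now()
        .duration_since(std::time::UNIX_EPOCH)
        .unwrap()
        .as_secs() as i64;
    conn.execute(
        "INSERT INTO crate_last_use (name, last_use) VALUES (?1, ?2)
         ON CONFLICT(name) DO UPDATE SET last_use = excluded.last_use",
        params!["serde-1.0.193.crate", now],
    )?;

    // Garbage collection: delete entries unused for roughly 3 months.
    let cutoff = now - 60 * 60 * 24 * 90;
    conn.execute(
        "DELETE FROM crate_last_use WHERE last_use < ?1",
        params![cutoff],
    )?;
    Ok(())
}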

What isn't yet included is cleaning of target directories, see Plan for the future.

Automatic cleaning

When you run cargo, once a day it will inspect the last-use cache tracker, and determine if any cache elements have not been used in a while. If they have not, then they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build or cargo fetch.

The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.

Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time.

The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.

Manual cleaning

If you want to manually delete data from the cache, several options have been added under the cargo clean gc subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days) or specifying the maximum size of the cache (such as --max-download-size=1GiB). See the Manual garbage collection section or run cargo clean gc --help for more details on which options are supported.

This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.

What to watch out for

After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache.

After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.

The end result is that after that period of time you should start to notice the home directory using less space overall.

You can also try out the cargo clean gc command and explore some of its options if you want to try to manually delete some data.

If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.

Request for feedback

We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:

  • Have you run into any bugs, errors, issues, or confusing problems? Please file an issue over at https://github.com/rust-lang/cargo/issues/.
  • The first time that you use cargo with GC enabled, is there an unreasonably long delay? Cargo may need to scan your existing cache data once to detect what already exists from previous versions.
  • Do you notice unreasonable delays when it performs automatic cleaning once a day?
  • Do you have use cases where you need to do cleaning based on the size of the cache? If so, please share them at #13062.
  • If you think you would make use of manually deleting cache data, what are your use cases for doing that? Sharing them on #13060 about the CLI interface might help guide us on the overall design.
  • Does the default of deleting 3 month old data seem like a good balance for your use cases?

Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.

Design considerations and implementation details

(These sections are only for the intently curious among you.)

The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.

Performance

One big focus was to make sure that the performance of each invocation of cargo is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.

In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.

Locking

Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc, it previously did not hold a lock under the assumption that existing cache data will not be modified.

However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.
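As a rough sketch of those three states (the type and variant names here are my own illustration, not necessarily cargo's internal API):

/// The three cache-locking states described above; names are illustrative.
#[derive(Debug)]
enum CacheLockState {
    /// Shared read lock: any number of concurrent builds may read the cache.
    Shared,
    /// Held while downloading registry data; independent of the read lock,
    /// so concurrent builds can keep reading while packages are fetched.
    DownloadExclusive,
    /// Exclusive lock that excludes both of the above; taken while cleaning
    /// the cache so nothing reads or writes it concurrently.
    MutateExclusive,
}

fn main() {
    // e.g. a build holds Shared, a fetch additionally takes DownloadExclusive,
    // and garbage collection requires MutateExclusive.
    println!("{:?}", CacheLockState::Shared);
}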

Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.

Error handling and filesystems

Because we do not want problems with GC from disrupting users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.

Backwards compatibility

Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's particularly stable on-disk data format which has been stable since 2004. Cargo has support to do schema migrations within the database that stay backwards compatible.

Plan for the future

A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.

Registry index metadata

One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.

Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.

Target directory change tracking and cleaning

Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate that has been built with what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.

We are looking to replace this system with SQLite which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use substantial amount of disk space. Additionally we are looking to implement other improvements, such as more accurate fingerprint tracking, provide information about why cargo thinks something needed to be recompiled, and to hopefully improve performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.

Categorieën: Mozilla-nl planet
