Planet Mozilla (the Dutch Mozilla community, Mozilla Nederland)
https://planet.mozilla.org/
Updated: 9 months, 2 weeks ago

Firefox Developer Experience: Firefox WebDriver Newsletter — 121

Tue, 19/12/2023 - 16:34

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 121 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

New: “browsingContext.contextDestroyed” event

browsingContext.contextDestroyed is a new event that allows clients to be notified when a context is discarded. This event will be emitted for instance when a tab is closed or when a frame is removed from the DOM. The event’s payload contains the context which was destroyed, the url of the context and the parent context id (for child contexts). Note that when closing a tab containing iframes, only a single event will be emitted for the top-level context to avoid unnecessary protocol traffic.
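For illustration, a payload for this event might look roughly like the following. The ids and url are invented here, and the exact field names should be checked against the WebDriver BiDi specification:

```python
# Hypothetical browsingContext.contextDestroyed event (ids and url are made
# up for illustration). The payload carries the destroyed context id, its
# url, and, for child contexts, the parent context id.
event = {
    "type": "event",
    "method": "browsingContext.contextDestroyed",
    "params": {
        "context": "frame-42",          # id of the destroyed context
        "url": "https://example.com/",  # url the context was displaying
        "parent": "top-ctx-1",          # present only for child contexts
    },
}

def on_context_destroyed(params):
    """Minimal handler sketch: report which context was destroyed."""
    return params["context"]

assert on_context_destroyed(event["params"]) == "frame-42"
```

A client would typically use such a handler to drop any cached state (element references, pending commands) for the destroyed context.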

Support for “userActivation” parameter in script.callFunction and script.evaluate

The userActivation parameter is a boolean which allows the script.callFunction and script.evaluate commands to execute JavaScript while simulating that the user is currently interacting with the page. This can be useful to use features which are only available on user activation, such as interacting with the clipboard. The default value for this parameter is false.
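As a sketch, a script.callFunction command using this parameter could look like the following. The context id is a placeholder, and the clipboard call is just one example of a feature gated on user activation:

```python
# Sketch of a script.callFunction command with userActivation enabled
# (context id is invented). With userActivation set to True, the script
# runs as if the user had just interacted with the page.
command = {
    "method": "script.callFunction",
    "params": {
        "functionDeclaration": "() => navigator.clipboard.readText()",
        "awaitPromise": True,
        "target": {"context": "ctx-1"},
        "userActivation": True,  # defaults to False when omitted
    },
}
```

The same parameter works identically for script.evaluate.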

Support for “defaultValue” field in browsingContext.userPromptOpened event

The browsingContext.userPromptOpened event will now provide a defaultValue field set to the default value of user prompts of type “prompt“. If the default value was not provided (or was an empty string), the defaultValue field is omitted.

Here is an example payload for a window.prompt usage:

{
  "type": "event",
  "method": "browsingContext.userPromptOpened",
  "params": {
    "context": "67b77507-0728-496f-b951-72650ead8c8a",
    "type": "prompt",
    "message": "What is your favorite automation protocol",
    "defaultValue": "WebDriver BiDi"
  }
}

Prompt example on a webpage.

Updates for the browsingContext.captureScreenshot command

The browsingContext.captureScreenshot command received several updates, some of which are not backwards-compatible.

First, the scrollIntoView parameter was removed. The parameter could lead to confusing results as it does not ensure the scrolled element becomes fully visible. If needed, it is easy to scroll into view using script.evaluate.
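One way to do that scrolling yourself, sketched below, is to pass the element reference into script.callFunction (the helper, context id, and sharedId are illustrative, not a real client API):

```python
# Sketch: bring an element into view before taking a screenshot. Passing an
# element reference is easiest via script.callFunction with the element's
# sharedId as an argument (ids here are invented).
def scroll_into_view_command(context_id, shared_id):
    return {
        "method": "script.callFunction",
        "params": {
            # block: "center" tends to make the element fully visible,
            # which the removed scrollIntoView parameter did not guarantee.
            "functionDeclaration": "el => el.scrollIntoView({block: 'center'})",
            "arguments": [{"sharedId": shared_id}],
            "awaitPromise": False,
            "target": {"context": context_id},
        },
    }

cmd = scroll_into_view_command("ctx-1", "elem-1")
assert cmd["params"]["arguments"][0]["sharedId"] == "elem-1"
```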

Second, the clip parameter’s BoxClipRectangle value had its type property renamed from “viewport” to “box”.

Finally, a new origin parameter was added with two possible values: “document” or “viewport” (defaults to “viewport“). This argument allows clients to define the origin and bounds of the screenshot. Typically, in order to take “full page” screenshots, using the “document” value will allow the screenshot to expand beyond the viewport, without having to scroll manually. In combination with the clip parameter, this should allow more flexibility to take page, viewport or element screenshots.

Typically, you can use the origin set to “document” and the clip type “element” to take screenshots of elements without worrying about the scroll position or the viewport size:

{
  "context": "67b77507-0728-496f-b951-72650ead8c8a",
  "origin": "document",
  "clip": {
    "type": "element",
    "element": {
      "sharedId": "67b77507-0728-496f-b951-72650ead8c8a"
    }
  }
}

Left: an example page scrolled to the top. Right: screenshot of the page footer, which was scrolled-out and taller than the viewport, using origin “document” and clip type “element”.

Added context property for Window serialization

Serialized Window or Frame objects now contain a context property which contains the corresponding context id. This id can then be used to send commands to this Window/Frame and can also be exchanged with WebDriver Classic (Marionette).
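A rough sketch of what this looks like in practice (the id is invented; the serialization shape should be checked against the specification):

```python
# Hypothetical serialized Window value: the nested "context" property holds
# the browsing context id (id invented for illustration).
window_value = {
    "type": "window",
    "value": {"context": "ctx-7"},
}

def context_id(serialized_window):
    """Extract the context id from a serialized Window/Frame value."""
    return serialized_window["value"]["context"]

assert context_id(window_value) == "ctx-7"
```

That extracted id can then be used as the target for further commands against that Window or Frame.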

Bug Fixes

Marionette (WebDriver classic)

Added support for Window and Frame serialization

Marionette now supports serialization and deserialization of Window and Frame objects.

Categories: Mozilla-nl planet

Mozilla Thunderbird: When Will Thunderbird For Android Be Released?

Mon, 18/12/2023 - 19:01

When will Thunderbird for Android be released? This is a question that comes up quite a lot, and we appreciate that you’re all excited to finally put Thunderbird in your pocket. It’s not a simple answer, but we’ll do our best to explain why things are taking longer than expected.

We have always been a bit vague on when we were going to release Thunderbird for Android. At first this was because we still had to figure out what features we wanted to add to K-9 Mail before we were comfortable calling it Thunderbird. Once we had a list, we estimated how long it would take to add those features to the app. Then something happened that always happens in software projects – things took longer than expected. So we cut down on features and aimed for a release at the end of 2023. As we got closer to the end of the year, it became clear that even with the reduced set of features, the release date would have almost certainly slipped into early 2024.

We then sat together and reevaluated the situation. In the end we decided that there’s no rush. We’ll work on the features we wanted in the app in the first place, because you deserve the best mobile experience we can give you. Once those features have been added, we’ll release the app as Thunderbird for Android.

Why Wait? Try K-9 Mail Now

But of course you don’t have to wait until then. All our development happens out in the open. The stable version of K-9 Mail contains all of the features we have already completed. The beta version of K-9 Mail contains the feature(s) we’re currently working on.

Both stable and beta versions can be installed via F-Droid or Google Play.

K-9 Mail’s Future

Side note: Quite a few people seem to love K-9 Mail and have asked us to keep the robot dog around. We believe it should be relatively little effort to build two apps from one code base. The apps would be virtually identical and only differ in app name, app icon, and the color scheme. So our current plan is to keep K-9 Mail around.

Whether you prefer metal dogs or mythical birds, we’ve got you covered.

The post When Will Thunderbird For Android Be Released? appeared first on The Thunderbird Blog.


Mozilla Thunderbird: Thunderbird for Android / K-9 Mail: November/December 2023 Progress Report

Mon, 18/12/2023 - 19:01


In February 2023 we started publishing monthly reports on the progress of transforming K-9 Mail into Thunderbird for Android. Somewhat to my surprise, we managed to keep this up throughout the entire year. 

But since the end-of-year company shutdown is coming up and both Wolf and I have some vacation days left, this will be the last progress report of the year, covering both November and December. If you need a refresher on where we left off previously, the progress report for October is only one click away.

New Home On Google Play

If you’ve recently visited K-9 Mail’s page on Google Play you might have noticed that the developer name changed from “K-9 Dog Walkers” to “Mozilla Thunderbird”. That’s because we finally got around to moving the app to a developer account owned by Thunderbird.

I’d like to use this opportunity to thank Jesse Vincent, who not only founded the K-9 Mail project, but also managed the Google Play developer account for all these years. Thank you ♥

Asking For Android Permissions

Previously, the app asked the user to grant the permission to access contacts when the message list or compose screens were displayed. 

Screenshots: the permission prompt in the message list screen and in the compose screen.

The app asked for the contacts permission every time one of these screens was opened. That’s not as bad as it sounds. Android automatically ignores such a request after the user has selected the “deny” option twice. Unfortunately, dismissing the dialog e.g. by using the back button, doesn’t count as denying the permission request. So users who chose that option to get rid of the dialog were asked again and again. Clearly not a great experience.

So we changed it. Now, the app no longer asks for the contacts permission in those screens. Instead, asking the user to grant permissions is now part of the onboarding flow. After adding the first account, users will see the following screen:

The keen observer will have noticed that the app is now also asking for the permission to create notifications. Since the introduction of notification categories in Android 8, users have always had the option to disable some or all notifications created by an app. But starting with Android 13, users now have to explicitly grant the permission to create notifications.

While the app will work without the notification permission, you should still grant it to the app, at least for now. Currently, some errors (e.g. when sending an email has failed) are only communicated via a notification. 

And don’t worry, granting the permission doesn’t mean you’ll be bombarded with notifications. You can still configure whether you want to get notifications for new messages on a per account basis.

Improved Account Setup

This section has been a fixture in the last couple of progress reports. The new account setup code has been a lot of work, and we’re still not quite done yet. However, it is already a vast improvement over what we had previously.

Bug fixes

Thanks to feedback from beta testers, we identified and fixed a couple of bugs.

  • The app was crashing when trying to display an error message after the user had entered an invalid or unsupported email address.
  • While fixing the bug above, we also noticed that some placeholder code to validate email addresses was still used. We replaced that code and improved error messages, e.g. when encountering a syntactically valid, but deliberately unsupported email address like test@[127.0.0.1].
  • A user reported a crash when trying to set up an account with a particular email domain. We tracked this down to an MX DNS record containing an underscore. That’s not a valid character for a hostname. The app already checked for that, but the error wasn’t caught and so crashed the app.
User experience improvements

Thanks to feedback from people who went through the manual setup flow multiple times, we identified a couple of usability issues. We made some changes like disabling auto-correct in the server name text field and copying the password entered in the incoming server settings screen to the outgoing server settings screen.

Hopefully, automatic account setup will just work for you. But if you have to use the manual setup route, at least now it should be a tiny bit less annoying.

Edit server settings

Editing incoming or outgoing server settings is not strictly part of setting up an account. However, the same screens used in the manual account setup flow are also used when editing server settings of an existing account (e.g. by going to Settings → [Account] → Fetching mail → Incoming server). 

Screenshots: the incoming server settings screen during manual account setup, and the same screen when editing an existing account.

The screens don’t behave exactly the same in both instances, so some changes were necessary. In November we finally got around to adapting the screens. And now the new UI is also used when editing server settings.

Targeting Android 13

Every year Google requires Android developers to change their apps to support the new (security) features and restrictions of the Android version that was released the prior year. This is automatically enforced by only allowing developers to publish app updates on Google Play when they “target” the required Android version. This year’s deadline was August 31.

There was only one change in Android 13 that affected K-9 Mail. Once an app targets this Android version, it has to ask the user for permission before being able to create notifications. Since our plans already included adding a new screen to ask for permissions during onboarding, we didn’t spend too much time worrying about the deadline.

But due to us being busy working on other features, we only got around to adding the permission screen in November. We requested an extension to the deadline, which (to my surprise) seems to have been granted automatically. Still, there was a brief period of time where we weren’t able to publish new beta versions because we missed the extended deadline by a couple of days.

We’ll prioritize updating the app to target the latest Android version in the future.

Push Not Working On Android 14

When Push is enabled, K-9 Mail uses what the developer documentation calls “exact alarms” to periodically refresh its Push connection to the server. Starting with Android 12, apps need to request a separate permission to use exact alarms. But the permission itself was granted automatically.

In Android 14 (released in October 2023) Google changed the behavior and Android no longer pre-grants this permission to newly installed apps. However, instead of limiting this to apps targeting Android 14, for some reason they decided to extend this behavior change to apps targeting Android 13.

This unfortunate choice by the creator of Android means that Push is currently not working for users who perform a fresh install of K-9 Mail 6.712 or newer on Android 14. Upgrading from a previous version of K-9 Mail should be fine since the permission was then granted automatically in the past.

At the beginning of next year we’ll be working on adding a screen to guide the user to grant the necessary permission when enabling Push on Android 14. Until then, you can manually grant the permission by opening Android’s App info screen for the app, then enable Allow setting alarms and reminders under Alarms & reminders.

Community Contributions

In November and December the following contributions by community members were merged into K-9 Mail:

Thanks for the contributions! ❤

Releases

If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: November/December 2023 Progress Report appeared first on The Thunderbird Blog.


The Talospace Project: Firefox 121

Mon, 18/12/2023 - 07:56
We're still in the process of finding a place to live at the new job and alternating back and forth to the tune of 400 miles each way. Still, this weekend I updated Firefox on the Talos II to Fx121, which fortunately also builds fine with the WebRTC patch from Fx116 (or --disable-webrtc in your .mozconfig), the PGO-LTO patch from Fx117 and the .mozconfigs from Firefox 105.

Unfortunately I had intended to also sit down with the Blackbird and do a test upgrade to Fedora 39 before doing so on the Talos II, but the Blackbird BMC's persistent storage seems to be hosed, the BMC password is whacked and the clock is permanently stuck in June 2022, causing signature checks on the upgrade to fail (even with --nopgpcheck). This is going to require a little work with a serial console and I just didn't have enough spare cycles over the weekend, so I'll do that over the Christmas holiday when we have a few free days. Hopefully I can also get some more work done on upstreaming the JIT at the same time.


The Servo Blog: This year in Servo: over 1000 pull requests and beyond

Mon, 18/12/2023 - 01:00

Servo is well and truly back.

Pull requests to servo/servo in 2023: 453 (44%) by Igalia, 195 (19%) by non-Igalia contributors, 389 (37%) by bots.

This year, to date, we’ve had 53 unique contributors (+140% over 22 last year), landing 1037 pull requests (+382% over 215) and 2485 commits (+375% over 523), and that’s just in our main repo!

Individual contributors are especially important for the health of the project, and of the pull requests made by humans (rather than our friendly bots), 30% were by people outside Igalia, and 18% were by non-reviewers.

Servo has been featured in six conference talks this year, including at RustNL, Web Engines Hackfest, LF Europe Member Summit, Open Source Summit Europe, GOSIM Workshop, and GOSIM Conference.

Servo now has a usable “minibrowser” UI, now supports offscreen rendering, its experimental WebGPU support (--pref dom.webgpu.enabled) has been updated, and Servo is now listed on wpt.fyi again (click Edit to add Servo).

Our new layout engine is now proving its strengths, with support for iframes, floats, stacking context improvements, inline layout improvements, margin collapsing, ‘position: sticky’, ‘min-width’ and ‘min-height’, ‘max-width’ and ‘max-height’, ‘align-content’, ‘justify-content’, ‘white-space’, ‘text-indent’, ‘text-align: justify’, ‘outline’ and ‘outline-offset’, and ‘filter: drop-shadow()’.

Pass rates in parts of the Web Platform Tests with our new layout engine, showing the improvement since the start of our data in April 2023 (pass rate and percentage-point gain): floats 17% +64pp; floats-clear 18% +55pp; key CSS2 tests 63% +15pp; abspos 80% +14pp; CSS position module 34% +14pp; margin-padding-clear 67% +13pp; CSSOM 49% +13pp; all CSS tests 51% +10pp; all WPT tests 49% +6pp.

Floats are notoriously tricky, to the point we found them impossible to implement correctly in our legacy layout engine, but thanks to the move from eager to opportunistic parallelism, they are now supported fairly well. Whereas legacy layout was only ever able to reach 53.9% in the floats tests and 68.2% in floats-clear, we’re now at 82.2% in floats (+28.3pp over legacy) and 73.3% in floats-clear (+5.1pp over legacy).

Acid1 now passes in the new layout engine, and we’ve also surpassed legacy layout in the CSS2 abspos (by 50.0pp), CSS2 positioning (by 6.5pp), and CSS Position (by 4.4pp) test suites, while making big strides in others, like the CSSOM tests (+13.1pp) and key parts of the CSS2 test suite (+15.8pp).

Next year, our funding will go towards maintaining Servo, releasing nightlies on Android, finishing our integration with Tauri (thanks to NLNet), and implementing tables and better support for floats and non-Latin text (thanks to NLNet).

Servo will also be at FOSDEM 2024, with Rakhi Sharma speaking about embedding Servo in Rust projects on 3 February at 16:45 local time (15:45 UTC). See you there!

There’s a lot more we would like to do, so if you or a company you know are interested in sponsoring the development of an embeddable, independent, memory-safe, modular, parallel web rendering engine, we want to hear from you! Head over to our sponsorship page, or email join@servo.org for enquiries.

In a decade that many people feared would become the nadir of browser engine diversity, we hope we can help change that with Servo.


The Rust Programming Language Blog: Launching the 2023 State of Rust Survey

Mon, 18/12/2023 - 01:00

It’s time for the 2023 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2023 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, January 15th, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible in 2024.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.


Patrick Cloke: Matrix Intentional Mentions explained

Fri, 15/12/2023 - 21:41

Previously I have written about how push rules generate notifications and how read receipts mark notifications as read in the Matrix protocol. This article is about a change that I instigated to improve when a “mention” (or “ping”) notification is created. (This is a “highlight” notification in the Matrix specification.)

This was part of the work I did at Element to reduce unintentional pings. I preferred thinking of it in the positive — that we should only generate a mention on purpose, hence “intentional” mentions. MSC3952 details the technical protocol changes, but this serves as a bit of a higher-level overview (some of this content is copied from the MSC).

Note

This blog post assumes that default push rules are enabled, these can be heavily modified, disabled, etc. but that is ignored in this post.

Legacy mentions

The legacy mention system searches for the current user’s display name or the localpart of the Matrix ID [1] in the text content of an event. For example, an event like the following would generate a mention for me:

{
  // Additional fields ignored.
  "content": {
    "body": "Hello @clokep:matrix.org!"
  }
}

A body content field [2] containing clokep or Patrick Cloke would cause a “highlight” notification (displayed as red in Element). This isn’t uncommon in chat protocols; it is essentially how IRC and XMPP clients handle mentions.
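The legacy matching can be sketched as follows. This is a simplification: the real push rules use case-insensitive word-boundary matching rather than a plain substring search, but the failure mode is the same:

```python
# Simplified sketch of the legacy mention rule: search the plain-text body
# for the user's display name or Matrix ID localpart.
def legacy_highlight(body, display_name, localpart):
    text = body.lower()
    return display_name.lower() in text or localpart.lower() in text

assert legacy_highlight("Hello @clokep:matrix.org!", "Patrick Cloke", "clokep")
# False positive: merely talking *about* someone pings them.
assert legacy_highlight("the clokep branch is broken", "Patrick Cloke", "clokep")
```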

This has some issues, most notably that any plain-text occurrence of a name (for example, when quoting a message or pasting a log) generates a highlight, whether or not a mention was intended.

There were some prior attempts to fix this, but I would summarize them as attempting to reduce edge-cases instead of attempting to rethink how mentions are done.

Intentional mentions

I chose to call this “intentional” mentions since the protocol now requires explicitly referring to the Matrix IDs to mention in a dedicated field, instead of implicit references in the text content.

The overall change is simple: include a list of mentioned users in a new content field, e.g.:

{
  // Additional fields ignored.
  "content": {
    "body": "Hello @clokep:matrix.org!",
    "m.mentions": {
      "user_ids": ["@clokep:matrix.org"]
    }
  }
}

Only the m.mentions field is used to generate mentions, the body field is no longer involved. Not only does this remove a whole class of potential bugs, but also allows for “hidden” mentions and paves the way for mentions in extensible events (see MSC4053).
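A minimal sketch of the resulting check, as a client or server might implement it (this is an illustration of the rule, not Synapse’s actual push-rule code):

```python
def mentions_user(event_content, user_id):
    """Return True if the event intentionally mentions user_id.

    Only the m.mentions field is consulted; the body text is ignored,
    so quoting someone's name no longer pings them.
    """
    mentions = event_content.get("m.mentions", {})
    return user_id in mentions.get("user_ids", [])

content = {
    "body": "Hello @clokep:matrix.org!",
    "m.mentions": {"user_ids": ["@clokep:matrix.org"]},
}
assert mentions_user(content, "@clokep:matrix.org")
# A bare name in the body no longer counts as a mention:
assert not mentions_user({"body": "ping @clokep:matrix.org"}, "@clokep:matrix.org")
```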

That’s the gist of the change, although the MSC goes deeper into backwards compatibility, and interacting with replies or edits.

Comparison to other protocols

The m.mentions field is similar to how Twitter, Mastodon, Discord, and Microsoft Teams handle mentioning users. The main downside of this approach is that it is not obvious where in the text the user’s mention is (and allows for hidden mentions).

The other seriously considered approach was searching for “pills” in the HTML content of the event. This is similar to how Slack handles mentions, where the user ID is encoded with some markup [3]. This has a major downside of requiring HTML parsing on a hotpath of processing notifications (and it is unclear how this would work for non-HTML clients).

Can I use this?

You can! The MSC was approved and included in Matrix 1.7, and Synapse has had support since v1.86.0; it is pretty much up to clients to implement it!

Element Web has handled (and sent) intentional mentions since v1.11.37, although I’m not aware of other clients which do (Element X might now). Hopefully it will become used throughout the ecosystem, since many of the above issues are still common complaints I see with Matrix.

[1] This post ignores room-mentions, but they’re handled very similarly.
[2] Note that the plaintext content of the event is searched, not the “formatted” content (which is usually HTML).
[3] This solution should also reduce the number of unintentional mentions, but doesn’t allow for hidden mentions.

Patrick Cloke: Matrix Presence

Fri, 15/12/2023 - 17:24

I put together some notes on presence when implementing multi-device support for presence in Synapse, maybe this is helpful to others! This is a combination of information from the specification, as well as some information about how Synapse works.

Note

These notes are true as of the v1.9 of the Matrix spec and also cover some Matrix spec changes which may or may not have been merged since.

Presence in Matrix

Matrix includes basic presence support, which is explained decently from the specification:

Each user has the concept of presence information. This encodes:

  • Whether the user is currently online
  • How recently the user was last active (as seen by the server)
  • Whether a given client considers the user to be currently idle
  • Arbitrary information about the user’s current status (e.g. “in a meeting”).

This information is collated from both per-device (online, idle, last_active) and per-user (status) data, aggregated by the user’s homeserver and transmitted as an m.presence event. Presence events are sent to interested parties where users share a room membership.

A user’s presence state is represented by the presence key, which is an enum of one of the following:

  • online : The default state when the user is connected to an event stream.
  • unavailable : The user is not reachable at this time e.g. they are idle. [1]
  • offline : The user is not connected to an event stream or is explicitly suppressing their profile information from being sent.

MSC3026 defines a busy presence state:

the user is online and active but is performing an activity that would prevent them from giving their full attention to an external solicitation, i.e. the user is online and active but not available.

Presence information is returned to clients in the presence key of the sync response as an m.presence EDU, which contains:

  • currently_active: Whether the user is currently active (boolean)
  • last_active_ago: The time since this user last performed some action, in milliseconds.
  • presence: online, unavailable, or offline (or busy)
  • status_msg: An optional description to accompany the presence.
Updating presence

Clients can call PUT /_matrix/client/v3/presence/{userId}/status to update the presence state and status message, or can set the presence state via the set_presence parameter on a /sync request.

Note that when using the set_presence parameter, offline is equivalent to “do not make a change”.
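Putting the two update paths side by side as a sketch (the user id and values are placeholders, and the requests are only constructed here, not sent):

```python
# Sketch of the two ways a client updates presence (placeholder values).
user_id = "@alice:example.org"

# 1. Explicit update via the presence endpoint:
put_request = {
    "method": "PUT",
    "path": f"/_matrix/client/v3/presence/{user_id}/status",
    "body": {"presence": "online", "status_msg": "in a meeting"},
}

# 2. Piggybacking on sync; note that set_presence=offline means
#    "do not make a change", not "mark me offline":
sync_query = {"set_presence": "unavailable"}

assert put_request["body"]["presence"] == "online"
assert sync_query["set_presence"] == "unavailable"
```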

User activity

From the Matrix spec on last active ago:

The server maintains a timestamp of the last time it saw a pro-active event from the user. A pro-active event may be sending a message to a room or changing presence state to online. This timestamp is presented via a key called last_active_ago which gives the relative number of milliseconds since the pro-active event.

If the presence is set to online then last_active_ago is not part of the /sync response and currently_active is returned instead.

Idle timeout

From the Matrix spec on automatically idling users:

The server will automatically set a user’s presence to unavailable if their last active time was over a threshold value (e.g. 5 minutes). Clients can manually set a user’s presence to unavailable. Any activity that bumps the last active time on any of the user’s clients will cause the server to automatically set their presence to online.

MSC3026 also recommends:

If a user’s presence is set to busy, it is strongly recommended for implementations to not implement a timer that would trigger an update to the unavailable state (like most implementations do when the user is in the online state).

Presence in Synapse

Note

This describes Synapse’s behavior after v1.93.0. Before that version Synapse did not account for multiple devices, essentially meaning that the latest device update won.

This also only applies to local users; per-device information for remote users is not available, only the combined per-user state.

A user’s devices can set a device’s presence state and the user’s status message. A device knows better than the server whether the user is online, and should send that state as part of /sync calls (e.g. online, unavailable, or offline).

Thus a device is only ever able to set the “minimum” presence state for the user. Presence states are coalesced across devices as busy > online > unavailable > offline. You can build simple truth tables of how these combine with multiple devices:

Device 1      Device 2      User state
online        unavailable   online
busy          online        busy
unavailable   offline       unavailable
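The coalescing rule can be sketched in a few lines. This mirrors the ordering described above and is only an illustration, not Synapse’s actual code:

```python
# Coalesce per-device presence states into a user state using the ordering
# busy > online > unavailable > offline.
ORDER = ["offline", "unavailable", "online", "busy"]

def coalesce(device_states):
    """Return the highest-priority state among a user's devices."""
    return max(device_states, key=ORDER.index)

assert coalesce(["online", "unavailable"]) == "online"
assert coalesce(["busy", "online"]) == "busy"
assert coalesce(["unavailable", "offline"]) == "unavailable"
```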

Additionally, users expect to see the latest activity time across all devices. (And therefore if any device is online and the latest activity is recent then the user is currently active).

The status message is global and setting it should always override any previous state (and never be cleared automatically).

Automatic state transitions

Note

Note that the below only describes the logic for local users. Data received over federation is handled differently.

If a device is unavailable or offline it should transition to online if a “pro-active event” occurs. This includes sending a receipt or event, or syncing without set_presence (or with set_presence=online).

If a device is offline it should transition to unavailable if it is syncing with set_presence=unavailable.

If a device is online (either directly or implicitly via user actions) it should transition to unavailable (idle) after a period of time [2] if the device is continuing to sync. (Note that this implies the sync is occurring with set_presence=unavailable as otherwise the device is continuing to report as online). [3]

If a device is online or unavailable it should transition to offline after a period of time if it is not syncing and not making other actions which would transition the device to online. [4]

Note if a device is busy it should not transition to other states. [5]
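Taken together, the transitions above can be sketched as a small state function. This is an illustrative toy, not real server code: the parameter names are made up, and the inactivity thresholds are arbitrary since the spec leaves the timings implementation specific.

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum State { Offline, Unavailable, Online, Busy }

// Illustrative thresholds; the real "period of time" is implementation specific.
const IDLE_AFTER_SECS: u64 = 5 * 60;
const OFFLINE_AFTER_SECS: u64 = 30 * 60;

fn next_state(
    current: State,
    proactive_event: bool,     // sent a receipt/event, or synced as online
    syncing_as: Option<State>, // presence reported by an ongoing /sync, if any
    idle_secs: u64,            // time since the last pro-active action
) -> State {
    match current {
        // busy never transitions automatically.
        State::Busy => State::Busy,
        // Any pro-active event brings the device online.
        _ if proactive_event => State::Online,
        // offline -> unavailable when syncing with set_presence=unavailable.
        State::Offline if syncing_as == Some(State::Unavailable) => State::Unavailable,
        // online -> unavailable (idle) after a while, if still syncing.
        State::Online if syncing_as.is_some() && idle_secs >= IDLE_AFTER_SECS => {
            State::Unavailable
        }
        // online/unavailable -> offline once the device stops syncing.
        State::Online | State::Unavailable
            if syncing_as.is_none() && idle_secs >= OFFLINE_AFTER_SECS => State::Offline,
        other => other,
    }
}

fn main() {
    assert_eq!(next_state(State::Offline, true, None, 0), State::Online);
    assert_eq!(next_state(State::Busy, true, None, 0), State::Busy);
    assert_eq!(next_state(State::Online, false, Some(State::Unavailable), 600), State::Unavailable);
    assert_eq!(next_state(State::Unavailable, false, None, 3600), State::Offline);
    println!("ok");
}
```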

There’s a huge testcase which checks all these transitions.

Examples
  1. Two devices continually syncing, one online and one unavailable. The end result should be online. [6]
  2. One device syncing with set_presence=unavailable but had a “pro-active” action, after a period of time the user should be unavailable if no additional “pro-active” actions occurred.
  3. One device that stops syncing (and no other “pro-active” actions are occurring), after a period of time the user should be offline.
  4. Two devices continually syncing, one online and one unavailable. The online device stops syncing, after a period of time the user should be unavailable.
[1] This should be called idle.
[2] The period of time is implementation specific.
[3] Note that syncing with set_presence=offline does not transition to offline; it is equivalent to not syncing. (It is mostly for mobile applications to process push notifications.)
[4] The spec doesn’t seem to ever say that devices can transition to offline.
[5] See the open thread on the MSC3026.
[6] This is essentially the bug illustrated by the change in Element Web’s behavior.
Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: A Call for Proposals for the Rust 2024 Edition

vr, 15/12/2023 - 01:00

The year 2024 is soon to be upon us, and as long-time Rust aficionados know, that means that a new Edition of Rust is on the horizon!

What is an Edition?

You may be aware that a new version of Rust is released every six weeks. New versions of the language can both add things as well as change things, but only in backwards-compatible ways, according to Rust's 1.0 stability guarantee.

But does that mean that Rust can never make backwards-incompatible changes? Not quite! This is what an Edition is: Rust's mechanism for introducing backwards-incompatible changes in a backwards-compatible way. If that sounds like a contradiction, there are three key properties of Editions that preserve the stability guarantee:

  1. Editions are opt-in; a crate only receives breaking changes if its authors explicitly ask for them.

  2. Crates that use older editions never get left behind; a crate written for the original Rust 2015 Edition is still supported by every Rust release, and can still make use of all the new goodies that accompany each new version, e.g. new library APIs, compiler optimizations, etc.

  3. An Edition never splits the library ecosystem; crates using new Editions can depend on crates using old Editions (and vice-versa!), so nobody ever has to worry about Edition-related incompatibility.

In order to keep churn to a minimum, a new Edition of Rust is only released once every three years. We've had the 2015 Edition, the 2018 Edition, the 2021 Edition, and soon, the 2024 Edition. And we could use your help!

A call for proposals for the Rust 2024 Edition

We know how much you love Rust, but let's be honest, no language is perfect, and Rust is no exception. So if you've got ideas for how Rust could be better if only that pesky stability guarantee weren't around, now's the time to share! Also note that potential Edition-related changes aren't just limited to the language itself: we'll also consider changes to both Cargo and rustfmt as well.

Please keep in mind that the following criteria determine the sort of changes we're looking for:

  1. A change must be possible to implement without violating the strict properties listed in the prior section. Specifically, the ability of crates to have cross-Edition dependencies imposes restrictions on changes that would take effect across crate boundaries, e.g. the signatures of public APIs. However, we will occasionally discover that an Edition-related change that was once thought to be impossible actually turns out to be feasible, so hope is not lost if you're not sure if your idea meets this standard; propose it just to be safe!
     We strive to ensure that nearly all Edition-related changes can be applied to existing codebases automatically (via tools like cargo fix), in order to make upgrading to a new Edition as painless as possible.

  2. Even if an Edition could make any given change, that doesn't mean that it should. We're not looking for hugely-invasive changes or things that would fundamentally alter the character of the language. Please focus your proposals on things like fixing obvious bugs, changing annoying behavior, unblocking future feature development, and making the language easier and more consistent.

To spark your imagination, here's a real-world example. In the 2015 and 2018 Editions, iterating over a fixed-length array via [foo].into_iter() will yield references to the iterated elements; this is surprising because, on other types, calling .into_iter() produces an iterator that yields owned values rather than references. This limitation existed because older versions of Rust lacked the ability to implement traits for all possible fixed-length arrays in a generic way. Once Rust finally became able to express this, all Editions at last gained the ability to iterate over owned values in fixed-length arrays; however, in the specific case of [foo].into_iter(), altering the existing behavior would have broken lots of code in the wild. Therefore, we used the 2021 Edition to fix this inconsistency for the specific case of [foo].into_iter(), allowing us to address this long-standing issue while preserving Rust's stability guarantees.
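A minimal illustration of the fixed behavior (this snippet assumes it is compiled under the 2021 Edition or later; under 2015/2018 the same method call resolved to the slice impl and yielded references):

```rust
fn main() {
    let words = ["hello".to_string(), "world".to_string()];

    // Under the 2021 Edition, `into_iter()` on an array yields owned values:
    let owned: Vec<String> = words.into_iter().collect();
    assert_eq!(owned, vec!["hello".to_string(), "world".to_string()]);

    // Under the 2015/2018 Editions, the same method call resolved to the
    // slice impl and yielded `&String` references instead; iterating by
    // value there required `words.iter().cloned()` or similar workarounds.
    println!("ok");
}
```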

How to contribute

Just like other changes to Rust, Edition-related proposals follow the RFC process, as documented in the Rust RFCs repository. Please follow the process documented there, and please consider publicizing a draft of your RFC to collect preliminary feedback before officially submitting it, in order to expedite the RFC process once you've filed it for real! (And in addition to the venues mentioned in the prior link, please feel free to announce your pre-RFC to our Zulip channel.)

Please file your RFCs as soon as possible! Our goal is to release the 2024 Edition in the second half of 2024, which means we would like to get everything implemented (not only the features themselves, but also all the Edition-related migration tooling) by the end of May, which means that RFCs should be accepted by the end of February. And since RFCs take time to discuss and consider, we strongly encourage you to have your RFC filed by the end of December, or the first week of January at the very latest.

We hope to have periodic updates on the ongoing development of the 2024 Edition. In the meantime, if you have any questions or if you would like to help us make the new Edition a reality, we invite you to come chat in the #edition channel in the Rust Zulip.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: A new world of open extensions on Firefox for Android has arrived

do, 14/12/2023 - 19:20

Woo-hoo you did it! Hundreds of add-on developers heeded the call to make their desktop extensions compatible for today’s debut of a new open ecosystem of Firefox for Android extensions. More than 450 Firefox for Android extensions are now discoverable on the addons.mozilla.org (AMO) Android homepage. It’s a strong start to an exciting new frontier of mobile browser customization. Let’s see where this goes.

Are you a developer who hasn’t migrated your desktop extension to Firefox for Android yet? Here’s a good starting point for developing extensions for Firefox for Android.

If you’ve already embarked on the mobile extension journey and have questions/insights/feedback to offer as we continue to optimize the mobile development experience, we invite you to join the discussion about top APIs missing on Firefox for Android.

Have you found any Firefox for Android bugs? Do tell!

The post A new world of open extensions on Firefox for Android has arrived appeared first on Mozilla Add-ons Community Blog.

Categorieën: Mozilla-nl planet

Mozilla Performance Blog: New Sheriffing feature and significant updates to KPI reporting queries

wo, 13/12/2023 - 11:01

A year ago I shared how a Mozilla Performance Sheriff catches performance regressions, the entire workflow they go through, and the improvements that were coming. Since I joined the Performance Tools Team (formerly Performance Test) almost five years ago, a whole lot of improvements have been made and features have been added.

In this article, I want to focus on a special set of features that give the Performance Sheriffs more control over the Sheriffing Workflow (from when an alert is triggered and triaged to when the regression bug is filed and linked to the alert). We call them time-to-triage (from alert to triage) and time-to-bug (from alert to bug). They are the subject of our Sheriffing Team’s KPIs, the KPIs that measure the performance of the Performance Sheriffs team (I like puns).

The time-to-triage KPI measures the time from when an alert is triggered by a performance change to when it is triaged (essentially the first-time analysis). The target is at most 3 days, and at least 80% of the sheriffed alerts have to meet this deadline (put differently, up to 20% may miss it). However, our team does not work weekends, so those days have to be excluded. For example, if an alert was created on a Friday, a naive three-calendar-day window ends on Monday, whereas the three business days only expire on Wednesday; counted naively, we would effectively get a single working day to triage it. Every time something like this happened, we had to manually exclude those alerts from the old KPI report queries, which did not exclude weekends from those times. The new queries do this exclusion automatically.
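The weekend exclusion amounts to counting business days instead of calendar days. A hypothetical sketch of that calculation (not the actual report query, which runs in SQL):

```rust
// Given the weekday an alert fires (0 = Monday .. 6 = Sunday) and a
// business-day target, count the calendar days until the deadline expires.
// Illustrative helper, not Treeherder's actual query logic.
fn calendar_days_until_due(start_weekday: u32, business_days: u32) -> u32 {
    let mut day = start_weekday;
    let mut remaining = business_days;
    let mut elapsed = 0;
    while remaining > 0 {
        day = (day + 1) % 7;
        elapsed += 1;
        if day < 5 {
            // Monday..Friday consume a business day; Saturday and Sunday
            // pass as calendar days without counting toward the target.
            remaining -= 1;
        }
    }
    elapsed
}

fn main() {
    // An alert created on Friday (weekday 4) with a 3-business-day target
    // is due the following Wednesday: 5 calendar days later, not 3.
    assert_eq!(calendar_days_until_due(4, 3), 5);
    // Created on Monday, the same target is due Thursday (3 calendar days).
    assert_eq!(calendar_days_until_due(0, 3), 3);
    println!("ok");
}
```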

 

Triage Response Times (time-to-triage), Year To Date

Triage Response Times (New Query), Year To Date

Alerts Exceeding Triage Target, Year To Date

The same thing is true for an alert created on a weekend, where part of the alert-to-triage time falls on the weekend. In fact, the only alerts whose window cannot overlap a weekend are the ones created on Monday or Tuesday.

The time-to-bug KPI measures the time from when an alert is triggered by a performance change to when a bug is linked to the alert. The target is at most 5 days, and at least 80% of the valid regression alerts must meet this deadline (again, up to 20% may miss it). The only alerts whose window cannot overlap a weekend within this KPI are the ones created in the first hour of Monday morning, whose deadline expires in the last hour of Friday.

Regression Bug Response Times, Year To Date

Regression Bug Response Times (New Query), Year To Date

Regressions Exceeding Bug Target, Year To Date

In the images above, you can see a difference in the percentages for time-to-triage (86.9% with the old query vs. 97.9% with the new one) and time-to-bug (75.7% vs. 97%). This is not because the Sheriffing Team is suddenly doing a better job; they were performing at this level the whole time. It is because the feature we developed measures the percentages accurately by excluding the weekends from the calculated times. Going strictly by the percentages, the impact of this feature is significant, taking us from an average, maybe struggling, performance to a really good one. Of course, the inclusion of weekends in the KPI report was known for a while, but having the bigger picture and concrete metrics is more revealing.

The development of these time-to-triage/time-to-bug features is full-stack and involved:

  • Helping our manager’s Sheriffing report calculate the times more accurately (to whom I am grateful for supporting this initiative);
  • Modifying the performance_alert_summary database table to store due dates;
  • Implementing the accurate calculation in the backend as described above;
  • Showing in the UI a countdown until the alert goes overdue, which gives the Performance Sheriffs more control and helps them organize their work throughout the Sheriffing Workflow.

I haven’t mentioned the countdown feature yet. It is shown in the image below, right next to the status dropdown of the alert summary (top-right corner). It displays:

  • The type of due date that is in effect (Triage in this case);
  • The amount of time. When the time goes under 24 hours, the timer will switch to showing the hours left.

The alert will become triaged and the counter will switch from triage to bug when the first-time analysis is performed on it (star, assign, add tag, add note).

Alert with Triage due date status

 

Below is an example of a time-to-bug timer (the time left before the deadline to link the alert to a bug expires). By default the timer counter is green, but when it goes under 24 hours it turns orange.

Alert with Bug due date status

When the timer goes overdue, as shown in the image below, the counter icon becomes red and the “Overdue” status is shown.

Alert with Overdue status (this is for demo purposes only; the alert wasn’t overdue for real)

Lastly, after the alert is finally linked to a bug, the counter will turn into a green checkmark and the countdown status will be “Ready for acknowledge”.

Alert with Ready for acknowledge status

Now, instead of manually excluding the times inflated by the weekends, we have an automated feature to closely control the alert lifecycle and report the KPI percentages more accurately.

The development of this feature was a personal initiative, encouraged by our manager and by the whole team (without their support I couldn’t have done this). It is part of a wider initiative I support: improvements to the Performance Sheriffing Workflow. It improves the developer experience of working with performance regressions and helps the Performance Sheriffs be more efficient by improving their tools and automating their workflow as much as possible.

Categorieën: Mozilla-nl planet

Tiger Oakes: Takeaways from React Day Berlin & TestJS Summit 2023

wo, 13/12/2023 - 01:00
What I learned from a conference double feature.
Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Puppeteer Support for the Cross-Browser WebDriver BiDi Standard

di, 12/12/2023 - 17:14

We are pleased to share that Puppeteer now supports the next-generation, cross-browser WebDriver BiDi standard. This new protocol makes it easy for web developers to write automated tests that work across multiple browser engines.

How Do I Use Puppeteer With Firefox?

The WebDriver BiDi protocol is supported starting with Puppeteer v21.6.0. When calling puppeteer.launch pass in "firefox" as the product option, and "webDriverBiDi" as the protocol option:

```js
const browser = await puppeteer.launch({
  product: 'firefox',
  protocol: 'webDriverBiDi',
});
```

You can also use the "webDriverBiDi" protocol when testing in Chrome, reflecting the fact that WebDriver BiDi offers a single standard for modern cross-browser automation.

In the future we expect "webDriverBiDi" to become the default protocol when using Firefox in Puppeteer.

Doesn’t Puppeteer Already Support Firefox?

Puppeteer has had experimental support for Firefox based on a partial re-implementation of the proprietary Chrome DevTools Protocol (CDP). This approach had the advantage that it worked without significant changes to the existing Puppeteer code. However, the CDP implementation in Firefox is incomplete and has significant technical limitations. In addition, the CDP protocol itself is not designed to be cross-browser and undergoes frequent breaking changes, making it unsuitable as a long-term solution for cross-browser automation.

To overcome these problems, we’ve worked with the WebDriver Working Group at the W3C to create a standard automation protocol that meets the needs of modern browser automation clients: this is WebDriver BiDi. For more details on the protocol design and how it compares to the classic HTTP-based WebDriver protocol, see our earlier posts.

As the standardization process has progressed, the Puppeteer team has added a WebDriver BiDi backend in Puppeteer, and provided feedback on the specification to ensure that it meets the needs of Puppeteer users, and that the protocol design enables existing CDP-based tooling to easily transition to WebDriver BiDi. The result is a single protocol based on open standards that can drive both Chrome and Firefox in Puppeteer.

Are All Puppeteer Features Supported?

Not yet; WebDriver BiDi is still a work in progress, and doesn’t yet cover the full feature set of Puppeteer.

Compared to the Chrome+CDP implementation, there are some feature gaps, including support for accessing the cookie store, network request interception, some emulation features, and permissions. These features are actively being standardized and will be integrated as soon as they become available. For Firefox, the only missing feature compared to the Firefox+CDP implementation is cookie access. In addition, WebDriver BiDi already offers improvements, including better support for multi-process Firefox, which is essential for testing some websites. More information on the complete set of supported APIs can be found in the Puppeteer documentation, and as new WebDriver-BiDi features are enabled in Gecko we’ll publish details on the Firefox Developer Experience blog.

Nevertheless, we believe that the WebDriver-based Firefox support in Puppeteer has reached a level of quality which makes it suitable for many real automation scenarios. For example at Mozilla we have successfully ported our Puppeteer tests for pdf.js from Firefox+CDP to Firefox+WebDriver BiDi.

Is Firefox’s CDP Support Going Away?

We currently don’t have a specific timeline for removing CDP support. However, maintaining multiple protocols is not a good use of our resources, and we expect WebDriver BiDi to be the future of remote automation in Firefox. If you are using the CDP support outside of the context of Puppeteer, we’d love to hear from you (see below), so that we can understand your use cases, and help transition to WebDriver BiDi.

Where Can I Provide Feedback?

For any issues you experience when porting Puppeteer tests to BiDi, please open issues in the Puppeteer issue tracker, unless you can verify the bug is in the Firefox implementation, in which case please file a bug on Bugzilla.

If you are currently using CDP with Firefox, please join the #webdriver matrix channel so that we can discuss your use case and requirements, and help you solve any problems you encounter porting your code to WebDriver BiDi.

Update: The Puppeteer team have published “Harness the Power of WebDriver BiDi: Chrome and Firefox Automation with Puppeteer“.

The post Puppeteer Support for the Cross-Browser WebDriver BiDi Standard appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Cargo cache cleaning

ma, 11/12/2023 - 01:00

Cargo has recently gained an unstable feature on the nightly channel (starting with nightly-2023-11-17) to perform automatic cleaning of cache content within Cargo's home directory. This post explains what the feature does and how to try it out.

In short, we are asking people who use the nightly channel to enable this feature and report any issues you encounter on the Cargo issue tracker. To enable it, place the following in your Cargo config file (typically located in ~/.cargo/config.toml or %USERPROFILE%\.cargo\config.toml for Windows):

```toml
[unstable]
gc = true
```

Or set the CARGO_UNSTABLE_GC=true environment variable or use the -Zgc CLI flag to turn it on for individual commands.

We'd particularly like people who use unusual filesystems or environments to give it a try, since there are some parts of the implementation which are sensitive and need battle testing before we turn it on for everyone.

What is this feature?

Cargo keeps a variety of cached data within the Cargo home directory. This cache can grow unbounded and can get quite large (easily reaching many gigabytes). Community members have developed tools to manage this cache, such as cargo-cache, but cargo itself never exposed any ability to manage it.

This cache includes:

  • Registry index data, such as package dependency metadata from crates.io.
  • Compressed .crate files downloaded from a registry.
  • The uncompressed contents of those .crate files, which rustc uses to read the source and compile dependencies.
  • Clones of git repositories used by git dependencies.

The new garbage collection ("GC") feature adds tracking of this cache data so that cargo can automatically or manually remove unused files. It keeps an SQLite database which tracks the last time the various cache elements have been used. Every time you run a cargo command that reads or writes any of this cache data, it will update the database with a timestamp of when that data was last used.

What isn't yet included is cleaning of target directories, see Plan for the future.

Automatic cleaning

When you run cargo, once a day it will inspect the last-use cache tracker, and determine if any cache elements have not been used in a while. If they have not, then they will be automatically deleted. This happens with most commands that would normally perform significant work, like cargo build or cargo fetch.

The default is to delete data that can be locally recreated if it hasn't been used for 1 month, and to delete data that has to be re-downloaded after 3 months.
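That default retention policy can be sketched as a simple predicate. The enum, names, and epsilon-free thresholds below are illustrative only, not cargo's internal API:

```rust
const DAY_SECS: u64 = 24 * 60 * 60;

// Two broad classes of cache data, per the list above.
enum CacheKind {
    // e.g. extracted .crate contents, registry index caches.
    LocallyRecreatable,
    // e.g. compressed .crate files and git clones: removal forces a re-download.
    Downloaded,
}

// Decide whether a cache entry is old enough to delete, using the
// default thresholds of roughly 1 month and 3 months.
fn should_delete(kind: &CacheKind, secs_since_last_use: u64) -> bool {
    match kind {
        CacheKind::LocallyRecreatable => secs_since_last_use > 30 * DAY_SECS,
        CacheKind::Downloaded => secs_since_last_use > 90 * DAY_SECS,
    }
}

fn main() {
    // Extracted sources unused for 5 weeks are eligible for deletion...
    assert!(should_delete(&CacheKind::LocallyRecreatable, 35 * DAY_SECS));
    // ...but the downloaded .crate files are kept until the 3-month mark.
    assert!(!should_delete(&CacheKind::Downloaded, 35 * DAY_SECS));
    println!("ok");
}
```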

Automatic deletion is disabled if cargo is offline such as with --offline or --frozen to avoid deleting artifacts that may need to be used if you are offline for a long period of time.

The initial implementation has exposed a variety of configuration knobs to control how automatic cleaning works. However, it is unlikely we will expose too many low-level details when it is stabilized, so this may change in the future (see issue #13061). See the Automatic garbage collection section for more details on this configuration.

Manual cleaning

If you want to manually delete data from the cache, several options have been added under the cargo clean gc subcommand. This subcommand can be used to perform the normal automatic daily cleaning, or to specify different options on which data to remove. There are several options for specifying the age of data to delete (such as --max-download-age=3days) or specifying the maximum size of the cache (such as --max-download-size=1GiB). See the Manual garbage collection section or run cargo clean gc --help for more details on which options are supported.

This CLI design is only preliminary, and we are looking at determining what the final design will look like when it is stabilized, see issue #13060.

What to watch out for

After enabling the gc feature, just go about your normal business of using cargo. You should be able to observe the SQLite database stored in your cargo home directory at ~/.cargo/.global-cache.

After the first time you use cargo, it will populate the database tracking all the data that already exists in your cargo home directory. Then, after 1 month, cargo should start deleting old data, and after 3 months will delete even more data.

The end result is that after that period of time you should start to notice the home directory using less space overall.

You can also try out the cargo clean gc command and explore some of its options if you want to try to manually delete some data.

If you run into problems, you can disable the gc feature and cargo should return to its previous behavior. Please let us know on the issue tracker if this happens.

Request for feedback

We'd like to hear from you about your experience using this feature. Some of the things we are interested in are:

  • Have you run into any bugs, errors, issues, or confusing problems? Please file an issue over at https://github.com/rust-lang/cargo/issues/.
  • The first time that you use cargo with GC enabled, is there an unreasonably long delay? Cargo may need to scan your existing cache data once to detect what already exists from previous versions.
  • Do you notice unreasonable delays when it performs automatic cleaning once a day?
  • Do you have use cases where you need to do cleaning based on the size of the cache? If so, please share them at #13062.
  • If you think you would make use of manually deleting cache data, what are your use cases for doing that? Sharing them on #13060 about the CLI interface might help guide us on the overall design.
  • Does the default of deleting 3 month old data seem like a good balance for your use cases?

Or if you would prefer to share your experiences on Zulip, head over to the #t-cargo stream.

Design considerations and implementation details

(These sections are only for the intently curious among you.)

The implementation of this feature had to consider several constraints to try to ensure that it works in nearly all environments, and doesn't introduce a negative experience for users.

Performance

One big focus was to make sure that the performance of each invocation of cargo is not significantly impacted. Cargo needs to potentially save a large chunk of data every time it runs. The performance impact will heavily depend on the number of dependencies and your filesystem. Preliminary testing shows the impact can be anywhere from 0 to about 50ms.

In order to minimize the performance impact of actually deleting files, the automatic GC runs only once a day. This is intended to balance keeping the cache clean without impacting the performance of daily use.

Locking

Another big focus is dealing with cache locking. Previously, cargo had a single lock on the package cache, which cargo would hold while downloading registry data and performing dependency resolution. When cargo is actually running rustc, it previously did not hold a lock under the assumption that existing cache data will not be modified.

However, now that cargo can modify or delete existing cache data, it needs to be careful to coordinate with anything that might be reading from the cache, such as if multiple cargo commands are run simultaneously. To handle this, cargo now has two separate locks, which are used together to provide three separate locking states. There is a shared read lock, which allows multiple builds to run in parallel and read from the cache. There is a write lock held while downloading registry data, which is independent of the read lock which allows concurrent builds to still run while new packages are downloaded. The third state is a write lock that prevents either of the two previous locks from being held, and ensures exclusive access while cleaning the cache.

Versions of cargo before 1.75 don't know about the exclusive write lock. We are hoping that in practice it will be rare to concurrently run old and new cargo versions, and that it is unlikely that the automatic GC will need to delete data that is concurrently in use by an older version.
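The three locking states can be modeled with two read-write locks. Below is a toy model of the scheme described above; the names and structure are illustrative, not cargo's actual types:

```rust
use std::sync::RwLock;

// Two locks combine into three states: shared read, download, exclusive.
struct PackageCacheLocks {
    // Held shared by builds reading the cache; held exclusively by GC.
    cache: RwLock<()>,
    // Held exclusively while downloading; GC also takes it so cleaning
    // cannot race with an in-progress download.
    download: RwLock<()>,
}

impl PackageCacheLocks {
    fn new() -> Self {
        Self { cache: RwLock::new(()), download: RwLock::new(()) }
    }

    // State 1: shared read lock - many builds run in parallel.
    fn read_cache(&self) {
        let _read = self.cache.read().unwrap();
        // ... read extracted sources while other builds do the same ...
    }

    // State 2: download lock - one downloader, concurrent with readers.
    fn download_packages(&self) {
        let _dl = self.download.write().unwrap();
        let _read = self.cache.read().unwrap();
        // ... fetch .crate files; builds holding read locks keep running ...
    }

    // State 3: exclusive - GC blocks both readers and downloaders.
    fn clean_cache(&self) {
        let _dl = self.download.write().unwrap();
        let _excl = self.cache.write().unwrap();
        // ... safe to delete files: nothing else is touching the cache ...
    }
}

fn main() {
    let locks = PackageCacheLocks::new();
    locks.read_cache();
    locks.download_packages();
    locks.clean_cache();
    println!("ok");
}
```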

Error handling and filesystems

Because we do not want problems with GC from disrupting users, the implementation silently skips the GC if it is unable to acquire an exclusive lock on the package cache. Similarly, when cargo saves the timestamp data on every command, it will silently ignore errors if it is unable to open the database, such as if it is on a read-only filesystem, or it is unable to acquire a write lock. This may result in the last-use timestamps becoming stale, but hopefully this should not impact most usage scenarios. For locking, we are paying special attention to scenarios such as Docker container mounts and network filesystems with questionable locking support.

Backwards compatibility

Since the cache is used by any version of cargo, we have to pay close attention to forwards and backwards compatibility. We benefit from SQLite's particularly stable on-disk data format, which has not changed since 2004. Cargo has support to do schema migrations within the database that stay backwards compatible.

Plan for the future

A major aspect of this endeavor is to gain experience with using SQLite in a wide variety of environments, with a plan to extend its usage in several other parts of cargo.

Registry index metadata

One place where we are looking to introduce SQLite is for the registry index cache. When cargo downloads registry index data, it stores it in a custom-designed binary file format to improve lookup performance. However, this index cache uses many small files, which may not perform well on some filesystems.

Additionally, the index cache grows without bound. Currently the automatic cache cleaning will only delete an entire index cache if the index itself hasn't been used, which is rarely the case for crates.io. We may also need to consider finer-grained timestamp tracking or some mechanism to periodically purge this data.

Target directory change tracking and cleaning

Another place we are looking to introduce SQLite is for managing the target directory. In cargo's target directory, cargo keeps track of information about each crate that has been built with what is called a fingerprint. These fingerprints help cargo know if it needs to recompile something. Each artifact is tracked with a set of 4 files, using a mixture of custom formats.

We are looking to replace this system with SQLite which will hopefully bring about several improvements. A major focus will be to provide cleaning of stale data in the target directory, which tends to use substantial amount of disk space. Additionally we are looking to implement other improvements, such as more accurate fingerprint tracking, provide information about why cargo thinks something needed to be recompiled, and to hopefully improve performance. This will be important for the script feature, which uses a global cache for build artifacts, and the future implementation of a globally-shared build cache.

Categorieën: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla and Allies Say No to Surveillance Blank Check in NDAA, Yes to Strong Surveillance Protections

vr, 08/12/2023 - 16:04

Today Mozilla, along with a group of builders and supporters of innovation, sent a letter calling on the US House of Representatives to pass strong surveillance reform proposals such as the Government Surveillance Reform Act (GSRA) and the Protect Liberty and End Warrantless Surveillance Act (PLEWSA).

In line with our previous call for reform, our letter also highlighted the need for codification of the scope of surveillance proposed in the Administration’s own Executive Order on “Enhancing Safeguards for United States Signals Intelligence Activities” and opposed a months-long reauthorization of Section 702 that would effectively greenlight surveillance abuses.

Both GSRA and PLEWSA take critical steps forward in protecting Americans from overbroad surveillance, such as imposing warrant requirements for queries of US person data and banning warrantless purchases of sensitive information on Americans from data brokers. We do, however, encourage Congress to examine how it can further strengthen PLEWSA.

Unfortunately, House and Senate Intelligence Committees are also considering proposals of their own, proposals that would entrench the surveillance status quo.

Those wishing to get involved can add their names to our letter and do their part to engage Congress on this important issue.

You can find the letter HERE.

The post Mozilla and Allies Say No to Surveillance Blank Check in NDAA, Yes to Strong Surveillance Protections appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: What’s up with SUMO – Q4 2023

vr, 08/12/2023 - 08:13

Hi everybody,

The last part of our quarterly update in 2023 comes early with this post. That means we won’t have the data from December just yet (but we’ll make sure to update the post later). Lots of updates after the last quarter, so let’s just dive in!

Welcome note and shout-outs from Q4

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news
  • Kiki came back from maternity leave and Sarto bid her farewell, both in this quarter.
  • We have a new contributor policy around the use of generative AI tools. This was one of the things that Sarto initiated back then, so I’d like to give the credit to her. Please take some time to read and familiarize yourself with the policy.
  • Spanish contributors are pushing really hard to help localize the in-product and top articles for Firefox Desktop. I’m so proud that, at the moment, 57.65% of Firefox Desktop in-product articles have been translated & updated in Spanish (compared to 11.8% when we started) and 80% of the top 50 articles are localized and updated in Spanish. Huge props to those whom I mentioned in the shout-outs section above.
  • We’ve got new locale leaders for Catalan and Indonesian (as I mentioned above). Please join me in congratulating Handi S & Carlos Tomás on their new roles!
  • The Customer Experience team has officially moved out of the Marketing org and into the Strategy and Operations org led by Suba Vasudevan (more about that in our community meeting in Dec).
  • We’ve migrated the Pocket support platform (formerly on Help Scout) to SUMO. That means Pocket help articles are now available on Mozilla Support, and people looking for Pocket premium support can also ask a question through SUMO.
  • Firefox accounts transitioned to Mozilla accounts in early November this year. Read this article to learn more about the background for this transition.
  • We did a SUMO sprint for the Review checker feature with the release of Firefox 119, even though we couldn’t find lots of chatter about it.
  • Please check out this thread to learn more about recent platform fixes and improvements (including the use of emoji!)
  • We’ve also updated the Kitsune documentation and moved it to a GitHub page recently. Check out this thread to learn more.
Catch up
  • Watch the monthly community call if you haven’t already. Learn more about what’s new in October, November, and December! Reminder: don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you’re too shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting. First time joining the call? Check out this article to learn how to join.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to the Firefox Daily Digest to get daily updates about Firefox from across different platforms.

Check out the SUMO Engineering Board to see what the platform team is currently working on, and submit a report through Bugzilla if you want to file a bug or request an improvement.

Community stats KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month     Page views  Vs previous month
Oct 2023  7,061,331   +9.36%
Nov 2023  6,502,248   -7.92%
Dec 2023  TBD         TBD

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale  Oct 2023 pageviews (*)  Nov 2023 pageviews (*)  Dec 2023 pageviews (*)  Localization progress (per Dec 7) (**)
de      10.66%  10.97%  TBD  93%
fr      7.10%   7.23%   TBD  80%
zh-CN   6.84%   6.81%   TBD  92%
es      5.59%   5.49%   TBD  27%
ja      5.10%   4.72%   TBD  33%
ru      3.67%   3.8%    TBD  88%
pt-BR   3.30%   3.11%   TBD  43%
it      2.52%   2.48%   TBD  96%
zh-TW   2.42%   2.61%   TBD  2%
pl      2.13%   2.11%   TBD  83%

* Locale pageviews is the overall pageviews from the given locale (KB and other pages)
** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month     Total questions  Answer rate within 72 hrs  Solved rate within 72 hrs  Forum helpfulness
Oct 2023  3,897   66.33%  10.01%  59.68%
Nov 2023  2,660   64.77%  9.81%   65.74%
Dec 2023  TBD     TBD     TBD     TBD

Top 5 forum contributors in the last 90 days: 

Social Support

Month     Total tweets  Total moderation by contributors  Total reply by contributors  Response conversion rate
Oct 2023  311   209  132  63.16%
Nov 2023  245   137  87   63.50%
Dec 2023  TBD   TBD  TBD  TBD

Top 5 Social Support contributors in the past 3 months: 

  1. Tim Maks 
  2. Wim Benes
  3. Daniel B
  4. Philipp T
  5. Pierre Mozinet
Play Store Support

Firefox for Android only

Month     Total reviews  Total conv interacted by contributors  Total conv replied by contributors
Oct 2023  6,334  45   18
Nov 2023  6,231  281  75
Dec 2023  TBD    TBD  TBD

Top 5 Play Store contributors in the past 3 months: 

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

 

Categorieën: Mozilla-nl planet

Niko Matsakis: Being Rusty: Discovering Rust's design axioms

do, 07/12/2023 - 14:46

To your average Joe, being “rusty” is not seen as a good thing.1 But readers of this blog know that being Rusty – with a capital R! – is, of course, something completely different! So what is it that makes Rust Rust? Our slogans articulate key parts of it, like fearless concurrency, stability without stagnation, or the epic Hack without fear. And there is of course Lindsey Kuper’s epic haiku: “A systems language / pursuing the trifecta: / fast, concurrent, safe”. But I feel like we’re still missing a unified set of axioms that we can refer back to over time and use to guide us as we make decisions. Some of you will remember the Rustacean Principles, which was my first attempt at this. I’ve been dissatisfied with them for a couple of reasons, so I decided to try again. The structure is really different, so I’m calling it Rust’s design axioms. This post documents the current state – I’m quite a bit happier with it! But it’s not quite there yet. So I’ve also got a link to a repository where I’m hoping people can help improve them by opening issues with examples, counter-examples, or other thoughts.

Axioms capture the principles you use in your decision-making process

What I’ve noticed is that when I am trying to make some decision – whether it’s a question of language design or something else – I am implicitly bringing assumptions, intuitions, and hypotheses to bear. Oftentimes, those intuitions fly by very quickly in my mind, and I barely even notice them. Ah yeah, we could do X, but if we did that, it would mean Y, and I don’t want that, scratch that idea. I’m slowly learning to be attentive to these moments – whatever Y is right there, it’s related to one of my design axioms — something I’m implicitly using to shape my thinking.

I’ve found that if I can capture those axioms and write them out, they can help me down the line when I’m facing future decisions. It can also help to bring alignment to a group of people by making those intuitions explicit (and giving people a chance to refute or sharpen them). Obviously I’m not the first to observe this. I’ve found Amazon’s practice of using tenets to be quite useful2, for example, and I’ve also been inspired by things I’ve read online about the importance of making your hypotheses explicit.3

In proof systems, your axioms are the things that you assert to be true and take on faith, and from which the rest of your argument follows. I choose to call these Rust’s design axioms because that seemed like exactly what I was going for. What are the starting assumptions that, followed to their conclusion, lead you to Rust? The more clearly we can articulate those assumptions, the better we’ll be able to ensure that we continue to follow them as we evolve Rust to meet future needs.

Axioms have a hypothesis and a consequence

I’ve structured the axioms in a particular way. They begin by stating the axiom itself – the core belief that we assert to be true. That is followed by a consequence, which is something that we do as a result of that core belief. To show you what I mean, here is one of the Rust design axioms I’ve drafted:

Rust users want to surface problems as early as possible, and so Rust is designed to be reliable. We make choices that help surface bugs earlier. We don’t make guesses about what our users meant to do, we let them tell us, and we endeavor to make the meaning of code transparent to its reader. And we always, always guarantee memory safety and data-race freedom in safe Rust code.

Axioms have an ordering and earlier things take priority

Each axiom is useful on its own, but where things become interesting is when they come into conflict. Consider reliability: that is a core axiom of Rust, no doubt, but is it the most important? I would argue it is not. If it were, we wouldn’t permit unsafe code, or at least not without a safety proof. I think our core axiom is actually that Rust is meant to be used, and used for building a particular kind of program. I articulated it like this:

Rust is meant to empower everyone to build reliable and efficient software, so above all else, Rust needs to be accessible to a broad audience. We avoid designs that will be too complex to be used in practice. We build supportive tooling that not only points out potential mistakes but helps users understand and fix them.

When it comes to safety, I think Rust’s approach is eminently practical. We’ve designed a safe type system that we believe covers 90-95% of what people need to do, and we are always working to expand that scope. To get that last 5-10%, we fall back to unsafe code. Is this as safe and reliable as it could be? No. That would require 100% proofs of correctness. There are systems that do that, but they are maintained by a small handful of experts, and that idea – that systems programming is just for “wizards” – is exactly what we are trying to get away from.

To express this in our axioms, we put accessible as the top-most axiom. It defines the mission overall. But we put reliability as the second in the list, since that takes precedence over everything else.

The design axioms I really like

Without further ado, here is my current list of design axioms. Well, part of it. These are the axioms that I feel pretty good about. The ordering also feels right to me.

We believe that…

  • Rust is meant to empower everyone to build reliable and efficient software, so above all else, Rust needs to be accessible to a broad audience. We avoid designs that will be too complex to be used in practice. We build supportive tooling that not only points out potential mistakes but helps users understand and fix them.
  • Rust users want to surface problems as early as possible, and so Rust is designed to be reliable. We make choices that help surface bugs earlier. We don’t make guesses about what our users meant to do, we let them tell us, and we endeavor to make the meaning of code transparent to its reader. And we always, always guarantee memory safety and data-race freedom in safe Rust code.
  • Rust users are just as obsessed with quality as we are, and so Rust is extensible. We empower our users to build their own abstractions. We prefer to let people build what they need than to try (and fail) to give them everything ourselves.
  • Systems programmers need to know what is happening and where, and so system details and especially performance costs in Rust are transparent and tunable. When building systems, it’s often important to know what’s going on underneath the abstractions. Abstractions should still leave the programmer feeling like they’re in control of the underlying system, such as by making it easy to notice (or avoid) certain types of operations.

…where earlier things take precedence.

The design axioms that are still a work-in-progress

These axioms are things I am less sure of. It’s not that I don’t think they are true. It’s that I don’t know yet if they’re worded correctly. Maybe they should be combined together? And where, exactly, do they fall in the ordering?

  • Rust users want to focus on solving their problem, not the fiddly details, so Rust is productive. We favor APIs where the most convenient and high-level option is also the most efficient one. We support portability across operating systems and execution environments by default. We aren’t explicit for the sake of being explicit, but rather to surface details we believe are needed.
  • N✕M is bigger than N+M, and so we design for composability and orthogonality. We are looking for features that tackle independent problems and build on one another, giving rise to N✕M possibilities.
  • It’s nicer to use one language than two, so Rust is versatile. Rust can’t be the best at everything, but we can make it decent for just about anything, whether that’s low-level C code or high-level scripting.

Of these, I like the first one best. Also, it follows the axiom structure better, because it starts with a hypothesis about Rust users and what they want. The other two are a bit older and I hadn’t adopted that convention yet.

Help shape the axioms!

My ultimate goal is to author an RFC endorsing these axioms for Rust. But I need help to get there. Are these the right axioms? Am I missing things? Should we change the ordering?

I’d love to know what you think! To aid in collaboration, I’ve created a nikomatsakis/rust-design-axioms github repository. It hosts the current state of the axioms and also has suggested ways to contribute.

I’ve already opened issues for some of the things I am wondering about, such as:

  • nikomatsakis/rust-design-axioms#1: Maybe we need a “performant” axiom? Right now, the idea of “zero-cost abstractions” and “the default thing is also the most efficient one” feels a bit smeared across “transparent and tunable” and “productive”.
  • nikomatsakis/rust-design-axioms#2: Is “portability” sufficiently important to pull out from “productivity” into its own axiom?
  • nikomatsakis/rust-design-axioms#3: Are “versatility” and “orthogonality” really expressing something different from “productivity”?

Check it out!

  1. I have a Google alert for “Rust” and I cannot tell you how often it seems that some sports team or another shakes off Rust. I’d never heard that expression before signing up for this Google alert. ↩︎

  2. I’m perhaps a bit unusual in my love for things like Amazon’s Leadership Principles. I can totally understand why, to many people, they seem like corporate nonsense. But if there’s one theme I’ve seen consistently over my time working on Rust, it’s that process and structure are essential. Take a look at the “People Systems” keynote that Aaron, Ashley, and I gave at RustConf 2018 and you will see that theme running throughout. So many of Rust’s greatest practices – things like the teams or RFCs or public, rfcbot-based decision making – are an attempt to take some kind of informal, unstructured process and give it shape. ↩︎

  3. I really like this Learning for Action page, which I admit I found just by googling for “strategy articulate a hypotheses”. I’m less into this super corporate-sounding LinkedIn post, but I have to admit I think it’s right on the money. ↩︎

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.74.1

do, 07/12/2023 - 01:00

The Rust team has published a new point release of Rust, 1.74.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.74.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.74.1

1.74.1 resolves a few regressions introduced in 1.74.0:

Contributors to 1.74.1

Many people came together to create Rust 1.74.1. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

Mozilla Security Blog: Mozilla VPN Security Audit 2023

wo, 06/12/2023 - 18:00

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt6 App for macOS
  • Mozilla VPN Qt6 App for Linux
  • Mozilla VPN Qt6 App for Windows
  • Mozilla VPN Qt6 App for iOS
  • Mozilla VPN Qt6 App for Android

Here’s a summary of the items discovered within this security audit that the auditors rated as medium or higher severity:

  • FVP-03-003: DoS via serialized intent 
      • Data received via intents within the affected activity should be validated to prevent the Android app from exposing certain activities to third-party apps.
      • There was a risk that a malicious application could leverage this weakness to crash the app at any time.
      • This risk was addressed by Mozilla and confirmed by Cure53.
  • FVP-03-008: Keychain access level leaks WG private key to iCloud 
      • Cure53 confirmed that this risk has been addressed due to an extra layer of encryption, which protects the Keychain specifically with a key from the device’s secure enclave.
  • FVP-03-009: Lack of access controls on daemon socket
      • Access controls needed to be implemented to guarantee that the user sending commands to the daemon is permitted to initiate the intended action.
      • This risk has been addressed by Mozilla and confirmed by Cure53.
  • FVP-03-010: VPN leak via captive portal detection 
      • Cure53 advised that the captive portal detection feature be turned off by default to prevent an opportunity for IP leakage when using maliciously set up WiFi hotspots.
      • Mozilla addressed the risk by no longer pinging for a captive portal outside of the VPN tunnel.
  • FVP-03-011: Lack of local TCP server access controls
      • The VPN client exposes a local TCP interface running on port 8754, which is bound to localhost. Users on localhost can issue a request to the port and disable the VPN.
      • Mozilla addressed this risk as recommended by Cure53.
  • FVP-03-012: Rogue extension can disable VPN using mozillavpnnp (High)
      • mozillavpnnp does not sufficiently restrict the application caller.
      • Mozilla addressed this risk as recommended by Cure53.
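As an illustration of the surface described in FVP-03-011, the local control interface is an ordinary TCP listener, so its presence can be observed with standard tooling. Only the port number (8754) comes from the report; the request protocol itself is not documented here.

```shell
# Check whether anything is listening on the localhost port named in
# FVP-03-011; prints the listener line if present, a note otherwise.
(ss -ltn 2>/dev/null || netstat -an) | grep 8754 \
  || echo "nothing listening on port 8754"
```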

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

 

The post Mozilla VPN Security Audit 2023 appeared first on Mozilla Security Blog.

Categorieën: Mozilla-nl planet

Mozilla Privacy Blog: Mozilla Asks US Supreme Court to Support Responsible Content Moderation

wo, 06/12/2023 - 01:05

Today Mozilla Corporation joined an amicus brief in a pair of important Supreme Court cases. The cases consider Texas and Florida laws that prohibit social media platforms from removing hateful and abusive content. If upheld, these laws would make content moderation impossible and would make the internet a much less safe place for all of us. Mozilla urges the Supreme Court to find them unconstitutional.

The Texas law, known as H.B. 20, would prohibit large social media sites from blocking, removing, or demonetizing content based on viewpoint. While it provides an exception for illegal speech, this still means that platforms would be forced to host a huge range of legal but harmful content, such as outright racism or Holocaust denial. It would mandate, for example, that a page devoted to South African history must tolerate pro-Apartheid comments, or that an online community devoted to religious practice allow comments mocking religion. It would condemn all social media to rampant trolling and abuse.

Mozilla has joined a brief filed by Internet Works and other companies including Tumblr and Pinterest. The brief sets out how content moderation works in practice, and how it can vary widely depending on the goals and community of each platform. It explains how content moderation can promote speech and free association by allowing people to choose and build online communities. In Mozilla’s own social media products, our goal is to moderate in favor of a healthy community. This goal is central to our mission, which underscores our commitment to “an internet that promotes civil discourse, human dignity, and individual expression” and “that elevates critical thinking, reasoned argument, shared knowledge, and verifiable facts.”

The laws under consideration by the Court do not serve speech, but would instead destroy online communities that rely on healthy moderation. Mozilla is standing with the community and allies to call for a better future online.

The post Mozilla Asks US Supreme Court to Support Responsible Content Moderation appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Pagina's