
The journey to Roe and after – a Pocket Collection unveils the stories behind Slate’s 7th season of Slow Burn

Mozilla Blog - Fri, 08/07/2022 - 22:19

With the recent overturn of Roe v. Wade, many of us can’t help but wonder: How did we get here? It didn’t happen overnight — no, it was more of a slow burn.

Just in time for the seventh season of Slate’s Slow Burn, host and executive editor Susan Matthews explores the path to Roe — a time when more Republicans than Democrats supported abortion rights. Her exploration leads her to the forgotten story of the first woman to be convicted of manslaughter for having an abortion, the stories of the unlikely Catholic power couple who helped ignite the pro-life movement and a rookie Supreme Court justice who got assigned the opinion of a lifetime.

We chatted with Matthews to learn more about the stories behind the podcast, their importance especially in this moment and what she’s reading when she’s not reporting.

Slate has done some important reporting on Roe v. Wade and its end. As someone who has played a major part in this reporting, can you give a snapshot of what listeners can expect from this new season of Slow Burn?

One of the things that strikes me the most about this moment is that abortion feels like one of the most “stuck” topics I can think of for Americans right now. That reality was part of why I wanted to go back in time as we awaited this decision — it’s really hard to imagine the conversation on abortion being less stuck than it is now. But in the early 1970s, things were actually changing really rapidly. Abortion was being talked about openly and all over the country for the first time, and in Slow Burn: Roe v. Wade we’re telling some of those specific stories — including the story of the first woman convicted of manslaughter for getting an abortion, the Catholic power couple who jump-started the pro-life movement, the story of a women-backed lawsuit in Connecticut that influenced Roe, and then, of course, the story of the decision itself. 

I tried very hard to approach the storytelling from the perspective of realizing that the people involved in those stories had no idea how the abortion debate in America would turn out, and that helped me follow the inherent surprise of each story. So I guess what I would say is listeners can expect to learn a lot, and I hope they can feel like this is an opportunity to engage in this topic that I hope feels a little less weighed down by the politics of the moment (though there are resonances, to be sure!).

Can you tell us how this season of Slow Burn came to life and what makes it unique?

I’ve edited medical stories, jurisprudence stories and personal essays for a long time. So I am very familiar with the essay that makes the case, rooted in personal experience, for why abortion access is important. I agree that abortion access is important! But what I wanted to do with this podcast was dig into the history of abortion in America from a perspective that tried to investigate what happened and how we got to where we are now, rather than using [personal] narratives to make a simple argument about abortion. When I reported each story, I tried to include as much context as possible — and a lot was different then, more on that in a minute! — but also, in every story, I tried to really drill down into who these people were, what was motivating them to act, what worried them, what inspired them, etc. I think the thrill of Slow Burn is that each season kind of shows the listener that each of these deeply important historical moments were really just things that real people had a hand in — real people were driving the action, real people were reacting to things, real people were having feelings about them. So I hope that what makes this show unique is that we’re digging into the lives of those real people, in all of their complications.

Did anything surprise you in the process of putting this podcast together? Was there anything in your research that stood out to you most?

There are two things: The first is that abortion did not used to be partisan in the way it is now, at all. In 1972, more Republicans than Democrats supported abortion access. We dig into why that was, and we also start to answer the question of how that changed. The other thing that really surprised me was how haphazard a lot of the change was. For example, when New York liberalized its abortion law in 1970 and didn’t include a residency requirement, all of a sudden tons of women all over the country started coming to the state for abortions, and that actually had really severe ramifications that I think, when considering the context, also make sense.

Another example of this is just that no one — not even the justices — knew that Roe v. Wade was going to be the case that determined abortion rights in America when it first came to the court. It’s finding out all of the little things like that that allows listeners to just think about how many different paths we could have taken outside of the timeline we are actually on.

How has the Pocket collection you’ve created for Slow Burn: Roe v. Wade let fans go deeper into these stories?

One of the most difficult stories we reported this season was the story at the center of Episode 2, which is about the Willkes. The Willkes were a Catholic couple who really helped launch the pro-life movement — they’re perfect fodder for a podcast because we have a lot of audio of them talking. But there was so much news coverage about them as a couple in the decades after the story we’re telling in the show, and I relied on so many other stories to understand them, that it was really gratifying to be able to put all of those resources somewhere for other people to see. I was really invested in portraying the Willkes fairly and I think that building out that story in particular with supporting resources helps listeners get a real understanding of who they were and the influence they had on the pro-life movement over subsequent decades.

How does this Pocket collection support the storytelling in your podcast? Is the content in your collection different than what is shared in Slow Burn: Roe v. Wade?

I would say that our story focuses really specifically on the years leading up to Roe — so really, 1970, 1971, and 1972 — but that obviously a ton has happened with abortion and abortion politics in particular since. And the collection helped us give people a window into the rest of that story, which certainly helps explain how we ended up where we are now.

What articles and videos are in your Pocket waiting to be read/watched right now?

I am dying to read this story from The New York Times about Amber Rose.  The other things in my queue are basically stories that I missed or didn’t have time to read when I was making the show and saved for later. Those include Rebecca Traister’s profile of Dianne Feinstein, the Times op-ed about marrying the wrong person that blew up the internet that I have yet to read, and Margaret Talbot’s long read on Amy Coney Barrett (for obvious reasons).

At Pocket, we’re all about helping people carve out time and space to dig into the stories that matter. Where and when do you catch up on the long reads and podcast episodes you’re excited for?

This is quite nerdy, but I’m a member of the Brooklyn Botanical Gardens and so I try to take long walks on the weekends around there with the podcasts I want to catch up on (Prospect Park works for this purpose too!). Commuting time of any kind — subway to work, car rides — is big for podcasts for me too. I also really like to lounge in bed on Sunday mornings and scroll through whatever I missed during the week.

The post The journey to Roe and after – a Pocket Collection unveils the stories behind Slate’s 7th season of Slow Burn appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Killian Wells, CEO of Fragrance House Xyrena, Shares The Joy He Finds In His Corner Of The Internet

Mozilla Blog - Fri, 01/07/2022 - 06:00

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner of the Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later and what sites and forums shaped them.

With Pride celebrations taking place throughout June, we’re featuring LGBTQ+ leaders this month as part of our My Corner of the Internet series. In the last installment, Killian Wells, CEO of the award-winning fragrance house Xyrena, talks about his love of nerd culture, launching a beauty brand website and Pocketing his favorite blogs. 

What is your favorite corner of the internet?

I’m on Instagram a lot and love throwback accounts like @insta80s90s, @80_deco, @popculturedmemes, and @onlyninetieskidsknow.

What is an internet deep-dive that you can’t wait to jump back into?

A few times a year I fall down an internet rabbit hole of searching for synths/patches used on popular songs, particularly by my favorite producers, like Max Martin and Timbaland. 

What is one tab on your browser you always regret closing?

As a film buff, I’m on IMDb a few times a day, especially when I’m watching a movie or show and need to know where else I’ve seen an actor that looks familiar. I’m on it so often that I’ve developed the very useless talent of being able to name the year a movie was released and its distributor.

Who is an LGBTQ+ person with a very online presence that is a role model for you?

I’m a huge fan of Jeremy Scott. I love his design aesthetic so much! I also saw his documentary a few years ago and really relate to his story in many ways. 

What can you not stop talking about on the internet right now?

We just launched a brand-new site for Xyrena, so I’m really excited about that. I’m pretty obsessed with the retro design.

What was the first online community that you engaged with?

I’m a big pop culture nerd and collect Funko Pops (the Ad Icons are my favorite) so as a teenager I connected with other funatics. My first business was actually selling Wacky Wobblers online and I got the chance to meet Funko’s founder, Mike Becker, and tour one of their original headquarters. 

What articles and videos are in your Pocket waiting to be read/watched right now?

I keep my favorite blogs in my Pocket.

If you could create your own corner of the internet, what would it look like?

It already exists at 

Killian Wells is an Austin-based pop music artist/songwriter/producer turned perfumer and CEO of the award-winning fragrance house Xyrena. Dubbed ‘the bad boy perfumer’ and the ‘Damien Hirst of the perfume world’ by press and critics, Wells’ work is heavily influenced by pop culture from the 80s, 90s, and Y2K. Follow him on Instagram @KillianWells


The post Killian Wells, CEO of Fragrance House Xyrena, Shares The Joy He Finds In His Corner Of The Internet appeared first on The Mozilla Blog.


A Pocket collection for your wellness journey, as curated by the team behind ‘The Science of Happiness’ podcast

Mozilla Blog - Thu, 30/06/2022 - 19:24

Want to live a more fulfilling life? The internet gives plenty of advice. But finding guidance backed by research can get tricky.

That’s where The Science of Happiness podcast comes in. Co-produced by PRX and UC Berkeley’s Greater Good Science Center, the popular podcast explores science-backed strategies to cultivate a happier life. Its new series, Happiness Break, guides listeners through a practice they can follow for a few minutes during their day. 

Podcast host and psychologist Dacher Keltner is curating Pocket reading lists in hopes of encouraging listeners to go deeper on subjects like fear of failure, gratitude and optimism. We chatted with him about what motivated the Happiness Break, why he thinks wellness audio content has become so popular and how he makes time for his own reading list.

For people unfamiliar with your award-winning podcast, The Science of Happiness, can you tell us a little bit about your work and how you got to the audio space? 

In teaching The Science of Happiness in university and other settings for 25 years, it became really clear to us here at UC Berkeley how hungry people are for actionable knowledge about meaningful living. One of the best ways in which we can dive into that knowledge is through conversation. That’s how people have been learning about happiness for millennia — just telling stories, listening and being with other people. And so The Science of Happiness podcast emerged out of that sense — that is, if we bring really interesting, diverse voices to our show who are different ages and have different perspectives, and they experiment with a research-backed practice, and we complement that with the latest science — listeners will be really interested.

How did the new series Happiness Break come to be and how does it differ from The Science of Happiness? Can you offer a quick peek of what listeners can expect?

I was brought in to teach The Science of Happiness to medical residents at a hospital, where there tends to be a lot of stress because of the pandemic. This young resident told me, “You know, I love this, but I only have a few minutes each day. I’d love something that I could just hear on my phone as I’m making my way to my next patient.”

That idea just kept returning to the producers at The Science of Happiness and myself. People are on the move. They have these little moments in the day: Maybe they’re waiting to pick up their kids, or waiting for somebody for a meeting, or just sitting outside. They would like this content, but they’d like it to be quick or brief. So we decided to create the Happiness Break, [which guides listeners to] a practice for a few minutes. It’s tailor-made for our busy lives, and it offers what we’ve been delivering in The Science of Happiness podcast in a quick, usable form. 

There is an excellent roster of guest hosts in this new series! With so many fantastic wellness experts and strategists, how did you go about selecting them?

We definitely have our roster of trusted voices – people like Kristin Neff, who is the pioneer in self-compassion literature – but what has been true of The Science of Happiness podcast, is we’ve really been trying to diversify our offerings to include other cultural traditions [relevant to more meaningful living]. We’re also building on practices and ideas about how we find optimistic paths in the climate crisis. We hope to broaden listeners’ appreciation of happiness in this way.

Podcasts and audio-based apps centered on well-being have become quite popular in recent years. Why do you think there is such an appetite for audio guidance around wellness?

Some of the oldest questions that you find in the written record, in spiritual and ethical traditions, in novels, paintings and music are: What does it mean to be alive? How can I find meaning? How can I find happiness? This is a deep human interest, to find happiness. 

The audio space, apps, podcasts and courses are building upon that human tendency. They’re tailored in many ways to our specific cultural moment. There could be a young person working out and they want to listen to something, or they’re commuting, or they’re looking for 20 minutes during a lunch break to get outside and find a little bit of meaning. The podcasts, apps and the courses that you find really fit those interests. I also think they’ve been helpful to people during the pandemic and these hard times.

There’s a lot of background research and preparation that go into producing each episode, and you included articles you and your team read in the Pocket collection you created for Happiness Break. What kind of insights can listeners expect from this collection? 

One of the things that we hope to cultivate is a deeper search for knowledge about the topics that we cover, like gratitude, awe, kindness and handling stress. So we’ve curated little pockets of information that guide our listeners on that journey. 

So if we’re covering immersion in nature, they’ll learn more from articles about the benefits of being in nature. And then there’s links to other podcasts that we respect and personal stories of people that show listeners what we’ve built our podcast around. We are convinced, like great predecessors like William James, that personal stories can tell as much about a phenomenon like happiness as any scientific finding. 

With Happiness Break, listeners have the opportunity to be guided through research-based strategies for a happier, more meaningful life. As a psychologist, why do you think it’s important to explain the science and research behind why these practices actually work?

I think there are several reasons why. The first reason is, people like science, and they love to find out about studies and interesting twists on how one might study something like forgiveness or apologies. 

The second reason has to do with trust. There’s a lot out there on wellness and much of it is of mixed utility, so science is one way we come to trust things, at least, certain listeners in our world. 

The third reason is the right kind of science tells you about the process of a phenomenon. For example, when you learn about the science related to how nature immersion affects your physiology, you’ll know that it has a direct effect on your vagus nerve and certain patterns of activation in your cortex. That finding helps give you insight, like, “Wow, when I was walking through the Rose Garden, the sights, colors, sounds and scents just made me feel better about the world.” Science gives you a lens into understanding that.

What articles and videos are in your Pocket waiting to be read/watched right now?

I catch up on the long reads [mentioned] in the podcasts I am moved by. There will be a half-hour of like, “I really want to read what’s happening in the January 6th hearings,” and a half-hour of my reading what I love. The New Yorker is interesting because even the crime and political stories, they’re so well-written, that they’re a treat. 

At Pocket, we’re all about helping people carve out time and space to dig into the stories that matter. Where and when do you catch up on the long reads and podcast episodes you’re excited for?

I think one of the interesting things for listeners to think about is: “How do we build this stuff into our busy lives and into the rhythms of our day?” I check in on things in the morning, as my coffee is starting to have its effects, and before I’m ready to work hard on writing or research. And then I do it at night, too, before I settle into sleep. I really encourage our listeners to find a couple of moments in the day where this becomes a regular part of the rhythm of their day.


The post A Pocket collection for your wellness journey, as curated by the team behind ‘The Science of Happiness’ podcast appeared first on The Mozilla Blog.


Recording Academy’s VP of D.E.I. Ryan Butler Tells Us What Brings Him Joy Online

Mozilla Blog - Mon, 27/06/2022 - 18:00

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner of the Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later and what sites and forums shaped them.

With Pride celebrations taking place throughout June, we’re featuring LGBTQ+ leaders this month as part of our My Corner of the Internet series. In this next installment, Ryan Butler, the Recording Academy’s Vice President of Diversity, Equity & Inclusion talks about his love of Janelle Monáe, how TikTok is influencing the music industry, and the importance of creating a safe space for underrepresented voices online through music.

What is your favorite corner of the internet?

I love scrolling through the GRAMMYs TikTok (@GRAMMYs), because it has a ton of feel-good moments from our annual Awards shows. There’s nothing like watching the heartwarming reactions of first-time GRAMMY winners. I still get emotional every time I watch Doja Cat’s acceptance speech from this year’s telecast.

What is an internet deep dive that you can’t wait to jump back into?

I’m interested in continuing to learn how TikTok is influencing the music industry and how it’s specifically affecting Black creators. New information surfaces every day about both the positive and negative impacts of the app on creators’ rights, streaming and popularity. 

What is the one tab on your browser you always regret closing?

I always regret closing my music tab. I love discovering new music and diverse artists through Apple Music. It’s vital to my role as Vice President of Diversity, Equity and Inclusion to stay up to date on all things music.

Who is an LGBTQ+ person with a very online presence that is a role model for you?

Janelle Monáe. Whether it is through social media, music, movies and now publishing, Janelle has always been an advocate and an inspiring figure for both Black and LGBTQ+ communities. I can’t wait to read The Memory Librarian which will explore race and queerness in a dystopian world.

What can you not stop talking about on the internet right now? 

I cannot stop talking about the Recording Academy’s recent initiatives to diversify our membership and advocate for inclusivity in the music industry. Earlier this year, we announced our partnership with GLAAD to advance LGBTQ+ representation in music. In May, we kicked off our HBCU Love Tour at Howard University, which aims to empower future music industry professionals and bolster Black representation. As a member of the LGBTQ+ community and a former HBCU student and professor, I’m proud to be on the forefront of these initiatives.

What was the first online community you engaged with?

The music community. I enjoy seeing how artists’ fandoms bring people together from around the world.

What articles and videos are in your Pocket waiting to be read/watched right now?

I have a few Global Spin videos saved that I need to catch up on. Global Spin is a performance series that spotlights international artists in the Afrobeats, K-Pop and Latin genres. I love how this series focuses on musical cultures from around the world.

If you could create your own corner of the internet what would it look like?

It is important that the music community is reflective of the diverse creators within it, so I’d love to create a corner of the internet that promotes intersectionality in music. Giving underrepresented voices a safe space to be heard is not only a personal mission but part of the Recording Academy’s mission as well.

Ryan Butler serves as Vice President of Diversity, Equity & Inclusion for the Recording Academy® where he leads diversity, equity and inclusion internally and externally for the Recording Academy and its affiliates. He is responsible for enterprise-wide diversity and inclusion efforts and ensuring the Academy’s core value of diversity, equity and inclusion remains embedded throughout all aspects of the organization, including internal staff culture, Membership, Awards, Advocacy, and related programs. He also sets national and Chapter goals to accelerate outcomes for underrepresented communities and creators.


The post Recording Academy’s VP of D.E.I. Ryan Butler Tells Us What Brings Him Joy Online appeared first on The Mozilla Blog.


Niko Matsakis: Async cancellation: a case study of pub-sub in mini-redis

Mozilla planet - Mon, 13/06/2022 - 21:15

Lately I’ve been diving deep into tokio’s mini-redis example. The mini-redis example is a great one to look at because it’s a realistic piece of quality async Rust code that is both self-contained and very well documented. Digging into mini-redis, I found that it exemplifies the best and worst of async Rust. On the one hand, the code itself is clean, efficient, and high-level. On the other hand, it relies on a number of subtle async conventions that can easily be done wrong – worse, if you do them wrong, you won’t get a compilation error, and your code will “mostly work”, breaking only in unpredictable timing conditions that are unlikely to occur in unit tests. Just the kind of thing Rust tries to avoid! This isn’t the fault of mini-redis – to my knowledge, there aren’t great alternative patterns available in async Rust today (I go through some of the alternatives in this post, and their downsides).

Context: evaluating moro

We’ve heard from many users that async Rust has a number of pitfalls where things can break in subtle ways. In the Async Vision Doc, for example, the Barbara battles buffered streams and solving a deadlock stories discuss challenges with FuturesUnordered (wrapped in the buffered combinator); the Barbara gets burned by select and Alan tries to cache requests, which doesn’t always happen stories talk about cancellation hazards and the select! or race combinators.

In response to these stories, I created an experimental project called moro that explores structured concurrency in Rust. I’ve not yet blogged about moro, and that’s intentional. I’ve been holding off until I gain more confidence in moro’s APIs. In the meantime, various people (including myself) have been porting different bits of code to moro to get a better sense for what works and what doesn’t. GusWynn, for example, started changing bits of the codebase to use moro and to have a safer alternative to cancellation. I’ve been poking at mini-redis, and I’ve also been working with some folks within AWS with some internal codebases.

What I’ve found so far is that moro absolutely helps, but it’s not enough. Therefore, instead of the triumphant blog post I had hoped for, I’m writing this one, which does a kind of deep-dive into the patterns that mini-redis uses: both how they work well when done right, but also how they are tedious and error-prone. I’ll be posting some follow-up blog posts that explore some of the ways that moro can help.

What is mini-redis?

If you’ve not seen it, mini-redis is a really cool bit of example code from the tokio project. It implements a “miniature” version of the redis in-memory data store, focusing on the key-value and pub-sub aspects of redis. Specifically, clients can connect to mini-redis and issue a subset of the redis commands. In this post, I’m going to focus on the “pub-sub” aspect of redis, in which clients can publish messages to a topic which are then broadcast to everyone who has subscribed to that topic. Whenever a client publishes a message, it receives in response the number of other clients that are currently subscribed to that topic.

Here is an example workflow involving two clients. Client 1 is subscribing to things, and Client 2 is publishing messages.
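That contract can be sketched in code, too. The following is a toy, synchronous stand-in (the `Broadcast` type, the topic names, and the round of calls are all invented for illustration — the real server uses tokio’s async `broadcast` channel): publishing returns the number of current subscribers, and publishing to a topic nobody subscribed to returns 0.

```rust
use std::collections::HashMap;
use std::sync::mpsc;

// Toy stand-in for a broadcast channel: sending clones the message to
// every registered receiver and reports how many receivers there are.
struct Broadcast<T: Clone> {
    subscribers: Vec<mpsc::Sender<T>>,
}

impl<T: Clone> Broadcast<T> {
    fn new() -> Self {
        Broadcast { subscribers: Vec::new() }
    }

    fn subscribe(&mut self) -> mpsc::Receiver<T> {
        let (tx, rx) = mpsc::channel();
        self.subscribers.push(tx);
        rx
    }

    // Returns the subscriber count; 0 when nobody is listening.
    fn send(&self, value: T) -> usize {
        for tx in &self.subscribers {
            let _ = tx.send(value.clone());
        }
        self.subscribers.len()
    }
}

fn main() {
    let mut pub_sub: HashMap<String, Broadcast<&str>> = HashMap::new();

    // Client 1: SUBSCRIBE news
    let rx = pub_sub
        .entry("news".to_string())
        .or_insert_with(Broadcast::new)
        .subscribe();

    // Client 2: PUBLISH news "hello" — the reply is the subscriber count.
    let n = pub_sub.get("news").map(|b| b.send("hello")).unwrap_or(0);
    assert_eq!(n, 1);
    assert_eq!(rx.recv().unwrap(), "hello");

    // PUBLISH to a topic with no subscribers — the reply is 0.
    let m = pub_sub.get("sports").map(|b| b.send("goal")).unwrap_or(0);
    assert_eq!(m, 0);
    println!("ok");
}
```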

Core data structures

To implement this, the redis server maintains a struct State that is shared across all active clients. Since it is shared across all clients, it is maintained in a Mutex (source):

```rust
struct Shared {
    /// The shared state is guarded by a mutex. […]
    state: Mutex<State>,
    …
}
```

Within this State struct, there is a pub_sub field (source):

```rust
pub_sub: HashMap<String, broadcast::Sender<Bytes>>,
```

The pub_sub field stores a big hashmap. The key is the topic and the value is the broadcast::Sender, which is the “sender half” of a tokio broadcast channel. Whenever a client issues a publish command, it ultimately calls Db::publish, which winds up invoking send on this broadcast channel:

```rust
pub(crate) fn publish(&self, key: &str, value: Bytes) -> usize {
    let state = self.shared.state.lock().unwrap();
    state
        .pub_sub
        .get(key)
        // On a successful message send on the broadcast channel, the number
        // of subscribers is returned. An error indicates there are no
        // receivers, in which case, `0` should be returned.
        .map(|tx| tx.send(value).unwrap_or(0))
        // If there is no entry for the channel key, then there are no
        // subscribers. In this case, return `0`.
        .unwrap_or(0)
}
```

The subscriber loop

We just saw how, when clients publish data to a channel, that winds up invoking send on a broadcast channel. But how do the clients who are subscribed to that channel receive those messages? The answer lies in the Subscribe command.

The idea is that the server has a set subscriptions of subscribed channels for the client (source):

```rust
let mut subscriptions = StreamMap::new();
```

This is implemented using a tokio StreamMap, which is a neato data structure that takes multiple streams which each yield up values of type V, gives each of them a key K, and combines them into one stream that yields up (K, V) pairs. In this case, the streams are the “receiver half” of those broadcast channels, and the keys are the channel names.
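To get a feel for the shape of that API without pulling in tokio, here is a toy, synchronous analog (the `IterMap` name and the round-robin scheduling are invented for illustration — the real `StreamMap` polls whichever stream is ready next): several named sources are merged into one source of `(K, V)` pairs.

```rust
// A synchronous toy analog of tokio_stream::StreamMap: it owns several
// named iterators and yields (key, value) pairs, skipping exhausted
// sources, until every source is done.
struct IterMap<K: Clone, I: Iterator> {
    sources: Vec<(K, I)>,
    next_idx: usize,
}

impl<K: Clone, I: Iterator> IterMap<K, I> {
    fn new() -> Self {
        IterMap { sources: Vec::new(), next_idx: 0 }
    }

    fn insert(&mut self, key: K, iter: I) {
        self.sources.push((key, iter));
    }
}

impl<K: Clone, I: Iterator> Iterator for IterMap<K, I> {
    type Item = (K, I::Item);

    fn next(&mut self) -> Option<Self::Item> {
        if self.sources.is_empty() {
            return None;
        }
        // Try every source at most once, round-robin.
        for _ in 0..self.sources.len() {
            let idx = self.next_idx % self.sources.len();
            self.next_idx += 1;
            let (key, iter) = &mut self.sources[idx];
            if let Some(v) = iter.next() {
                return Some((key.clone(), v));
            }
        }
        None
    }
}

fn main() {
    let mut map = IterMap::new();
    map.insert("evens", vec![0, 2].into_iter());
    map.insert("odds", vec![1].into_iter());
    let merged: Vec<_> = map.collect();
    // Items arrive tagged with the key of the source they came from.
    assert_eq!(merged, vec![("evens", 0), ("odds", 1), ("evens", 2)]);
    println!("ok");
}
```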

When it receives a subscribe command, then, the server wants to do the following:

  • Add the receivers for each subscribed channel into subscriptions.
  • Loop:
    • If a message is published to subscriptions, then send it to the client.
    • If the client subscribes to new channels, add those to subscriptions and send an acknowledgement to client.
    • If the client unsubscribes from some channels, remove them from subscriptions and send an acknowledgement to client.
    • If the client terminates, end the loop and close the connection.
“Show me the state”

Learning to write Rust code is basically an exercise in asking “show me the state” — i.e., the key to making Rust code work is knowing what data is going to be modified and when. In this case, there are a few key pieces of state…

  • The set subscriptions of “broadcast receivers” from each subscribed stream
    • There is also a set self.channels of “pending channel names” that ought to be subscribed to, though this is kind of an implementation detail and not essential.
  • The connection connection used to communicate with the client (a TCP socket)

And there are three concurrent tasks going on, each of which access that same state…

  • Looking for published messages from subscriptions and forwarding to connection (reads subscriptions, writes to connection)
  • Reading client commands from connection and then either…
    • subscribing to new channels (writes to subscriptions) and sending a confirmation (writes to connection);
    • or unsubscribing from channels (writes to subscriptions) and sending a confirmation (writes to connection).
  • Watching for termination and then cancelling everything (drops the broadcast handles in connections).

You can start to see that this is going to be a challenge. There are three conceptual tasks, but they are each needing mutable access to the same data:

If you tried to do this with normal threads, it just plain wouldn’t work…

```rust
let mut subscriptions = vec![]; // close enough to a StreamMap for now
std::thread::scope(|s| {
    s.spawn(|| subscriptions.push("key1"));
    s.spawn(|| subscriptions.push("key2"));
});
```

If you try this on the playground, you’ll see it gets an error because both closures are trying to access the same mutable state. No good. So how does it work in mini-redis?
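For contrast, here is a sketch of what making the thread version compile would take: wrapping the shared state in a `Mutex` so each thread locks before pushing. That is not what mini-redis does — it avoids the lock entirely, as the next section shows — but it makes the point that threads would force you to add synchronization.

```rust
use std::sync::Mutex;
use std::thread;

fn main() {
    // With a Mutex, both scoped threads share the vector legally; the
    // closures capture &Mutex rather than competing &mut borrows.
    let subscriptions = Mutex::new(vec![]);
    thread::scope(|s| {
        s.spawn(|| subscriptions.lock().unwrap().push("key1"));
        s.spawn(|| subscriptions.lock().unwrap().push("key2"));
    });
    let mut subs = subscriptions.into_inner().unwrap();
    subs.sort(); // thread completion order is nondeterministic
    assert_eq!(subs, vec!["key1", "key2"]);
    println!("ok");
}
```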

Enter select!, our dark knight

Mini-redis is able to juggle these three tasks through careful use of the select! macro. This is pretty cool, but also pretty error-prone — as we’ll see, there are a number of subtle points in the way select! is used here, and it’s easy to get the code wrong and end up with surprising bugs. At the same time, it’s pretty neat that we can use select! in this way, and it raises the question of whether we can find safer patterns that achieve the same thing. I think you can find safer patterns today, but they give up some efficiency, which isn’t really living up to Rust’s promise (though it might be a good idea regardless). I’ll cover that in a follow-up post; for now I just want to focus on explaining what mini-redis is doing and the pros and cons of this approach.

The main loop looks like this (source):

let mut subscriptions = StreamMap::new();
loop {
    ...
    select! {
        Some((channel_name, msg)) = subscriptions.next() => ... // future 1
        res = dst.read_frame() => ... // future 2
        _ = shutdown.recv() => ... // future 3
    }
}

select! is kind of like a match statement. It takes multiple futures (marked future 1–3 in the code above) and continues executing them until one of them completes. Since the select! is in a loop, and each of these futures produces a series of events, this setup effectively runs the three futures concurrently, processing events as they arrive:

  • subscriptions.next() – the future waiting for the next message to arrive on the StreamMap
  • dst.read_frame() – the async method read_frame is defined on the connection, dst. It reads data from the client, parses it into a complete command, and returns that command. We’ll dive into this function in a bit – it turns out that it is written in a very careful way to account for cancellation.
  • shutdown.recv() – the mini-redis server signals a global shutdown by threading a tokio channel to every connection; when a message is sent on that channel, all the loops clean up and stop.
How select! works

So, select! runs multiple futures concurrently until one of them completes. In practice, this means that it iterates down the futures, one after the other. Each future gets awoken and runs until it either yields (meaning, awaits on something that isn’t ready yet) or completes. If the future yields, then select! goes to the next future and tries that one.

Once a future completes, though, the select! gets ready to complete. It begins by dropping all the other futures that were selected. This means that they immediately stop executing at whatever await point they had reached, running any destructors for things on the stack. As I described in a previous blog post, in practice this feels a lot like a panic! injected at the await point. And, just like any other case of recovering from an exception, it requires that code be written carefully to avoid introducing bugs – tomaka describes one such example in his blog post. These bugs are what give async cancellation in Rust a reputation for being difficult.

Cancellation and mini-redis

Let’s talk through what cancellation means for mini-redis. As we saw, the select! here is effectively running two distinct tasks (as well as waiting for shutdown):

  • Waiting on subscriptions.next() for a message to arrive from subscribed channels, so it can be forwarded to the client.
  • Waiting on dst.read_frame() for the next command from the client, so that we can modify the set of subscribed channels.

We’ll see that mini-redis is coded carefully so that, whichever of these events occurs first, everything keeps working correctly. We’ll also see that this setup is fragile – it would be easy to introduce subtle bugs, and the compiler would not help you find them.

Take a look back at the sample subscription workflow at the start of this post. After Client1 has subscribed to A, the server is effectively waiting for Client1 to send further messages, or for other clients to publish.

The code that checks for further messages from Client1 is an async function called read_frame. It has to read the raw bytes sent by the client and assemble them into a “frame” (a single command). The read_frame in mini-redis is written in a particular way:

  • It loops and, for each iteration…
    • tries to parse a complete frame from self.buffer,
    • if self.buffer doesn’t contain a complete frame, then it reads more data from the stream into the buffer.

In pseudocode, it looks like (source):

impl Connection {
    async fn read_frame(&mut self) -> Result<Option<Frame>> {
        loop {
            if let Some(f) = parse_frame(&self.buffer) {
                return Ok(Some(f));
            }
            read_more_data_into_buffer(&mut self.buffer).await;
        }
    }
}

The key idea is that the function buffers up data until it can read an entire frame (i.e., successfully complete) and then it removes that entire frame at once. It never removes part of a frame from the buffer. This ensures that if the read_frame function is canceled while awaiting more data, nothing gets lost.
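To make the discipline concrete, here is a synchronous, std-only sketch (hypothetical newline-delimited frames, not mini-redis’s actual parser): data accumulates in the buffer, and parsing only consumes it once a whole frame is present, so stopping between reads loses nothing.

```rust
// A synchronous analogue of cancel-safe buffering. Frames are
// newline-delimited strings; parse_frame removes data from the buffer
// only when a complete frame exists.
struct Connection {
    buffer: String,
}

impl Connection {
    fn parse_frame(&mut self) -> Option<String> {
        // Only consume the buffer when a full frame (up to '\n') is there.
        let end = self.buffer.find('\n')?;
        let frame = self.buffer[..end].to_string();
        self.buffer.drain(..=end);
        Some(frame)
    }

    fn read_frame(&mut self, chunks: &mut impl Iterator<Item = &'static str>) -> Option<String> {
        loop {
            if let Some(f) = self.parse_frame() {
                return Some(f);
            }
            // The "await more data" point: if we stopped here, the
            // partial data already in self.buffer would not be lost.
            self.buffer.push_str(chunks.next()?);
        }
    }
}

fn main() {
    let mut conn = Connection { buffer: String::new() };
    // Data arrives split across reads, mid-frame.
    let mut chunks = ["subscr", "ibe A\nunsub", "scribe B\n"].into_iter();
    assert_eq!(conn.read_frame(&mut chunks).as_deref(), Some("subscribe A"));
    assert_eq!(conn.read_frame(&mut chunks).as_deref(), Some("unsubscribe B"));
    println!("ok");
}
```

The second read_frame call succeeds precisely because the leftover "unsub" bytes stayed in self.buffer between calls.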

Ways to write a broken read_frame

There are many ways to write a version of read_frame that is NOT cancel-safe. For example, instead of storing the buffer in self, one could put the buffer on the stack:

impl Connection {
    async fn read_frame(&mut self) -> Result<Option<Frame>> {
        let mut buffer = vec![];
        loop {
            if let Some(f) = parse_frame(&buffer) {
                return Ok(Some(f));
            }
            read_more_data_into_buffer(&mut buffer).await;
            // ^ If the future is canceled here, buffer is lost.
        }
    }
}

This setup is broken because, if the future is canceled when awaiting more data, the buffered data is lost.

Alternatively, read_frame could intersperse reading from the stream and parsing the frame itself:

impl Connection {
    async fn read_frame(&mut self) -> Result<Option<Frame>> {
        let command_name = self.read_command_name().await;
        match command_name {
            "subscribe" => self.parse_subscribe_command().await,
            "unsubscribe" => self.parse_unsubscribe_command().await,
            "publish" => self.parse_publish_command().await,
            ...
        }
    }
}

The problem here is similar: if we are canceled while awaiting one of the parse_foo_command futures, then we will forget the fact that we read the command_name already.

Comparison with JavaScript

It is interesting to compare Rust’s Future model with JavaScript’s Promise model. In JavaScript, when an async function is called, it implicitly creates a new task. This task has “independent life”, and it keeps executing even if nobody ever awaits it. In Rust, invoking an async fn returns a Future, but that is inert. A Future only executes when some task awaits it. (You can create a task by invoking a suitable spawn method on your runtime, and then it will execute on its own.)
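The inertness of Rust futures can be shown with the standard library alone. This sketch hand-rolls a no-op waker (a runtime would normally provide one) and demonstrates that the async fn body does nothing until it is polled:

```rust
use std::future::Future;
use std::sync::atomic::{AtomicBool, Ordering};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

static RAN: AtomicBool = AtomicBool::new(false);

async fn side_effect() {
    RAN.store(true, Ordering::SeqCst);
}

// Minimal no-op waker so we can poll a future by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    // Calling the async fn does nothing yet: the future is inert.
    let fut = side_effect();
    assert!(!RAN.load(Ordering::SeqCst));

    // Only polling (what `.await` does under the hood) runs the body.
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(()));
    assert!(RAN.load(Ordering::SeqCst));
    println!("ok");
}
```

In JavaScript the equivalent side effect would fire as soon as the async function was called, whether or not anyone awaited the promise.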

There are really good reasons for Rust’s model: in particular, it is a zero-cost abstraction (or very close to it). In JavaScript, if you have one async function, and you factor out a helper function, you just went from one task to two tasks, meaning twice as much load on the scheduler. In Rust, if you have an async fn and you factor out a helper, you still have one task; you also still allocate basically the same amount of stack space. This is a good example of the “performant” (“idiomatic code runs efficiently”) Rust design principle in action.

However, at least as we’ve currently set things up, the Rust model does have some sharp edges. We’ve seen three ways to write read_frame, and only one of them works. Interestingly, all three of them would work in JavaScript, because in the JS model, an async function always starts a task and hence maintains its context.

I would argue that this represents a serious problem for Rust, because it is a failure to maintain the “reliability” principle (“if it compiles, it works”), which ought to come first and foremost for us. The result is that async Rust feels a bit more like C or C++, where performant and versatile take top rank, and one has to have a lot of experience to know how to avoid sharp edges.

Now, I am not arguing Rust should adopt the “Promises” model – I think the Future model is better. But I think we need to tweak something to recover that reliability.

Comparison with threads

It’s interesting to compare mini-redis in async Rust with a mini-redis implemented with threads. It turns out that the threaded version would also be challenging, but in different ways. To start, let’s write up some pseudocode for what we are trying to do:

let mut subscriptions = StreamMap::new();
spawn(async move {
    while let Some((channel_name, msg)) = subscriptions.next().await {
        connection.send_message(channel_name, msg);
    }
});
spawn(async move {
    while let Some(frame) = connection.read_frame().await {
        match frame {
            Subscribe(new_channel) => subscribe(&mut connection, new_channel),
            Unsubscribe(channel) => unsubscribe(&mut connection, channel),
            _ => ...,
        }
    }
});

Here we have spawned two tasks, one of which is waiting for new messages from the subscriptions, and one of which is processing incoming client messages (which may involve adding channels to the subscriptions map).

There are two problems here. First, you may have noticed I didn’t handle server shutdown! That turns out to be kind of a pain in this setup, because tearing down those spawned tasks is harder than you might think. For simplicity, I’m going to skip that for the rest of the post – it turns out that moro’s APIs solve this problem in a really nice way by allowing shutdown to be imposed externally without any deep changes.

Second, those two threads are both accessing subscriptions and connection in a mutable way, which the Rust compiler will not accept. This is a key problem. Rust’s type system works really well when you can break down your data such that every task accesses distinct data (i.e., “spatially disjoint”), either because each task owns the data or because they have &mut references to different parts of it. We have a much harder time dealing with multiple tasks accessing the same data but at different points in time (i.e., “temporally disjoint”).
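As a quick illustration of the “spatially disjoint” case the borrow checker handles well, this sketch splits an array so that each scoped thread gets a &mut to a different part, with no locks at all:

```rust
// "Spatially disjoint" access: split_at_mut hands each scoped thread a
// &mut to a *different* half of the array, so the borrow checker is
// satisfied and no synchronization is needed.
fn increment_disjoint_halves(counts: &mut [u32; 2]) {
    let (left, right) = counts.split_at_mut(1);
    std::thread::scope(|s| {
        s.spawn(|| left[0] += 1);
        s.spawn(|| right[0] += 10);
    });
}

fn main() {
    let mut counts = [0u32; 2];
    increment_disjoint_halves(&mut counts);
    assert_eq!(counts, [1, 10]);
    println!("{:?}", counts);
}
```

The mini-redis tasks are the opposite, “temporally disjoint”, case: both need all of subscriptions and connection, just at different moments, and there is no split that gives each task its own piece.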

Use an arc-mutex?

The main way to manage multiple tasks sharing access to the same data is with some kind of interior mutability, typically an Arc<Mutex<T>>. One problem with this is that it fails Rust’s performant design principle (“idiomatic code runs efficiently”), because there is runtime overhead (even if it is minimal in practice, it doesn’t feel good). Another problem with Arc<Mutex<T>> is that it hits on a lot of Rust’s ergonomic weak points, failing our “supportive” principle (“the language, tools, and community are here to help”):

  • You have to allocate the arcs and clone references explicitly, which is annoying;
  • You have to invoke methods like lock, get back lock guards, and understand how destructors and lock guards interact;
  • In async code in particular, thanks to #57478, the compiler doesn’t understand very well when a lock guard has been dropped, resulting in annoying compiler errors – though Eric Holk is close to landing a fix for this one! :tada:

Of course, people who remember the “bad old days” of async Rust before async-await are very familiar with this dynamic. In fact, one of the big selling points of adding async await sugar into Rust was getting rid of the need to use arc-mutex.

Deeper problems

But the ergonomic pitfalls of Arc<Mutex> are only the beginning. It’s also just really hard to get Arc<Mutex> to actually work for this setup. To see what I mean, let’s dive a bit deeper into the state for mini-redis. There are two main bits of state we have to think about:

  • the tcp-stream to the client
  • the StreamMap of active connections

Managing access to the tcp-stream for the client is actually relatively easy. For one thing, tokio streams support a split operation, so it is possible to take the stream and split out the “sending half” (for sending messages to the client) and the “receiving half” (for receiving messages from the client). All the active threads can send data to the client, so they all need the sending half, and presumably it’ll have to be wrapped in an (async aware) mutex. But only one active thread needs the receiving half, so it can own that, and avoid any locks.

Managing access to the StreamMap of active connections, though, is quite a bit more difficult. Imagine we were to put that StreamMap itself into a Arc<Mutex>, so that both tasks can access it. Now one of the tasks is going to be waiting for new messages to arrive. It’s going to look something like this:

let subscriptions = Arc::new(Mutex::new(StreamMap::new()));
spawn(async move {
    while let Some((channel_name, msg)) = subscriptions.lock().unwrap().next().await {
        connection.send_message(channel_name, msg);
    }
});

However, this code won’t compile (thankfully!). The problem is that we are acquiring a lock but trying to hold onto it while we await, which means we might switch to other tasks while the lock is held. This can easily lead to deadlock if those other tasks try to acquire the lock, since the tokio scheduler and the O/S scheduler are not cooperating with one another.

An alternative would be to use an async-aware mutex like tokio::sync::Mutex, but that is also not great: we can still wind up with a deadlock, but for another reason. The server is now prevented from adding a new subscription to the list until the lock is released, which means that if Client1 is trying to subscribe to a new channel, it has to wait for some other client to send a message to an existing channel to do so (because that is when the lock is released). Not great.

Actually, this whole saga is covered under another async vision doc “status quo” story, Alan thinks he needs async locks.

A third alternative: actors

Recognizing the problems with locks, Alice Ryhl some time ago wrote a nice blog post, “Actors with Tokio”, that explains how to set up actors. This pattern actually helps to address both of our problems around mutable state. The idea is to move the connections array so that it belongs solely to one actor. Instead of modifying the collections directly, the other tasks communicate with this actor by exchanging messages.

So basically there could be two actors, or even three:

  • Actor A, which owns the connections (list of subscribed streams). It receives messages that are either publishing new messages to the streams or messages that say “add this stream” to the list.
  • Actor B, which owns the “read half” of the client’s TCP stream. It reads bytes and parses new frames, then sends out requests to the other actors in response. For example, when a subscribe message comes in, it can send a message to Actor A saying “subscribe the client to this channel”.
  • Actor C, which owns the “write half” of the client’s TCP stream. Both actors A and B will send messages to it when there are things to be sent to client.

To see how this would be implemented, take a look at Alice’s post. The TL;DR is that you would model connections between actors as tokio channels. Each actor is either spawned or otherwise set up to run independently. You still wind up using select!, but you only use it to receive messages from multiple channels at once. This doesn’t present any cancellation hazards because the channel code is carefully written to avoid them.
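As a rough sketch of the idea, here is a thread-and-mpsc analogue of “Actor A” (the Msg type and names are hypothetical; Alice’s post uses tokio channels and tasks instead of threads). The actor alone owns the subscription list, so no Arc<Mutex> is needed:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical message type for "Actor A".
enum Msg {
    Subscribe(String),
    Publish { channel: String, text: String },
    Shutdown,
}

// Actor A: sole owner of the subscription list. All mutation happens on
// this one thread; everyone else talks to it through the channel.
fn run_actor(rx: mpsc::Receiver<Msg>, out: mpsc::Sender<String>) {
    let mut subscriptions: Vec<String> = vec![];
    for msg in rx {
        match msg {
            Msg::Subscribe(channel) => subscriptions.push(channel),
            Msg::Publish { channel, text } => {
                if subscriptions.contains(&channel) {
                    // Forward to the "write half" actor (Actor C).
                    out.send(format!("{channel}: {text}")).unwrap();
                }
            }
            Msg::Shutdown => break,
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let (out_tx, out_rx) = mpsc::channel();
    let actor = thread::spawn(move || run_actor(rx, out_tx));

    tx.send(Msg::Subscribe("A".into())).unwrap();
    tx.send(Msg::Publish { channel: "A".into(), text: "hello".into() }).unwrap();
    tx.send(Msg::Shutdown).unwrap();
    actor.join().unwrap();

    assert_eq!(out_rx.recv().unwrap(), "A: hello");
    println!("ok");
}
```

Note how the Shutdown message stands in for the coordination problem mentioned below: every actor has to be told, explicitly, when to stop.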

This setup works fine, and is even elegant in its own way, but it’s also not living up to Rust’s concept of performant or the goal of “zero-cost abstractions” (ZCA). In particular, the idea with ZCA is that it is supposed to give you a model that says “if you wrote this by hand, you couldn’t do any better”. But if you wrote a mini-redis server in C, by hand, you probably wouldn’t adopt actors. In some sense, this is just adopting something much closer to the Promise model. (Plus, the most obvious way to implement actors in tokio is largely to use tokio::spawn, which definitely adds overhead, or to use FuturesUnordered, which can be a bit subtle as well – moro does address these problems by adding a nice API here.)

(The other challenge with actors implemented this way is coordinating shutdown, though it can certainly be done: you just have to remember to thread the shutdown handler around everywhere.)

Cancellation as the “dark knight”: looking again at select!

Taking a step back, we’ve now seen that trying to use distinct tasks introduces this interesting problem that we have shared data being accessed by all the tasks. That either pushes us to locks (broken) or actors (works), but either way, it raises the question: why wasn’t this a problem with select!? After all, select! is still combining various logical tasks, and those tasks are still touching the same variables, so why is the compiler ok with it?

The answer is closely tied to cancellation: the select! setup works because

  • the things running concurrently are not touching overlapping state:
    • one of them is looking at subscriptions (waiting for a message);
    • another is looking at connection;
    • and the last one is receiving the termination message.
  • and once we decide which one of these paths to take, we cancel all the others.

This last part is key: if we receive an incoming message from the client, for example, we drop the future that was looking at subscriptions, canceling it. That means subscriptions is no longer in use, so we can push new subscriptions into it, or remove things from it.

So cancellation is what enables the mini-redis example to be performant and a zero-cost abstraction, but it is also the cause of our reliability hazards. That’s a pickle!


Conclusions

We’ve seen a lot of information, so let me try to sum it all up for you:

  • Fine-grained cancellation in select! is what enables async Rust to be a zero-cost abstraction and to avoid the need to create either locks or actors all over the place.
  • Fine-grained cancellation in select! is the root cause of a LOT of reliability problems.

You’ll note that I wrote fine-grained cancellation. What I mean by that is specifically things like how select! will cancel the other futures. This is very different from coarse-grained cancellation like having the entire server shutdown, for which I think structured concurrency solves the problem very well.

So what can we do about fine-grained cancellation? Well, the answer depends.

In the short term, I value reliability above all, so I think adopting an actor-like pattern is a good idea. This setup can be a nice architecture for a lot of reasons2, and while I’ve described it as “not performant”, that assumes you are running a really high-scale server that has to handle a ton of load. For most applications, it will perform very well indeed.

I also think it makes sense to be very judicious in what you select! over. In the context of Materialize, GusWynn was experimenting with a Selectable trait for precisely this reason; that trait only permits selecting from a few sources, like channels. It’d be nice to support some convenient way of declaring that an async fn is cancel-safe, e.g. only allowing it to be used in select! if it is tagged with #[cancel_safe]. (This might be something one could author as a proc macro.)

But in the longer term, I’m interested in whether we can come up with a mechanism that will allow the compiler to get smarter. For example, I think it’d be cool if we could share one &mut across two async fns that are running concurrently, so long as that &mut is not borrowed across an await point. I have thoughts on that but…not for this post.

  1. My experience is that being forced to get a clear picture on this is part of what makes Rust code reliable in practice. 

  2. It’d be fun to take a look at Reactive Design Patterns and examine how many of them apply to Rust. I enjoyed that book a lot. 

Categorieën: Mozilla-nl planet

Mozilla Thunderbird: Frequently Asked Questions: Thunderbird Mobile and K-9 Mail

Mozilla planet - ma, 13/06/2022 - 15:05

Today, we announced our detailed plans for Thunderbird on mobile. We also welcomed the open-source Android email client K-9 Mail into the Thunderbird family. Below, you’ll find an evolving list of frequently asked questions about this collaboration and our future plans.

Revealed: Our Plans For Thunderbird On Android Why not develop your own mobile client?

The Thunderbird team had many discussions on how we might provide a great mobile experience for our users. In the end, we didn’t want to duplicate effort if we could combine forces with an existing open-source project that shared our values. Over years of discussing ways K-9 and Thunderbird could collaborate, we decided it would best serve our users to work together.

Should I install K-9 Mail now or wait for Thunderbird?

If you want to help shape the future of Thunderbird on Android, you’re encouraged to install K-9 Mail right now. Leading up to the first official release of Thunderbird for Android, the user interface will probably change a few times. If you dislike somewhat frequent changes in apps you use daily, you might want to hold off.

Will this affect desktop Thunderbird? How?

Many Thunderbird users have asked for a Thunderbird experience on mobile, which we intend to provide by helping make K-9 amazing (and turning it into Thunderbird on Android). K-9 will supplement the Thunderbird experience and enhance where and how users are able to have a great email experience. Our commitment to desktop Thunderbird is unchanged; most of our team is committed to making it a best-in-class email client, and it will remain that way.

What will happen to K-9 Mail once the official Thunderbird for Android app has been released?

K-9 Mail will be brought in-line with Thunderbird from a feature perspective, and we will ensure that syncing between Thunderbird and K-9/Thunderbird on Android is seamless. Of course, Thunderbird on Android and Thunderbird on Desktop are both intended to serve very different form factors, so there will be UX differences between the two. But we intend to allow similar workflows and tools on both platforms.

Will I be able to sync my Thunderbird accounts with K-9 Mail?

Yes. We plan to offer Firefox Sync as one option to allow you to securely sync accounts between Thunderbird and K-9 Mail. We expect this feature to be implemented in the summer of 2023.

Will Thunderbird for Android support calendars, tasks, feeds, or chat like the desktop app?

We are working on an amazing email experience first. We are looking at the best way to provide Thunderbird’s other functionality on Android but currently are still debating how best to achieve that. For instance, one method is to simply sync calendars, and then users are able to use their preferred calendar application on their device. But we have to discuss this within the team, and the Thunderbird and K-9 communities, then decide what the best approach is.

Going forward, how will K-9 Mail donations be used?

Donations made towards K-9 will be allocated to the Thunderbird project. Of course, Thunderbird in turn will provide full support for K-9 Mail’s development and activities that support the advancement and sustainability of the app.

Is a mobile Thunderbird app in development for iOS?

Thunderbird is currently evaluating the development of an iOS app.

How can I get involved?

1) Participate in our discussion and planning forum.
2) Developers are encouraged to visit to get started.
3) Obtain Thunderbird source code by visiting
4) K-9 Mail source code is available at:
5) You can financially support Thunderbird and K-9 Mail’s development by donating via this link:

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire more developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Frequently Asked Questions: Thunderbird Mobile and K-9 Mail appeared first on The Thunderbird Blog.

Categorieën: Mozilla-nl planet

Mozilla Thunderbird: Revealed: Our Plans For Thunderbird On Android

Mozilla planet - ma, 13/06/2022 - 15:00

For years, we’ve wanted to extend Thunderbird beyond the desktop, and the path to delivering a great Thunderbird on Android™ experience started in 2018.

That’s when Thunderbird Product Manager Ryan Lee Sipes first met up with Christian Ketterer (aka “cketti”), the project maintainer for open-source Android email client K-9 Mail. The two instantly wanted to find a way for the two projects to collaborate. Throughout the following few years, the conversation evolved into how to create an awesome, seamless email experience across platforms.

But Ryan and cketti both agreed that the final product had to reflect the shared values of both projects. It had to be open source, respect the user, and be a perfect fit for power users who crave customization and a rich feature set.

“Ultimately,” Sipes says, “it made sense to work together instead of developing a mobile client from scratch.”

K-9 Mail Joins The Thunderbird Family

To that end, we’re thrilled to announce that today, K-9 Mail officially joins the Thunderbird family. And cketti has already joined the full-time Thunderbird staff, bringing along his valuable expertise and experience with mobile platforms.

Ultimately, K-9 Mail will transform into Thunderbird on Android.

That means the name itself will change and adopt Thunderbird branding. Before that happens, we need to reach certain development milestones that will bring K-9 Mail into alignment with Thunderbird’s feature set and visual appearance.

To accomplish that, we’ll devote finances and development time to continually improving K-9 Mail. We’ll be adding brand new features and introducing quality-of-life enhancements.

K-9 Mail and Thunderbird are both community-funded projects. If you want to help us improve and expand K-9 Mail faster, please consider donating at

Here’s a glimpse into our features roadmap:

  • Account setup using Thunderbird account auto-configuration.
  • Improved folder management.
  • Support for message filters.
  • Syncing between desktop and mobile Thunderbird.

“Joining the Thunderbird family allows K-9 Mail to become more sustainable and gives us the resources to implement long-requested features and fixes that our users want,” cketti says. “In other words, K-9 Mail will soar to greater heights with the help of Thunderbird.”

Thunderbird On Android: Join The Journey

Thunderbird users have long been asking for Thunderbird on their Android and iOS devices. This move allows Thunderbird users to have a powerful, privacy-respecting email experience today on Android. Plus, it lets the community help shape the transition of K-9 Mail into a fully-featured mobile Thunderbird experience.

This is only the beginning, but it’s a very exciting first step.

Want to talk directly with the Thunderbird team about it? Join us for a Twitter Spaces chat (via @MozThunderbird) on Wednesday, June 15 at 10am PDT / 1pm EDT / 7pm CEST. I’ll be there alongside cketti and Ryan to answer your questions, and discuss the future of Thunderbird on mobile devices.

Additional Links And Resources

Frequently Asked Questions: Thunderbird Mobile and K-9 Mail

We’ve published a separate FAQ here, addressing many of the community’s questions and concerns. Check back there from time to time, as we plan to update the FAQ as this collaboration progresses.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire more developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation

The post Revealed: Our Plans For Thunderbird On Android appeared first on The Thunderbird Blog.

Categorieën: Mozilla-nl planet

Will Kahn-Greene: Dennis v1.0.0 released! Retrospective! Handing it off!

Mozilla planet - vr, 10/06/2022 - 18:00
What is it?

Dennis is a Python command line utility (and library) for working with localization. It includes:

  • a linter for finding problems in strings in .po files like invalid Python variable syntax which leads to exceptions

  • a template linter for finding problems in strings in .pot files that make translators’ lives difficult

  • a statuser for seeing the high-level translation/error status of your .po files

  • a translator for strings in your .po files to make development easier

v1.0.0 released!

It's been 5 years since I released Dennis v0.9. That's a long time.

This release brings several minor improvements and some cleanup. Also, I transferred the repository from "willkg" to "mozilla" in GitHub.

  • b38a678 Drop Python 3.5/3.6; add Python 3.9/3.10 (#122, #123, #124, #125)

  • b6d34d7 Redo tarrminal printin' and colorr (#71)

    There's an additional backwards-incompatible change here in which we drop the --color and --no-color arguments from dennis-cmd lint.

  • 658f951 Document dubstep (#74)

  • adb4ae1 Rework CI so it uses a matrix

  • transfer project from willkg to mozilla for ongoing maintenance and support


Retrospective

I worked on Dennis for 9 years.

It was incredibly helpful! It eliminated an entire class of bugs we were plagued with for critical Mozilla sites like AMO, MDN, SUMO, Input 1, and others. It did it in a way that supported and was respectful of our localization community.

It was pretty fun! The translation transforms are incredibly helpful for fixing layout issues. Some of them also produce hilarious results:


Input has gone to the happy hunting ground in the sky.


SUMO in dubstep.


SUMO in Pirate.


SUMO in Zombie.

There were a variety of dennis recipes including using it in a commit hook to translate commit messages.

I enjoyed writing silly things at the bottom of all the release blog posts.

I learned a lot about gettext, localization, and languages! Learning about the nuances of plurals was fascinating.

The code isn't great. I wish I had redone the tokenization pipeline. I wish I had gotten around to adding support for other gettext variable formats.

Regardless, this project had a significant impact on Mozilla sites which I covered briefly in my Dennis Retrospective (2013).

Handing it off

It's been 6 years since I worked on sites that have localization, so I haven't really used Dennis in a long time and I'm no longer a stakeholder for it.

I need to reduce my maintenance load, so I looked into whether to end this project altogether. Several Mozilla projects still use it for linting PO files for deploys, so I decided not to end the project, but instead hand it off.

Welcome @diox and @akatsoulas who are picking it up!

Where to go for more

For more specifics on this release, see here:

Documentation and quickstart here:

Source code and issue tracker here:

39 of 7,952,991,938 people were aware that Dennis existed but tens--nay, hundreds!--of millions were affected by it.

Categorieën: Mozilla-nl planet

Anne van Kesteren: Leaving Mozilla

Mozilla planet - vr, 10/06/2022 - 09:53

I will be officially leaving Mozilla on the last day of June. My last working day will be June 16. Perhaps I should say I will be leaving the Mozilla Corporation — MoCo, as it’s known internally. After all, once you’re a Mozillian, you’re always a Mozillian. I was there for a significant part of my life — nine years, most of them great, some tough. I was empowered and supported by leadership to move between cities and across countries. Started by moving to London (first time I lived abroad) in February 2013, then Zürich in May 2014, Engelberg (my personal favorite) in May 2015, Zürich again in February 2017, and now here in Berlin since September 2018. In the same time period I moved in with my wonderful partner and we became the lucky parents of two amazing children. It isn’t always easy, but I wouldn’t trade it for the world. They bring me joy every day.

It’s been such a privilege and humbling experience to be able to learn about the internet, browsers, and systems engineering from some of the most talented, kind, and caring people in the world in that space. And furthermore, to be able to build it with them, in my small way. They are always seeking to truly solve problems by approaching them from first principles. As well as looking to raise the layers of abstraction upon which we build the digital world. And then actually doing it, too. I recently read A Philosophy of Software Design by John Ousterhout and it struck me that a lot of the wisdom in that book has been imparted upon me by my time here.

I am extremely grateful to my beautiful colleagues, friends, and leadership at Mozilla for making this a period in my life I will treasure forever. So long, and thanks for all the browser engines. And remember, always ask: is this good for the web? ❤️

Categorieën: Mozilla-nl planet

Anne van Kesteren: After Mozilla

Mozilla planet - vr, 10/06/2022 - 09:52

As mentioned in my previous post I will no longer be employed by MoCo in July. This might leave some of you with some questions I will attempt to answer here. Most significantly:

  • If you want Mozilla’s opinion about standards and used me as a proxy, use Mozilla’s standards-positions going forward.
  • Mozilla will be represented on the WHATWG Steering Group by Tantek Çelik going forward.
  • Due to inertia I will remain the Editor of a number of WHATWG Workstreams. Always happy to discuss changing that.
  • And similarly, I will also remain a WHATWG administrator, mainly ensuring folks get invited to the WHATWG organization to make collaboration more straightforward.

I plan to continue being involved in developing the web platform and encouraging folks to leave their sense of logic at the door, but I suspect that until at least September or so I will be mostly otherwise occupied due to a mix of vacation and onboarding. A lot of that coincides with European vacation time so you might not end up noticing anything. If you do notice, reach out on WHATWG chat and presumably someone there can help you or at least provide some pointers. I expect to also check in on occasion.

Categorieën: Mozilla-nl planet

Steve Fink: Ephemeron Tables aka JavaScript WeakMaps and How They Work

Mozilla planet - vr, 10/06/2022 - 05:19
Introduction I read Ephemerons explained today after finding it on Hacker News, and it was good but lengthy. It was also described in terms of the Squeak language and included post-mortem finalization, which is unavailable in JavaScript (and frankly sounds terrifying from an implementation point of view!) I thought I’d try my hand at writing […]
Categorieën: Mozilla-nl planet

Karl Dubost: Get browsers version number on macOS (zsh)

Mozilla planet - vr, 10/06/2022 - 00:00

I'm not sure why I had not written this before, but it hit me while doing testing this week that I could optimize my time a bit more.

Statues in the forest with a red beanie.

This is a shell (zsh) script and macOS only. It reads the version information of a list of browsers and spills them out in a nice and ready to be copied and pasted in a bug report.

#!/bin/zsh
# Print the version of each installed browser, ready to paste into a bug report.
APP_PATH="/Applications/"
INFO_PATH=".app/Contents/Info.plist"
browsers=("Safari Technology Preview" "Firefox Nightly" "Google Chrome Canary" "Safari" "Firefox" "Google Chrome" "Microsoft Edge Canary")
for browser_name in "${browsers[@]}"; do
    full_path="${APP_PATH}${browser_name}${INFO_PATH}"
    # Only report browsers that are actually installed.
    if test -f "$full_path"; then
        browser_version=$(defaults read "$full_path" CFBundleShortVersionString)
        echo "${browser_name} ${browser_version}"
    fi
done

This is what the output looks like. As you can see, I need to update a couple of things.

Safari Technology Preview 15.4
Firefox Nightly 103.0a1
Safari 15.5
Firefox 99.0
Microsoft Edge Canary 104.0.1285.0


Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Manifest V3 Firefox Developer Preview — how to get involved

Mozilla planet - wo, 08/06/2022 - 23:58

While MV3 is still in development, many major features are already included in the Developer Preview, which provides an opportunity to expose functionality for testing and feedback. With strong developer feedback, we’re better equipped to quickly address critical bug fixes, provide clear developer documentation, and refine functionality.

Some features, such as a well-defined and documented lifecycle for Event Pages, are still works in progress. As features are completed, they’ll land in future versions of Firefox so you can test them and move your extensions toward MV3 compatibility. In most respects Firefox is committed to cross-browser MV3 compatibility, though in some cases Firefox will offer distinct extension functionality.

The Developer Preview is not available to regular users; it requires changing preferences in about:config. You will therefore not be able to upload MV3 extensions to addons.mozilla.org (AMO) until an official release is available to users.

The following are key considerations about migration at this time, and areas where we’d greatly appreciate developer feedback.

  1. Read the MV3 migration guide. MV3 contains many changes and our migration guide covers the major necessary steps, as well as linking to documentation to help understand further details.
  2. Update your extension to be compatible with Event Pages. One major difference in Firefox is our use of Event Pages, an alternative to the existing Background Pages that allows idle timeouts and page restarts. This adds resilience to the background, which is necessary on resource-constrained and mobile devices. For the most part, Event Pages are compatible with existing Background Pages, requiring only minor changes. We plan to release Event Pages for MV2 in an upcoming Firefox release, so preparation to use Event Pages can begin in MV2 add-ons soon. Many extensions may not need all the capabilities available in Event Pages, and their background scripts will be easily transferable to the Service Worker background when it becomes available in a future release. In the meantime, extensions attempting to support both Chrome and Firefox can take advantage of Event Pages in Firefox.
  3. Test your content scripts with MV3. There are multiple changes that will impact content scripts, ranging from tighter restrictions on CORS, CSP, remote code execution, and more. Not all extensions will run into issues in these cases, and some may only require minor modifications that will likely work within MV2 as well.
  4. Understand and consider your migration path for APIs that have changed or been deprecated. Deprecated APIs will require code changes to use alternate or new APIs. Examples include the new Scripting API (which will also be part of MV2 in a future release) and the move from page and browser actions to the unified action API.
  5. Test and plan migration for permissions. Most permissions are already available as optional permissions in MV2. With MV3, we’re making host permissions optional — in many cases by default. While we do not yet have the primary UI for user control in Developer Preview, developers should understand how these changes will affect their extensions.
  6. Let us know how it’s going! Your feedback will help us make the transition from MV2 to MV3 as smooth as possible. Through Developer Preview we anticipate learning about MV3 rough edges, documentation needs, new features to be fleshed out, and bugs to be fixed. We have a host of community channels you can access to ask questions, help others, report problems, or whatever else you desire to communicate as it relates to the MV3 migration process.
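As a rough illustration of steps 2, 4, and 5 above (my own sketch, not an official sample; consult the MV3 migration guide for the authoritative key names), a minimal Firefox MV3 manifest declaring an event-page background, the unified action, and host permissions might look like:

```json
{
  "manifest_version": 3,
  "name": "MV3 migration sketch",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"]
  },
  "permissions": ["scripting"],
  "host_permissions": ["*://example.com/*"],
  "action": {
    "default_title": "MV3 sketch"
  }
}
```

Note that Chrome expects a `"service_worker"` key under `"background"` rather than `"scripts"`, and that under MV3 the `host_permissions` entries are treated as optional grants the user controls; these are exactly the kinds of cross-browser gaps the Developer Preview is meant to surface.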

Stay in touch with us on any of these forums…


The post Manifest V3 Firefox Developer Preview — how to get involved appeared first on Mozilla Add-ons Community Blog.

Categorieën: Mozilla-nl planet

Mozilla Thunderbird: Welcome To The Thunderbird 102 Beta! Resources, Links, And Guides

Mozilla planet - di, 07/06/2022 - 14:30

The wait for this year’s major new Thunderbird release is almost over! But you can test-drive many of the new features like the brand new Address Book, Matrix Chat support, import/export wizard, and refreshed visuals right now with the Thunderbird 102 Beta. Better still, you might be directly responsible for improving the final product via your feedback and bug reports.

Below, you’ll find all the resources you need for testing the Thunderbird 102 Beta. From technical guides to a community feedback forum to running the beta side-by-side with your existing stable version, we’ve got you covered.

Do you feel something is missing from this list? Please leave a comment here, or email me personally (jason at thunderbird dot net). I’ll make sure we get it added!

Thunderbird 102 Beta: First Steps

Here are some first steps and important considerations to take into account before deciding to install the beta.

Thunderbird 102 Beta: Guides, Links, And Resources

We want you to have the smoothest beta experience possible. Whether you’re reporting bugs, seeking solutions, or trying to run beta side-by-side with an existing Thunderbird installation, these resources should help.

From all of us here at Thunderbird: Have fun, and happy testing!

The post Welcome To The Thunderbird 102 Beta! Resources, Links, And Guides appeared first on The Thunderbird Blog.

Categorieën: Mozilla-nl planet

Cameron Kaiser: macOS Oxnard

Mozilla planet - ma, 06/06/2022 - 21:01
Those of us in southern California are shaking our heads. I like the City of (San Buena)Ventura fine, but Ventura isn't exactly in the same snobbish class as, you know, Big Sur or Monterey. I mean, if they really wanted they could have had macOS Camarillo, or macOS Thousand Oaks, or maybe even macOS Calabasas, even though that sounds like those gourd urns you buy at Pier 1 or a euphemism for unmentionable body parts, but anyway. (macOS Malibu! Buy all her friends!) There was a lot of buzz over the possibility this could have been macOS Mammoth, and even Wikipedia went all-in. I can see why they didn't, because the jokes would have flown thick and heavy if it had turned out as big and ponderous as the name, but when you give up the chance at Point Mugu or Oxnard (or La Conchita: the operating system that collapses on itself!) for a pretty if less ostentatious coastal community, someone at Apple just isn't thinking big. Or, for that matter, mammoth.
Categorieën: Mozilla-nl planet