Mozilla Nederland: The Dutch Mozilla Community

The Firefox Frontier: Sharing links via email just got easier thanks to Email Tabs

Mozilla planet - Mon, 12/11/2018 - 16:02

If your family is anything like ours, the moment the calendar flips to October, you’re getting texts and emails asking for holiday wish lists. Email remains one of the top … Read more

The post Sharing links via email just got easier thanks to Email Tabs appeared first on The Firefox Frontier.


The Mozilla Blog: Firefox Ups the Ante with Latest Test Pilot Experiment: Price Wise and Email Tabs

Mozilla planet - Mon, 12/11/2018 - 16:00

Over the last few years, the Test Pilot team has developed innovative features for Firefox desktop and mobile, collaborating directly with Firefox users to improve the browser – from reminders to return to a tab on your desktop to a simple and secure way to keep track of your passwords.

Today, just in time for the holiday shopping season, the Firefox Test Pilot team is introducing Price Wise and Email Tabs — the latest experimental features designed to give users more choice and transparency when shopping online. These game-changing desktop tools are sure to make shopping a breeze with more options to save, share, track and shop. We’ve also made a few updates to the Test Pilot program itself to make it even easier to become part of the growing community of Firefox users testing new features.

Price Wise – Track prices across major retailers and get notified when the price drops

Online comparison shopping is more popular than ever, but it’s often hard to know when to buy to get the best deal. With Firefox Price Wise, you can add products to your Price Watcher list and get a desktop notification automatically every time the price drops. Users can even click through directly from their list to purchase as soon as the price changes, making online shopping more affordable and efficient. The feature is currently only available in the U.S., and works with products from five major retailers: Best Buy, eBay, Amazon, Walmart, and The Home Depot. These retailers were among the top 10 visited by Firefox users, and we’re working to expand to more retailers in the future.



Email Tabs – Save and share content seamlessly as you browse the web

While there are many tools to help users share and save links when browsing, research shows that most of us still rely on email to get the job done – a manual process that requires multiple steps and services. We think there’s a better way. With Email Tabs, you can select and send links to one or many open tabs all within Firefox in a few short steps, making it easier than ever to share your holiday gift list, Thanksgiving recipes or just about anything else.

To start, click the Email Tabs icon at the top of the browser, select the tabs you want and decide how much of the content you want to send – just the links, a screenshot preview or full text – then hit send and it’ll automatically be sent to your Gmail inbox.

Decide how you want to send, whether it’s links, a screenshot preview or full text

How about saving the links for future reference? Email Tabs also lets you copy multiple tabs to Clipboard for outside sharing. The feature only works with Gmail right now, but we’re working on adding more clients in the near future. This will be seamless if you’re logged in to Gmail already; if not, you can log in when prompted.

Copy one or multiple tabs to Clipboard

And of course, the best part of Price Wise and Email Tabs? With Firefox private browsing and content blocking features, you can shop online with extra protection against tracking this holiday season.

Improved Test Pilot for Users to Shape Firefox

We appreciate the thousands of Firefox users who have participated in the Test Pilot program since we started this journey. It’s their voice and impact that have motivated and inspired us to continue to develop features and services. Thanks to their support, we’re happy to share that several of our experiments are ready for graduation.

Send, which lets you upload and encrypt large files (up to 1GB) to share online, will be updated and unveiled later this year. Our Summer experiments, Firefox Color, which allows you to customize several different elements of your browser, including background texture, text, icons, the toolbar and highlights, and Side View, which allows you to view two different browser tabs in the same tab, within the same browser window, will graduate as standalone extensions.

We’re always working to improve our Test Pilot program to encourage Firefox users to participate and provide feedback on the latest Firefox features. With this version of Test Pilot, we’ve simplified the steps, making it easier than before for users to participate. To learn more about our revamped Test Pilot program and to help us test and evaluate a variety of potential Firefox tools, visit testpilot.firefox.com.

 

The post Firefox Ups the Ante with Latest Test Pilot Experiment: Price Wise and Email Tabs appeared first on The Mozilla Blog.


Wladimir Palant: As far as I'm concerned, email signing/encryption is dead

Mozilla planet - Mon, 12/11/2018 - 14:08

It’s that time of year again when sending emails from Thunderbird fails with an error message:

The certificates I use to sign my emails have expired. So I once again need to go through the process of getting replacements. Or I could just give up on email signing and encryption. Right now, I am leaning towards the latter.

Why did I do it in the first place?

A while back, I used to communicate a lot with users of my popular open source project. So it made sense to sign emails and let people verify that it really was me writing. It also gave people a way to encrypt their communication with me.

The decision in favor of S/MIME rather than PGP wasn’t because of any technical advantage. The support for S/MIME is simply built into many email clients by default, so the chances that the other side would be able to recognize the signature were higher.

How did this work out?

In reality, I had a number of confused users asking about that “attachment” I sent them. What were they supposed to do with this smime.p7s file?

Over the years, I received mails from more than 7000 email addresses. Only 72 signed their emails with S/MIME, 52 used PGP to sign. I only exchanged encrypted mails with one person.

What’s the point of email signing?

The trouble is, signing mails is barely worth it. If somebody receives an unsigned mail, they won’t go out of their way to verify the sender. Most likely, they won’t even notice, because humans are notoriously bad at recognizing the absence of something. But even if they do, unsigned is what mails usually look like.

Add to this that the majority of mail users are using webmail now, so their email clients have no support for either S/MIME or PGP. Nor is it realistic to add this support without introducing a trusted component such as a browser extension. And for people who didn’t want to install a dedicated email client, how likely are they to install a browser extension, even if a trustworthy solution existed?

Expecting end users to take care of sender verification just isn’t realistic. Instead, approaches like SPF and DKIM emerged. While these aren’t perfect and require you to trust your mail provider, fake sender addresses are largely a solved issue now.

Wouldn’t end-to-end encryption be great?

Of course, we now know about state-level actors spying on internet traffic; at least since 2013 there is no denying it. So there has been tremendous success in deprecating unencrypted HTTP traffic. Shouldn’t the same be done for emails?

Sure, but I just don’t see it happening by means of individual certificates. Even the tech crowd struggles when it comes to mobile email usage. As for the rest of the world, good luck explaining to them why they need to jump through so many hoops, starting with why webmail is a bad choice. In fact, we considered rolling out email encryption throughout a single company and had to give up. The setup was simply too complicated and limited the possible use cases too much.

So encrypting email traffic is now done by enabling SSL in all those mail relays. Not really end-to-end encryption, with the mail text visible on each of those relays. Not entirely safe either, as long as the unencrypted fallback still exists — an attacker listening in the middle can always force the mail servers to fall back to an unencrypted connection. But at least passive eavesdroppers will be dealt with.

But what if S/MIME or PGP adoption increases to 90% of the population?

Good luck with that. As much as I would love to live in this perfect world, I just don’t see it happening. It’s all a symptom of the fact that security is bolted on top of email. I’m afraid that if we really want end-to-end encryption, we’ll need an entirely different protocol. Most importantly, secure transmissions should be the default rather than an individual choice. And then we’ll only have to validate the approach and make sure it’s not a complete failure.


Mozilla Reps Community: Rep of the Month – October 2018

Mozilla planet - Mon, 12/11/2018 - 13:59

Please join us in congratulating Tim Maks van den Broek, our Rep of the Month for October 2018!

Tim is one of our most active members in the Dutch community. During his 15+ years as a Mozilla volunteer he has touched many parts of the project. More recently, his focus has been on user support, and he is active in our Reps Onboarding team.


On the Onboarding Team he dedicates time to new Reps joining the project, ensuring a smooth start as they get to know our processes and work. He is also helping the Participation Systems team in operationalizing (i.e. bug fixing) identity and access management at Mozilla (known as the IAM login system).

To congratulate him, please head over to the Discourse topic!


Cameron Kaiser: ICYMI: what's new on Talospace

Mozilla planet - Mon, 12/11/2018 - 01:02
In the shameless plug category, in case you missed them, two original articles on Talospace, our sister blog: making your Talos II into an IBM pSeries (yes, you can run AIX on a Talos II with Linux KVM), and roadgeeking with the Talos II (because the haters gotta hate and say POWER9 isn't desktop ready, which is just FUD FUD FUD).

Daniel Stenberg: HTTP/3

Mozilla planet - Sun, 11/11/2018 - 19:14

The protocol that's been called HTTP-over-QUIC for quite some time has now changed its name and will officially become HTTP/3. This was triggered by this original suggestion by Mark Nottingham.

The QUIC Working Group in the IETF works on creating the QUIC transport protocol. QUIC is a TCP replacement done over UDP. Originally, QUIC was started as an effort by Google, and back then it was more of an "HTTP/2-encrypted-over-UDP" protocol.

When the work took off in the IETF to standardize the protocol, it was split up into two layers: the transport and the HTTP parts. The idea being that this transport protocol can be used to transfer other data too and it's not just done explicitly for HTTP or HTTP-like protocols. But the name was still QUIC.

People in the community have referred to these different versions of the protocol using informal names such as iQUIC and gQUIC to separate the QUIC protocols from the IETF and Google (since they differed quite a lot in the details). The protocol that sends HTTP over "iQUIC" was called "hq" (HTTP-over-QUIC) for a long time.

Mike Bishop scared the room at the QUIC working group meeting at IETF 103 when he presented this slide with what could be thought of as almost a logo...

On November 7, 2018, Dmitri of LiteSpeed announced that they and Facebook had successfully done the first interop ever between two HTTP/3 implementations. Mike Bishop's follow-up presentation in the HTTPbis session on the topic can be seen here. The consensus at the end of that meeting was that the new name is HTTP/3!

No more confusion. HTTP/3 is the coming new HTTP version that uses QUIC for transport!


Niko Matsakis: After NLL: Moving from borrowed data and the sentinel pattern

Mozilla planet - Sat, 10/11/2018 - 06:00

Continuing on with my “After NLL” series, I want to look at another common error that I see and its solution: today’s choice is about moves from borrowed data and the Sentinel Pattern that can be used to enable them.

The problem

Sometimes when we have &mut access to a struct, we have a need to temporarily take ownership of some of its fields. Usually what happens is that we want to move out from a field, construct something new using the old value, and then replace it. So for example imagine we have a type Chain, which implements a simple linked list:

    enum Chain {
        Empty,
        Link(Box<Chain>),
    }

    impl Chain {
        fn with(next: Chain) -> Chain {
            Chain::Link(Box::new(next))
        }
    }

Now suppose we have a struct MyStruct and we are trying to add a link to our chain; we might have something like:

    struct MyStruct {
        counter: u32,
        chain: Chain,
    }

    impl MyStruct {
        fn add_link(&mut self) {
            self.chain = Chain::with(self.chain);
        }
    }

Now, if we try to run this code, we will receive the following error:

    error[E0507]: cannot move out of borrowed content
     --> ex1.rs:7:30
      |
    7 |         self.chain = Chain::with(self.chain);
      |                                  ^^^^ cannot move out of borrowed content

The problem here is that we need to take ownership of self.chain, but you can only take ownership of things that you own. In this case, we only have borrowed access to self, because add_link is declared as &mut self.

To put this as an analogy, it is as if you had borrowed a really nifty Lego building that your friend made so you could admire it. Then, later, you are building your own Lego thing and you realize you would like to take some of the pieces from their building and put them into yours. But you can’t do that – those pieces belong to your friend, not you, and that would leave a hole in their building.

Still, this is kind of annoying – after all, if we look at the larger context, although we are moving self.chain, we are going to replace it shortly thereafter. So maybe it’s more like – we want to take some blocks from our friend’s Lego building, but not to put them into our own building. Rather, we were going to take it apart, build up something new with a few extra blocks, and then put that new thing back in the same spot – so, by the time they see their building again, the “hole” will be all patched up.

Root of the problem: panics

You can imagine us doing a static analysis that permits you to take ownership of &mut borrowed data, as long as we can see that it will be replaced before the function returns. There is one little niggly problem though: can we really be sure that we are going to replace self.chain? It turns out that we can’t, because of the possibility of panics.

To see what I mean, let’s take that troublesome line and expand it out so we can see all the hidden steps. The original line was this:

    self.chain = Chain::with(self.chain);

which we can expand to something like this:

    let tmp0 = self.chain;        // 1. move `self.chain` out
    let tmp1 = Chain::with(tmp0); // 2. build new link
    self.chain = tmp1;            // 3. replace with `tmp1`

Written this way, we can see that in between moving self.chain out and replacing it, there is a function call: Chain::with. And of course it is possible for this function call to panic, at least in principle. If it were to panic, then the stack would start unwinding, and we would never get to step 3, where we assign self.chain again. This means that there might be a destructor somewhere along the way that goes to inspect self – if it were to try to access self.chain, it would just find uninitialized memory. Or, even worse, self might be located inside of some sort of Mutex or something else, so even if our thread panics, other threads might observe the hole.

To return to our Lego analogy[1], it is as if – after we removed some pieces from our friend’s Lego set – our parents came and made us go to bed before we were able to finish the replacement piece. Worse, our friend’s parents came over during the night to pick up the set, and so now when our friend gets it back, it has this big hole in it.

One solution: sentinel

In fact, there is a way to move out from an &mut pointer – you can use the function std::mem::replace[2]. replace sidesteps the panic problem we just described because it requires you to already have a new value at hand, so that we can move out from self.chain and immediately put a replacement there.

Our problem here is that we need to do the move before we can construct the replacement we want. So, one solution then is that we can put some temporary, dummy value in that spot. I call this a sentinel value – because it’s some kind of special value. In this particular case, one easy way to get the code to compile would be to stuff in an empty chain temporarily:

    let chain = std::mem::replace(&mut self.chain, Chain::Empty);
    self.chain = Chain::with(chain);

Now the compiler is happy – after all, even if Chain::with panics, it’s not a memory safety problem. If anybody happens to inspect self.chain later, they won’t see uninitialized memory, they will see an empty chain.

To return to our Lego analogy[3], it’s as if, when we remove the pieces from our friend’s Lego set, we immediately stuff in a replacement piece. It’s an ugly piece, with the wrong color and everything, but it’s ok – because our friend will never see it.

A more robust sentinel

The compiler is happy, but are we happy? Perhaps we are, but there is one niggling detail. We wanted this empty chain to be a kind of “temporary value” that nobody ever observes – but can we be sure of that? Actually, in this particular example, we can be fairly sure… other than the possibility of panic (which certainly remains, but is perhaps acceptable, since we are in the process of tearing things down), there isn’t really much else that can happen before self.chain is replaced.

But often we are in a situation where we need to take temporary ownership and then invoke other self methods. Now, perhaps we expect that these methods will never read from self.chain – in other words, we have a kind of interprocedural conflict. For example, maybe to construct the new chain we invoke self.extend_chain instead, which reads self.counter and creates that many new links[4] in the chain:

    impl MyStruct {
        fn add_link(&mut self) {
            let chain = std::mem::replace(&mut self.chain, Chain::Empty);
            let new_chain = self.extend_chain(chain);
            self.chain = new_chain;
        }

        fn extend_chain(&mut self, mut chain: Chain) -> Chain {
            for _ in 0 .. self.counter {
                chain = Chain::with(chain);
            }
            chain
        }
    }

Now I would get a bit nervous. I think nobody ever observes this empty chain, but how can I be sure? At some point, you would like to test this hypothesis.

One solution here is to use a sentinel value that is otherwise invalid. For example, I could change my chain field to store an Option<Chain>, with the invariant that self.chain should always be Some, because if I ever observe a None, it means that add_link is in progress. In fact, there is a handy method on Option called take that makes this quite easy to do:

    struct MyStruct {
        counter: u32,
        chain: Option<Chain>, // <-- new
    }

    impl MyStruct {
        fn add_link(&mut self) {
            // Equivalent to:
            // let link = std::mem::replace(&mut self.chain, None).unwrap();
            let link = self.chain.take().unwrap();
            self.chain = Some(Chain::with(link));
        }
    }

Now, if I were to (for example) invoke add_link recursively, I would get a panic, so I would at least be alerted to the problem.

The annoying part about this pattern is that I have to “acknowledge” it every time I reference self.chain. In fact, we already saw that in the code above, since we had to wrap the new value with Some when assigning to self.chain. Similarly, to borrow the chain, we can’t just do &self.chain, but instead we have to do something like self.chain.as_ref().unwrap(), as in the example below, which counts the links in the chain:

    impl MyStruct {
        fn count_chain(&self) -> usize {
            let mut links = 0;
            let mut cursor: &Chain = self.chain.as_ref().unwrap();
            loop {
                match cursor {
                    Chain::Empty => return links,
                    Chain::Link(c) => {
                        links += 1;
                        cursor = c;
                    }
                }
            }
        }
    }

So, the pro of using Option is that we get stronger error detection. The con is that we have an ergonomic penalty.

Observation: most collections do not allocate when empty

One important detail when mucking about with sentinels: creating an empty collection is generally “free” in Rust, at least for the standard library. This is important because I find that the fields I wish to move from are often collections of some kind or another. Indeed, even in our motivating example here, the Chain::Empty sentinel is an “empty” collection of sorts – but if the field you wish to move were e.g. a Vec<T> value, then you could just as well use Vec::new() as a sentinel without having to worry about wasteful memory allocations.
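
For example, here is a minimal sketch of that cheap-sentinel pattern applied to a Vec field (the Buffer type and double_all method are hypothetical names, not from the examples above):

    struct Buffer {
        items: Vec<u32>,
    }

    impl Buffer {
        fn double_all(&mut self) {
            // `Vec::new()` does not allocate, so this sentinel is free.
            let items = std::mem::replace(&mut self.items, Vec::new());
            // If the transformation below were to panic, `self.items` would
            // simply be observed as an empty (but valid) vector.
            self.items = items.into_iter().map(|x| x * 2).collect();
        }
    }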

An alternative to sentinels: prevent unwinding through abort

There is a crate called take_mut on crates.io that offers a convenient alternative to installing a sentinel, although it does not apply in all scenarios. It also raises some interesting questions about “unsafe composability” that worry me a bit, which I’ll discuss at the end.

To use take_mut to solve this problem, we would rewrite our add_link function as follows:

    fn add_link(&mut self) {
        take_mut::take(&mut self.chain, |chain| {
            Chain::with(chain)
        });
    }

The take function works like so: first, it uses unsafe code to move the value from self.chain, leaving uninitialized memory in its place. Then, it gives this value to the closure, which in this case will execute Chain::with and return a new chain. This new chain is then installed to fill the hole that was left behind.

Of course, this begs the question: what happens if the Chain::with function panics? Since take has left a hole in the place of self.chain, it is in a tough spot: the answer from the take_mut library is that it will abort the entire process. That is, unlike with a panic, there is no controlled shutdown. There is some precedent for this: we do the same thing in the event of stack overflow, memory exhaustion, and a “double panic” (that is, a panic that occurs when unwinding another panic).

The idea of aborting the process is that, unlike unwinding, we are guaranteeing that there are no more possible observers for that hole in memory. Interestingly, in writing this article, I realized that aborting the process does not compose with some other unsafe abstractions you might want. Imagine, for example, that you had memory mapped a file on disk and were supplying an &mut reference into that file to safe code. Or, perhaps you were using shared memory between two processes, and had some kind of locked object in there – after locking, you might obtain an &mut into the memory of that object. Put another way, if the take_mut crate is safe, that means that an &mut can never point to memory not ultimately “owned” by the current process. I am not sure if that’s a good decision for us to make – though perhaps the real answer is that we need to permit unsafe crates to be a bit more declarative about the conditions they require from other crates, as I talk a bit about in this older blog post on observational equivalence.

My recommendation

I would advise you to use some variant of the sentinel pattern. I personally prefer to use a “signaling sentinel”[5] like Option if it would be a bug for other code to read the field, unless the range of code where the value is taken is very simple. So, in our original example, where we just invoked Chain::with, I would not bother with an Option – we can locally see that self does not escape. But in the variant where we recursively invoke methods on self, I would, because there it would be possible to recursively invoke self.add_link or otherwise observe self.chain in this intermediate state.

It’s a bit annoying to use Option for this because it’s so explicit. I’ve sometimes created a Take<T> type that wraps an Option<T> and implements DerefMut<Target = T>, so it can transparently be used as a T in most scenarios – but which will panic if you attempt to deref the value while it is “taken”. This might be a nice library, if it doesn’t exist already.
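
To make that concrete, here is a minimal sketch of what such a wrapper might look like (the Take name and its method set are illustrative; as noted above, this is not an existing library):

    use std::ops::{Deref, DerefMut};

    struct Take<T> {
        value: Option<T>,
    }

    impl<T> Take<T> {
        fn new(value: T) -> Take<T> {
            Take { value: Some(value) }
        }

        // Move the value out, leaving the sentinel (`None`) behind.
        fn take(&mut self) -> T {
            self.value.take().expect("value already taken")
        }

        // Restore the value once the temporary ownership is done.
        fn put_back(&mut self, value: T) {
            self.value = Some(value);
        }
    }

    impl<T> Deref for Take<T> {
        type Target = T;

        // Panics if dereferenced while the value is taken.
        fn deref(&self) -> &T {
            self.value.as_ref().expect("value is currently taken")
        }
    }

    impl<T> DerefMut for Take<T> {
        fn deref_mut(&mut self) -> &mut T {
            self.value.as_mut().expect("value is currently taken")
        }
    }

With this, a field declared as chain: Take<Chain> could be read through plain dereferences most of the time, while add_link would call self.chain.take() and self.chain.put_back(...) around the move.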

One other thing to remember: instead of using a sentinel, you may be able to avoid moving altogether, and sometimes that’s better. For example, if you have an &mut Vec<T> and you need ownership of the T values within, you can use the drain iterator method. The only real difference between drain and into_iter is that drain leaves an empty vector behind once iteration is complete.
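
A quick sketch of that approach (consume_all is a hypothetical helper, not from the post):

    fn consume_all(items: &mut Vec<String>) {
        // `drain(..)` yields owned `String`s while only borrowing the Vec;
        // once iteration finishes, the Vec is left empty but still valid.
        for item in items.drain(..) {
            println!("consumed: {}", item);
        }
    }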

(Similarly, if you are writing an API and have the option of choosing between writing a fn(self) -> Self sort of signature vs fn(&mut self), you might adopt the latter, as it gives your callers more flexibility. But this is a bit subtle; it would make a good topic for the Rust API guidelines, but I didn’t find it there.)

Discussion

If you’d like to discuss something in this post, there is a dedicated thread on the users.rust-lang.org site.

Appendix A. Possible future directions

Besides creating a more ergonomic library to replace the use of Option as a sentinel, I can think of a few plausible extensions to the language that would alleviate this problem somewhat.

Tracking holes

The most obvious change is that we could plausibly extend the borrow checker to permit moves out of an &mut, so long as the value is guaranteed to be replaced before the function returns or panics. The “or panics” bit is the tricky part, of course.

Without any other extensions to the language, we would have to consider virtually every operation to “potentially panic”, which would be pretty limiting. Our “motivating example” from this post, for example, would fail the test, because the Chain::with function – like any function – might potentially panic. The main thing this would do is allow functions like std::mem::replace and std::mem::swap to be written in safe code, as well as other more complex rotations. Handy, but not earth shattering.
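
As a reminder of what such “rotations” look like, here is a small one built from std::mem::swap in today’s safe Rust (rotate3 is a hypothetical helper; the proposed extension would additionally let swap itself be written without unsafe):

    fn rotate3<T>(a: &mut T, b: &mut T, c: &mut T) {
        // After these two swaps: `a` holds the old `c`,
        // `b` holds the old `a`, and `c` holds the old `b`.
        std::mem::swap(a, b);
        std::mem::swap(a, c);
    }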

If we wanted to go beyond that, we would have to start looking into effect type systems, which allow us to annotate functions with things like “does not panic” and so forth. I am pretty nervous about taking that particular “step up” in complexity – though there may be other use cases (for example, to enable FFI interoperability with things that longjmp, we might want ways for functions to declare whether they panic, and how, anyway). But it feels like at best this will be a narrow tool that we wouldn’t expect people to use broadly.

In order to avoid annotation, @eddyb has tossed around the idea of an “auto trait”-style effect system. Basically, you would be able to state that you want to take as argument a “closure that can never call the function X” – in this case, that might mean “a closure that can never invoke panic!”. The compiler would then do a conservative analysis of the closure’s call graph to figure out if it works. This would then permit a variant of the take_mut crate where we don’t have to worry about aborting the process, because we know the closure never panics. Of course, just like auto traits, this raises semver concerns – sure, your function doesn’t panic now, but does that mean you promise never to make it panic in the future?[6]

Permissions in, permissions out

There is another possible answer as well. We might generalize Rust’s borrowing system to express the idea of a “borrow that never ends” – presently that’s not something we can express. The idea would be that a function like add_link would take in an &mut but somehow express that, if a panic were to occur, the &mut is fully invalidated.

I’m not particularly hopeful on this as a solution to this particular problem. There is a lot of complexity to address and it just doesn’t seem even close to worth it.

There are however some other cases where similar sorts of “permission juggling” might be nice to express. For example, people sometimes want the ability to have a variant on insert – basically a function that inserts a T into a collection and then returns a shared reference &T to inserted data. The idea is that the caller can then go on to do other “shared” operations on the map (e.g., other map lookups). So the signature would look a little like this:

    impl<T> SomeCollection<T> {
        fn insert_then_get(&mut self, data: T) -> &T {
            // ...
        }
    }

This signature is of course valid in Rust today, but it has an existing meaning that we can’t change. The meaning today is that the function requires unique access to self – and that unique access has to persist until we’ve finished using the return value. It’s precisely this interpretation that makes methods like Mutex::get_mut sound.

If we were to move in this direction, we might look to languages like Mezzo for inspiration, which encode this notion of “permissions in, permissions out” more directly[7]. I’m definitely interested in investigating this direction, particularly if we can use it to address other proposed “reference types” like &out (for taking references to uninitialized memory which you must initialize), &move, and so forth. But this seems like a massive research effort, so it’s hard to predict just what it would look like for Rust, and I don’t see us adopting this sort of thing in the near to mid term.

Panic = Abort having semantic impact

Shortly after I posted this, Gankro tweeted the following:

[chanting in distance]
Appendix A With Panic=Abort Having Semantic Impact

— Alexis Beingessner (@Gankro) November 10, 2018

I actually meant to talk about that, so I’m adding this quick section. You may have noticed that panics and unwinding are a big thing in this post. Unwinding, however, is only optional in Rust – many users choose instead to convert panics into a hard abort of the entire process. Presently, the type and borrow checkers do not consider this option in any way, but you could imagine them taking it into account when deciding whether a particular bit of code is safe, particularly in lieu of a more fancy effect system.

I am not a big fan of this. For one thing, it seems like it would encourage people to opt into “panic = abort” just to avoid a sentinel value here and there, which would lead to more of a split in the ecosystem. But also, as I noted when discussing the take_mut crate, this whole approach presumes that an &mut reference can only ever refer to memory that is owned by the current process, and I’m not sure that’s something we wish to state.

Still, food for thought.

Footnotes
  1. I really like this Lego analogy. You’ll just have to bear with me.

  2. std::mem::replace is a super useful function in all kinds of scenarios; worth having in your toolbox.

  3. OK, maybe I’m taking this analogy too far. Sorry. I need help.

  4. I bet you were wondering what that counter field was for – gotta admire that Chekhov’s Gun action.

  5. i.e., some sort of sentinel where a panic occurs if the memory is observed

  6. It occurs to me that we now have a corpus of crates at various versions. It would be interesting to see how common it is to make something panic which did not use to, as well as to make other sorts of changes.

  7. Also related: fractional permissions and a whole host of other things.


Dennis Schubert: Observing Broken Image Intersections

Mozilla planet - Sat, 10/11/2018 - 03:39

So, I thought I would share this little story about a broken website I looked at recently, just to give an example of how weird the web can be, and how gray implementing specifications can sometimes be.

Imagine a site[1] with a list of news articles and lots of thumbnails next to them, and imagine the served code looks something like:

CSS:

    .thumbnail {
        overflow: hidden;
    }

    .thumbnail img {
        height: auto;
        margin-left: -42px;
        width: 213px;
    }

HTML:

    <div class="thumbnail">
        <img data-lazyload-src="foo.jpg" />
    </div>

That does not look too fancy, does it? The data-lazyload-src attribute on the image suggests they are doing some kind of image lazyloading, but that is probably a good thing given the site is mobile optimized as well, and you do not want to load a lot of thumbnails at once. The site is actually pretty smart about it, and is using a library that implements an IntersectionObserver to be notified whenever an <img> scrolls into view, to then trigger loading the image. Pretty cool stuff.

Now, the fun part. We received a report that the site is working fine in Chrome, but for some reason, in Firefox, the thumbnails never load. Pretty bad.

After evenly distributing breakpoints in code I deemed relevant, it turned out the IntersectionObserver never triggers the image loading. As my knowledge about the IntersectionObserver was still stuck in 2014 (which is pretty much nothing, given the work on it started in 2015), I took the time to read the spec, because clearly, Firefox has a compat issue breaking that website. And well, I actually found a compat issue, but that one was completely irrelevant to the issue I was originally debugging.

So, back to the beginning. Looking again at their IntersectionObserver, I realized that Firefox is calling the callback for a lot of images, but in Firefox, IntersectionObserverEntry.isIntersecting is false, even for the images where it should be true as they are scrolled into view. In Chrome, everything is fine, and some of the thumbnails are reported to be intersecting.

Before you scroll up to check the source code again, let me remind you that images are rendered as display: inline; by default, as you surely remember[2]. Now, what do you expect the CSS to do in the default case where no image is loaded:

  1. Scale the image to 213px width with some magic height.
  2. No dimensions applied to the image, because it is display: inline;, d’oh!
  3. Render a “broken image” icon, but it is replaced with a picture of a cute kitten.

If you guessed 1, 2, or 3: Congratulations, you are wrong! As we all know, CSS is easy, and this is one of those cases where CSS is super easy. So, let me explain this simple CSS behavior by talking spec for a second here. <img> is, amongst some others[3], a so-called replaced element. The spec accurately describes those as

An element whose content is outside the scope of the CSS formatting model, such as an image, embedded document, or applet. For example, the content of the HTML IMG element is often replaced by the image that its “src” attribute designates

which is basically the spec authors telling you “yeah, we also do not know how it looks like”. For images, there are some rules on how the browser should render things:

  • If the element does not represent an image, but the element already has intrinsic dimensions (e.g. from the dimension attributes or CSS rules), and either:

    • the user agent has reason to believe that the image will become available and be rendered in due course, or
    • the element has no alt attribute, or
    • the Document is in quirks mode

    The user agent is expected to treat the element as a replaced element whose content is the text that the element represents, if any, optionally alongside an icon indicating that the image is being obtained (if applicable).

  • If the element is an img element that represents some text and the user agent does not expect this to change

    The user agent is expected to treat the element as a non-replaced phrasing element whose content is the text, optionally with an icon indicating that an image is missing, so that the user can request the image be displayed or investigate why it is not rendering. In non-graphical contexts, such an icon should be omitted.

  • If the element is an img element that represents nothing and the user agent does not expect this to change

    The user agent is expected to treat the element as an empty inline element. (In the absence of further styles, this will cause the element to essentially not be rendered.)

There are some nasty spec language bits in there, but in order to not bother you more than I need, I will skip those, but you get the idea. If you scroll back up to the source, you will notice two things: the image tag in question does not have a src attribute, and to add more fun to the mix, it also does not have an alt attribute, but it does have intrinsic dimensions, as they are defined via CSS.

So, technically, the first case is true: the element is not an image, and it also does not have an alt attribute. But what does “treat the element as a replaced element whose content is the text that the element represents” even mean? How are we supposed to replace nothing with text? Because there is no text, the last case is also true, because there is nothing there, and because there is no src attribute to be loaded, the browser also does not expect this to change.

To my understanding, this means the browser can replace the element with either something or with nothing. Well, let’s see what different browsers do:

HTML:

    <img><hr>
    <img src="broken image!"><hr>
    <img alt=""><hr>
    <img alt="poetic alt text"><hr>

Comparison of the code example's rendering in Firefox, Chrome, Edge, and Safari

As it turns out, browsers disagree in our relevant case. In Firefox, we render nothing as an inline element, and Chrome decides to render something empty as inline-block.

Even worse, I am having a hard time figuring out who is right and who is wrong here. There are two Chrome issues (one, two) about this specific scenario, and a Firefox patch landing just as I write this that brings Firefox closer to Chrome, at least in the no-src scenario. But still, there seems to be a general disagreement on what the right thing is.

To end this whole post: if you paid attention[4], you have figured out the original issue by now.

Because Firefox renders nothing (that is actually not entirely true, but let us act like it is, because the reality would turn this post into a proper scientific paper), there is nothing that can ever intersect the viewport, so the IntersectionObserver returns, rightfully so, false. On Chrome, however, there is something that is 213px wide, so there is something that intersects the viewport, so there is something for the observer to report on.

And there is our issue. Quite simple, eh?

The sad thing in all of this is that there is a very, very simple solution:

    .thumbnail img {
        display: inline-block;
    }

And they would live happily ever after.

  1. This is totally not webcompat.com bug #18554, and I am totally not trying to write a miketaylr.com style blog post here. 

  2. Yeah, me neither. 

  3. audio, canvas, embed, iframe, input, object, and video, if you really want to know. 

  4. Yeah, me neither. 


Mike Hoye: The Evolution Of Open

Mozilla planet - Fri, 09/11/2018 - 23:00

This started its life as a pair of posts to the Mozilla governance forum, about the mismatch between private communication channels and our principles of open development. It’s a little long-winded, but I think it broadly applies not just to Mozilla but to open source in general. This version of it interleaves those two posts into something I hope is coherent, if kind of rambly. Ultimately the only point I want to make here is that the nature of openness has changed, and while it doesn’t mean we need to abandon the idea as a principle or as a practice, we can’t ignore how much has changed or stay mired in practices born of a world that no longer exists.

If you’re up for the longer argument, well, you can already see the wall of text under this line. Press on, I believe in you.

Even though open source software has essentially declared victory, I think that openness as a practice – not just code you can fork but the transparency and accessibility of the development process – matters more than ever, and is in a pretty precarious position. I worry that if we – the Royal We, I guess – aren’t willing to grow and change our understanding of openness and the practical realities of working in the open, and build tools to help people navigate those realities, that it won’t be long until we’re worse off than we were when this whole free-and-open-source-software idea got started.

To take that a step further: if some of the aspirational goals of openness and open development are the ideas of accessibility and empowerment – that reducing or removing barriers to participation in software development, and granting people more agency over their lives thereby, is self-evidently noble – then I think we need to pull apart the different meanings of the word “open” that we use as if the same word meant all the same things to all the same people. My sense is that a lot of our discussions about openness are anchored in the notion of code as speech, of people’s freedom to move bits around and about the limitations placed on those freedoms, and I don’t think that’s enough.

A lot of us got our start when an internet connection was a novelty, computation was scarce and state was fragile. If you – like me – are a product of this time, “open” as in “open source” is likely to be a core part of your sense of personal safety and agency; you got comfortable digging into code, standing up your own services and managing your own backups pretty early, because that was how you maintained some degree of control over your destiny, how you avoided the indignities of data loss, corporate exploitation and community collapse.

“Open” in this context inextricably ties source control to individual agency. The checks and balances of openness in this context are about standards, data formats, and the ability to export or migrate your data away from sites or services that threaten to go bad or go dark. This view has very little to say – and is often hostile to the idea of – granular access restrictions and the ability to impose them, those being the tools of this worldview’s bad actors.

The blind spots of this worldview are the products of a time where someone on the inside could comfortably pretend that all the other systems that had granted them the freedom to modify this software simply didn’t exist. Those access controls were handled, invisibly, elsewhere; university admission, corporate hiring practices or geography being just a few examples of the many, many barriers between the network and the average person.

And when we’re talking about blind spots and invisible social access controls, of course, what we’re really talking about is privilege. “Working in the open”, in a world where computation was scarce and expensive, meant working in front of an audience that was lucky enough to go to university or college, whose parents could afford a computer at home, who lived somewhere with broadband or had one of the few jobs whose company opened low-numbered ports to the outside world; what it didn’t mean was doxxing, cyberstalking, botnets, gamergaters, weaponized social media tooling, carrier-grade targeted-harassment-as-a-service and state-actor psy-op/disinformation campaigns rolling by like bad weather. The relentless, grinding day-to-day malfeasance that’s the background noise of this grudgefuck of a zeitgeist we’re all stewing in just didn’t inform that worldview, because it didn’t exist.

In contrast, a more recent turn on the notion of openness is one of organizational or community openness; that is, openness viewed through the lens of the accessibility and the experience of participation in the organization itself, rather than unrestricted access to the underlying mechanisms. Put another way, it puts the safety and transparency of the organization and the people in it first, and considers the openness of work products and data retention as secondary; sometimes (though not always) the open-source nature of the products emerges as a consequence of the nature of the organization, but the details of how that happens are community-first, code-second (and sometimes code-sort-of, code-last or code-never). “Openness” in this context is about accessibility and physical and emotional safety, about the ability to participate without fear. The checks and balances are principally about inclusivity, accessibility and community norms; codes of conduct and their enforcement.

It won’t surprise you, I suspect, to learn that environments that champion this brand of openness are much more accessible to women, minorities and otherwise marginalized members of society that make up a vanishingly small fraction of old-school open source culture. The Rust and Python communities are doing good work here, and the team at Glitch have done amazing things by putting community and collaboration ahead of everything else. But a surprising number of tool-and-platform companies, often in “pink-collar” fields, have taken the practices of open community building and turned themselves into something that, code or no, looks an awful lot like the best of what modern open source has to offer. If you can bring yourself to look past the fact that you can’t fork their code, Salesforce – Salesforce, of all the damn things – has one of the friendliest, most vibrant and supportive communities in all of software right now.

These two views aren’t going to be easy to reconcile, because the ideas of what “accountability” looks like in both contexts – and more importantly, the mechanisms of accountability built in to the systems born from both contexts – are worse than just incompatible. They’re not even addressing something the other worldview is equipped to recognize as a problem. Both are in some sense of the word open, both are to a different view effectively closed and, critically, a lot of things that look like quotidian routine to one perspective look insanely, unacceptably dangerous to the other.

I think that’s the critical schism in the dialogue: the wildly mismatched understandings of the nature of risk and freedom. Seen in that light, the recent surge of attention being paid to federated systems feels like a weirdly reactionary appeal to how things were better in the old days.

I’ve mentioned before that I think it’s a mistake to think of federation as a feature of distributed systems, rather than as consequence of computational scarcity. But more importantly, I believe that federated infrastructure – that is, a focus on distributed and resilient services – is a poor substitute for an accountable infrastructure that prioritizes a distributed and healthy community.  The reason Twitter is a sewer isn’t that Twitter is centralized, it’s that Jack Dorsey doesn’t give a damn about policing his platform and Twitter’s board of directors doesn’t give a damn about changing his mind. Likewise, a big reason Mastodon is popular with the worst dregs of the otaku crowd is that if they’re on the right instance they’re free to recirculate shit that’s so reprehensible even Twitter’s boneless, soporific safety team can’t bring themselves to let it slide.

That’s the other part of federated systems we don’t talk about much – how much the burden of safety shifts to the individual. The cost of evolving federated systems that require consensus to interoperate is so high that structural flaws are likely to be there for a long time, maybe forever, and the burden of working around them falls on every endpoint to manage for themselves. IRC’s (Remember IRC?) ongoing borderline-unusability is a direct product of a notion of openness that leaves admins few better tools than endless spammer whack-a-mole. Email is (sort of…) decentralized, but can you imagine using it with your junkmail filters off?

I suppose I should tip my hand at this point, and say that as much as I value the source part of open source, I also believe that people participating in open source communities deserve to be free not only to change the code and build the future, but to be free from the brand of arbitrary, mechanized harassment that thrives on unaccountable infrastructure, federated or not. We’d be deluding ourselves if we called systems that are just too dangerous for some people to participate in at all “open” just because you can clone the source and stand up your own copy. And I am absolutely certain that if this free software revolution of ours ends up in a place where asking somebody to participate in open development is indistinguishable from asking them to walk home at night alone, then we’re done. People cannot be equal participants in environments where they are subject to wildly unequal risk. People cannot be equal participants in environments where they are unequally threatened. And I’d have a hard time asking a friend to participate in an exercise that had no way to ablate or even mitigate the worst actions of the internet’s worst people, and still think of myself as a friend.

I’ve written about this before:

I’d like you to consider the possibility that that’s not enough.

What if we agreed to expand what freedom could mean, and what it could be. Not just “freedom to” but a positive defense of opportunities to; not just “freedom from”, but freedom from the possibility of.

In the long term, I see that as the future of Mozilla’s responsibility to the Web; not here merely to protect the Web, not merely to defend your freedom to participate in the Web, but to mount a positive defense of people’s opportunities to participate. And on the other side of that coin, to build accountable tools, systems and communities that promise not only freedom from arbitrary harassment, but even freedom from the possibility of that harassment.

More generally, I still believe we should work in the open as much as we can – that “default to open”, as we say, is still the right thing – but I also think we and everyone else making software need to be really, really honest with ourselves about what open means, and what we’re asking of people when we use that word. We’re probably going to find that there’s not one right answer. We’re definitely going to have to build a bunch of new tools.  But we’re definitely not going to find any answers that matter to the present day, much less to the future, if the only place we’re looking is backwards.

[Feel free to email me, but I’m not doing comments anymore. Spammers, you know?]


Hacks.Mozilla.Org: Performance Updates and Hosting Moves: MDN Changelog for October 2018

Mozilla planet - Fri, 09/11/2018 - 18:03
Done in October

Here’s what happened in October to the code, data, and tools that support MDN Web Docs, and what’s planned for November.

We shipped some changes designed to improve MDN’s page load time. The effects were not as significant as we’d hoped.

Shipped performance improvements

Our sidebars, like the Related Topics sidebar on <summary>, use a “mozToggler” JavaScript method to implement open and collapsed sections. This uses jQueryUI’s toggle effect, and is applied dynamically at load time. Tim Kadlec replaced it with the <details> element (KumaScript PR 789 and Kuma PR 4957), which semantically models open and collapsed sections. The <details> element is supported by most current browsers, with the notable exception of Microsoft Edge, which is supported with a polyfill.

Two copies of Chrome's performance tool, one showing 241ms with mozTogglers, and the other showing 94ms without it.

We expected at least 150ms improvement based on bench tests

The <details> update shipped October 4th, and the 31,000 pages with sidebars were regenerated to apply the change.

A second change was intended to reduce the use of Web Fonts, which must be downloaded and can cause the page to be repainted. Some browsers, such as Firefox Focus, block web fonts by default for performance and to save bandwidth.

One strategy is to eliminate the web font entirely. We replaced OpenSans with the built-in Verdana as the body font in September (PR 4967), and then again with Arial on October 22 (PR 5023). We’re also replacing Font Awesome, implemented with a web font, with inline SVG (PR 4969 and PR 5053). We expect to complete the SVG work in November.

A second strategy is to reduce the size of the web font. The custom Zilla font, introduced with the June 2017 redesign, was reduced to standard English characters, cutting the file sizes in half on October 10 (PR 5024).

These changes have had an impact on total download size and rendering time, and we’re seeing improvements in our synthetic metrics. However, there has been no significant change in page load as measured for MDN users. In November, we’ll try some more radical experiments to learn more about the components of page load time.

A graph of page load over time, declining noisily from 5-6 seconds to 4-5 seconds over October

SpeedCurve Synthetic measurements show steady improvement, but not yet on target.

Moved MDN to MozIT

Ryan Johnson, Ed Lim, and Dave Parfitt switched production traffic from the Marketing to the IT servers on October 29th. The site was placed in read-only mode, so all the content was available during the transition. There were some small hiccups, mostly around running out of API budget for Amazon’s Elastic File System (EFS), but we handled the issues within the maintenance window.

A two-story, 19th-century building, loaded on 64 wheels, is moved down a street by a dozen workers in hardhats

“Maisenbacher House Moving 8” by Katherine Johnson, CC BY 2.0

In the weeks leading up to the cutover, the team tested deployments, updated documentation, and checked data transfer processes. They created a list of tasks and assignments, detailed the process for the migration, and planned the cleanup work after the cutover. The team’s attention to detail and continuous communication made this a smooth transition for MDN’s users, with no downtime or bugs.

The MozIT cluster is very similar to the previous MozMEAO cluster. The technical overview from the October 10, 2017 launch is still a decent guide to how MDN is deployed.

There are a handful of changes, most of which MDN users shouldn’t notice. We’re now hosting images in Docker Hub rather than quay.io. The MozMEAO cluster ran Kubernetes 1.7, and the new MozIT cluster runs 1.9. This may be responsible for more reliable DNS lookups, avoiding occasional issues when connecting to the database or other AWS services.

In November, we’ll continue monitoring the new servers, and shut down the redundant services in the MozMEAO account. We’ll then re-evaluate our plans from the beginning of the year, and prioritize the next infrastructure updates. The top of the list is reliable acceptance tests and deploys across multiple AWS zones.

Shipped tweaks and fixes

There were 352 PRs merged in October, including some important changes and fixes.

A sequencer with four effects and four steps that can be selected

A step sequencer demonstrating web audio APIs.

78 pull requests were from first-time contributors.

Planned for November

We’ll continue performance experiments in November, such as removing Font Awesome and looking for new ways to lower page load time. We’ll also continue ongoing projects, such as migrating and updating browser compatibility data and shipping more HTML examples like the one on <input>.

Ship recurring payments

In October, we shipped a new way to support MDN with one-time payments. For November, we’re working with Potato London again to add the option for monthly payments to MDN.

Interested in contributing to MDN? Don’t miss Getting started on MDN or jump right in to the Kuma repo to begin contributing code.

If you’re just getting started, take a look at the MDN wiki page for new contributors.

The post Performance Updates and Hosting Moves: MDN Changelog for October 2018 appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Nicholas Nethercote: How to get the size of Rust types with -Zprint-type-sizes

Mozilla planet - vr, 09/11/2018 - 04:42

When optimizing Rust code it’s sometimes useful to know how big a type is, i.e. how many bytes it takes up in memory. std::mem::size_of can tell you, but often you want to know the exact layout as well. For example, an enum might be surprisingly big, in which case you’ll probably want to know whether, say, one variant is much bigger than the others.

The -Zprint-type-sizes option does exactly this. Just pass it to a nightly version of rustc — it isn’t enabled on release versions, unfortunately — and it’ll print out details of the size, layout, and alignment of all types in use. For example, for this type:

enum E {
    A,
    B(i32),
    C(u64, u8, u64, u8),
    D(Vec<u32>),
}

it prints the following, plus info about a few built-in types:

print-type-size type: `E`: 32 bytes, alignment: 8 bytes
print-type-size     discriminant: 1 bytes
print-type-size     variant `A`: 0 bytes
print-type-size     variant `B`: 7 bytes
print-type-size         padding: 3 bytes
print-type-size         field `.0`: 4 bytes, alignment: 4 bytes
print-type-size     variant `C`: 23 bytes
print-type-size         field `.1`: 1 bytes
print-type-size         field `.3`: 1 bytes
print-type-size         padding: 5 bytes
print-type-size         field `.0`: 8 bytes, alignment: 8 bytes
print-type-size         field `.2`: 8 bytes
print-type-size     variant `D`: 31 bytes
print-type-size         padding: 7 bytes
print-type-size         field `.0`: 24 bytes, alignment: 8 bytes

It shows:

  • the size and alignment of the type;
  • for enums, the size of the discriminant;
  • for enums, the size of each variant;
  • the size, alignment, and ordering of all fields (note that the compiler has reordered variant C’s fields to minimize the size of E);
  • the size and location of all padding.

Every detail you could possibly want is there. Brilliant!

For rustc developers, there’s an extra-special trick for getting the size of a type within rustc itself. Put code like this into a file a.rs:

#![feature(rustc_private)]

extern crate syntax;

use syntax::ast::Expr;

fn main() {
    let _x = std::mem::size_of::<Expr>();
}

and then compile it like this:

RUSTC_BOOTSTRAP=1 rustc -Zprint-type-sizes a.rs

I won’t pretend to understand how it works, but the use of rustc_private and RUSTC_BOOTSTRAP somehow lets you see inside rustc while using it, rather than while compiling it. I have used this trick for PRs such as this one.


Cameron Kaiser: Happy 8th birthday to us

Mozilla planet - vr, 09/11/2018 - 01:15
TenFourFox is eight years old! And nearly as mature!


David Lawrence: Happy BMO Push Day!

Mozilla planet - vr, 09/11/2018 - 00:10

https://github.com/mozilla-bteam/bmo/tree/release-20181108.1

The following changes have been pushed to bugzilla.mozilla.org:

  • [1436619] http:// in URL field
  • [1505762] count_only=1 results in error with REST API

Discuss these changes on mozilla.tools.bmo.


Firefox UX: How do people decide whether or not to get a browser extension?

Mozilla planet - do, 08/11/2018 - 21:11

The Firefox Add-ons Team works to make sure people have all of the information they need to decide which browser extensions are right for them. Past research conducted by Bill Selman and the Add-ons Team taught us a lot about how people discover extensions, but there was more to learn. Our primary research question was: “How do people decide whether or not to get a specific browser extension?”

We recently conducted two complementary research studies to help answer that big question:

  1. An addons.mozilla.org (AMO) survey, with just under 7,500 respondents
  2. An in-person think-aloud study with nine recruited participants, conducted in Vancouver, BC

The survey ran from July 19, 2018 to July 26, 2018 on addons.mozilla.org (AMO). The survey prompt was displayed when visitors went to the site and was localized into ten languages. The survey asked questions about why people were visiting the site, if they were looking to get a specific extension (and/or theme), and if so what information they used to decide to get it.

Screenshot of the survey message bar on addons.mozilla.org.

The think-aloud study took place at our Mozilla office in Vancouver, BC from July 30, 2018 to August 1, 2018. The study consisted of 45-minute individual sessions with nine participants, in which they answered questions about the browsers they use, and completed tasks on a Windows laptop related to acquiring a theme and an extension. To get a variety of perspectives, participants included three Firefox users and six Chrome users. Five of them were extension users, and four were not.

Mozilla office conference room in Vancouver, where the think-aloud study took place.

What we learned about decision-making

People use social proof on the extension’s product page

In both the survey and the think-aloud study, ratings, reviews, and number of users proved important to the decision to get an extension. Think-aloud participants used these metrics as a signal that an extension was good and safe. All except one think-aloud participant used this “social proof” before installing an extension. The importance of social proof was backed up by the survey responses, where ratings, number of users, and reviews were among the top pieces of information used.

Screenshot of Facebook Container’s page on addons.mozilla.org with the “social proof” outlined: number of users, number of reviews, and rating.

AMO survey responses to “Think about the extension(s) you were considering getting. What information did you use to decide whether or not to get the extension?”

People use social proof outside of AMO

Think-aloud participants mentioned using outside sources to help them decide whether or not to get an extension. Outside sources included forums, advice from “high authority websites,” and recommendations from friends. The same result is seen among the survey respondents, where 40.6% of respondents used an article from the web and 16.2% relied on a recommendation from a friend or colleague. This is consistent with our previous user research, where participants used outside sources to build trust in an extension.

Screenshot of an example outside source: TechCrunch article about Facebook Container extension.

AMO survey responses to “What other information did you use to decide whether or not to get an extension?”

People use the description and extension name

Screenshot of Facebook Container’s page on addons.mozilla.org with extension name, descriptions, and screenshot highlighted.

Almost half of the survey respondents use the description to make a decision about the extension. While the description was the top piece of content used, we also see that over one-third of survey respondents evaluate the screenshots and the extension summary (the description text beneath the extension name), which shows their importance as well.

Think-aloud participants also used the extension’s description (both the summary and the longer description) to help them decide whether or not to get it.

While we did not ask about the extension name in the survey, it came up during our think-aloud studies. The name of the extension was cited as important to think-aloud participants. However, they mentioned how some names were vague and therefore didn’t assist them in their decision to get an extension.

Themes are all about the picture

In addition to extensions, AMO offers themes for Firefox. From the survey responses, the most important part of a theme’s product page is the preview image; based on this result, the imagery far surpasses any social proof or description in importance.

Screenshot of a theme on addons.mozilla.org with the preview image highlighted.

AMO survey responses to “Think about the theme(s) you were considering getting. What information did you use to decide whether or not to get the theme?”

All in all, we see that while social proof is essential, great content on the extension’s product page and in external sources (such as forums and articles) are also key to people’s decisions about whether or not to get an extension. When we’re designing anything that requires people to make an adoption decision, we need to remember the importance of social proof and great content, within and outside of our products.

In alphabetical order by first name, thanks to Amy Tsay, Ben Miroglio, Caitlin Neiman, Chris Grebs, Emanuela Damiani, Gemma Petrie, Jorge Villalobos, Kev Needham, Kumar McMillan, Meridel Walkington, Mike Conca, Peiying Mo, Philip Walmsley, Raphael Raue, Richard Bloor, Rob Rayborn, Sharon Bautista, Stuart Colville, and Tyler Downer, for their help with the user research studies and/or reviewing this blog post.

How do people decide whether or not to get a browser extension? was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mozilla Addons Blog: Extensions in Firefox 64

Mozilla planet - do, 08/11/2018 - 21:10

Following the explosion of extension features in Firefox 63, Firefox 64 moved into Beta with a quieter set of capabilities spread across many different areas.

Extension Management

The most visible change to extensions comes on the user-facing side of Firefox where the add-ons management page (about:addons) received an upgrade.

Changes on this page include:

  • Each extension is shown as a card that can be clicked.
  • Each card shows the description for the extension along with buttons for Options, Disable and Remove.
  • The search area at the top is cleaned up.
  • The page links to the Firefox Preferences page (about:preferences) and that page links back to about:addons, making navigation between the two very easy.  These links appear in the bottom left corner of each page.

These changes are part of an ongoing redesign of about:addons that will make managing extensions and themes within Firefox simpler and more intuitive. You can expect to see additional changes in 2019.

As part of our continuing effort to make sure users are aware of when an extension is controlling some aspect of Firefox, the Notification Permissions window now shows when an extension is controlling the browser’s ability to accept or reject web notification requests.

When an extension is installed, the notification popup is now persistent and anchored to the main (hamburger) menu. This ensures that the notification is always acknowledged by the user and can’t be accidentally dismissed by switching tabs.

Finally, extensions can now be removed by right-clicking on an extension’s browser action icon and selecting “Remove Extension” from the resulting context menu.

Even More Context Menu Improvements

Firefox 63 saw a large number of improvements for extension context menus and, as promised, there are even more improvements in Firefox 64.

The biggest change is a new API that can be called from the contextmenu DOM event to set a custom context menu in extension pages.  This API, browser.menus.overrideContext(), allows extensions to hide all default Firefox menu items in favor of providing a custom context menu UI.  This context menu can consist of multiple top-level menu items from the extension, and may optionally include tab or bookmark context menu items from other extensions.

To use the new API, you must declare the menus permission as well as the brand-new menus.overrideContext permission. Additionally, to include context menu items from other extensions in the tab or bookmarks contexts, you must also declare the tabs or bookmarks permission, respectively.

The API is still being documented on MDN at the time of this writing; it takes a contextOptions object as a parameter, which includes the following values:

  • showDefaults: boolean that indicates whether to include default Firefox menu items in the context menu (defaults to false)
  • context: optional parameter that indicates the ContextType to override to allow menu items from other extensions in this context menu. Currently, only bookmark and tab are supported. showDefaults cannot be used with this option.
  • bookmarkId: required when context is bookmark. Requires bookmarks permission.
  • tabId: required when context is tab. Requires tabs permission.

While waiting for the MDN documentation to go live, I would highly encourage you to check out the terrific blog post by Yuki “Piro” Hiroshi that covers usage of the new API in great detail.
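In the meantime, here is a minimal sketch of the call shape, adapted from the in-progress documentation; the .tab-entry selector and data-tab-id attribute are assumptions of this example. The call must happen synchronously inside a contextmenu handler in an extension page:

document.addEventListener("contextmenu", (event) => {
  const item = event.target.closest(".tab-entry"); // hypothetical element
  if (item) {
    // Replace the default menu with this extension's menu items, plus
    // the tab context menu items other extensions registered for this tab.
    browser.menus.overrideContext({
      context: "tab",
      tabId: parseInt(item.dataset.tabId, 10),
    });
  }
}, { capture: true });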

Other improvements to extension context menus include the following (a short sketch follows the list):

  • browser.menus.update() now allows extensions to update an icon without having to delete and recreate the menu item.
  • menus.create() and menus.update() now support a viewTypes property.  This is a list of view types that specifies where the menu item will be shown and can include tab, popup (pageAction/browserAction) or sidebar. It defaults to any view, including those without a viewType.
  • The menus.onShown and menus.onClicked events now include the viewType described above as part of their info object so extensions can determine the type of view where the menu was shown or clicked.
  • The menus.onClicked event also added a button property to indicate which mouse button initiated the click (left, middle, right).
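Here is the promised sketch of how these options might fit together; the menu id and title are hypothetical:

// Create a menu item that only appears in this extension's sidebar
// and popup views.
browser.menus.create({
  id: "refresh-list", // hypothetical id
  title: "Refresh list",
  contexts: ["page"],
  viewTypes: ["sidebar", "popup"],
});

browser.menus.onClicked.addListener((info) => {
  if (info.menuItemId === "refresh-list") {
    // info.viewType reports which kind of view the click came from, and
    // info.button reports the mouse button (0 = left, 1 = middle, 2 = right).
    console.log(`Clicked from ${info.viewType} with button ${info.button}`);
  }
});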
Minor Improvements in Many Areas

In addition to the extension management in Firefox and the context menu work, many smaller improvements were made throughout the WebExtension API.

Page Actions
  • A new, optional manifest property for page actions called ‘pinned’ has been added.  It specifies whether or not the page action should appear in the location bar by default when the user installs the extension (default is true).
Tabs

Content Scripts
  • Content scripts can now read from a <canvas> that they have modified.
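As a hedged sketch of what that permits (grabbing the page’s first canvas is an assumption of this example):

// Content script: modify a canvas on the page...
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
ctx.fillStyle = "rebeccapurple";
ctx.fillRect(0, 0, 10, 10);

// ...then read the modified pixels back; this read is what Firefox 64
// newly permits for content scripts that made the modification.
const pixels = ctx.getImageData(0, 0, 10, 10);
console.log(pixels.data.slice(0, 4)); // RGBA of the first pixel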
Themes

Private Browsing

Keyboard Shortcuts

Dev Tools
  • Extensions can now create devtools panel sidebars and use the new setPage() API to embed an extension page inside the devtools inspector sidebar.
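For instance, a devtools page script might look something like this sketch; “My extension pane” and sidebar.html are hypothetical names:

// Runs in the extension's devtools page (declared via devtools_page
// in manifest.json).
browser.devtools.panels.elements
  .createSidebarPane("My extension pane")
  .then((pane) => {
    // Embed a bundled extension page inside the inspector sidebar.
    pane.setPage("sidebar.html");
  });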
Misc / Bug Fixes

Thank You

A total of 73 features and improvements landed as part of Firefox 64. Volunteer contributors were a huge part of this release and a tremendous thank you goes out to our community, including: Oriol Brufau, Tomislav Jovanovic, Shivam Singhal, Tim Nguyen, Arshad Kazmi, Divyansh Sharma, Tom Schuster, Tim B, Tushar Arora, Prathiksha Guruprasad. It is the combined efforts of Mozilla and our amazing community that make Firefox a truly unique product. If you are interested in contributing to the WebExtensions ecosystem, please take a look at our wiki.

The post Extensions in Firefox 64 appeared first on Mozilla Add-ons Blog.


Christian Legnitto: What is developer efficiency and velocity?

Mozilla planet - do, 08/11/2018 - 20:52

As I previously mentioned, I am currently in the information-gathering phase for improvements to desktop Firefox developer efficiency and velocity. While many view developer efficiency and velocity as the same thing (and indeed they are often correlated), it is useful to discuss how they are different.

I like to think of developer velocity as the rate at which a unit of work is completed. Developer efficiency is the amount of effort required to complete a unit of work.

If one were to think of the total development output as revenue, improvements to velocity would improve the top-line and improvements to efficiency would improve the bottom-line.

I like to visualize the differences by imagining a lone developer with the task of writing a function to compute the Fibonacci series. If one were to magically increase the developer’s typing speed, that would be an increase to their velocity. If one created a library with an existing Fibonacci implementation and the developer leveraged it instead, it would be an increase to their efficiency.
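A hedged JavaScript sketch of the same contrast (the library name in the comment is hypothetical):

// "Velocity" path: the developer types out an implementation; a faster
// typist finishes this unit of work sooner.
function fib(n) {
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];
  }
  return a;
}

// "Efficiency" path: the same unit of work takes less effort because an
// existing implementation is reused instead.
// const { fib } = require("some-fibonacci-library"); // hypothetical

console.log(fib(10)); // 55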

The trick I am here to help Mozilla with is identifying large improvements in both Firefox developer velocity and efficiency without requiring a lot of additional resources. I am focusing on some lower-hanging fruit: things the organization has missed, hasn’t deployed for everyone, or that need a trusted outsider to help push through some red tape.


Mike Hoye: A Summer Of Code Question

Mozilla planet - do, 08/11/2018 - 19:43

This is a lightly edited response to a question we got on IRC about how to best apply to participate in Google’s “Summer Of Code” program. This isn’t company policy, but I’ve been the one turning the crank on our GSOC application process for the last while, so maybe it counts as helpful guidance.

We’re going to apply as an organization to participate in GSOC 2019, but that process hasn’t started yet. This year it kicked off in the first week of January, and I expect about the same in 2019.

You’re welcome to apply to multiple positions, but I strongly recommend that each application be a focused effort; if you send the same generic application to all of them it’s likely they’ll all be disregarded. I recognize that this seems unfair, but we get a tidal wave of redundant applications for any position we open, so we have to filter them aggressively.

Successful GSOC applicants generally come in two varieties – people who put forward a strong application to work on projects that we’ve proposed, and people that have put together their own GSOC proposal in collaboration with one or more of our engineers.

The latter group is comparatively rare; its members are generally people we’ve worked through some bugs and had some useful conversations with, who’ve done the work of identifying the “good GSOC project” bugs and worked out with the responsible engineers whether they’d be open to collaboration, what a good proposal would look like, and so on.

None of those bugs or conversations are guarantees of anything, perhaps obviously – some engineers just don’t have time to mentor a GSOC student, some of the things you’re interested in doing won’t make good GSOC projects, and so forth.

One of the things I hope to do this year is get better at clarifying what a good GSOC project proposal looks like, but broadly speaking, good proposals are:

  • Nice-to-have features, but non-blocking and non-critical-path, so a struggling GSOC student can’t put a larger project at risk.
  • Few (good) or no (better) dependencies on external factors, whether they’re code, social context, or other people’s work. A good GSOC project is well-contained.
  • Clearly defined yes-or-no deliverables, both overall and as milestones throughout the summer. We need GSOC participants to be able to show progress consistently.
  • Finally, broad alignment with Mozilla’s mission and goals, even if it’s in a supporting role. We’d like to be able to draw a straight line between the project you’re proposing and Mozilla being incrementally more effective or more successful. It doesn’t have to move any particular needle a lot, but it has to move the needle a bit, and it has to be a needle we care about moving.

It’s likely that your initial reaction to this is “that is a lot, how do I find all this out, what do I do here, what the hell”, and that’s a reasonable reaction.

The reason that this group of applicants is comparatively rare is that people who choose to go that path have mostly been hanging around the project for a bit, soaking up the culture, priorities and so on, and have figured out how to navigate from “this is my thing that I’m interested in and want to do” to “this is my explanation of how my thing fits into Mozilla, both from product engineering and an organizational mission perspective, and this is who I should be making that pitch to”.

This is not to say that it’s impossible, just that there’s no formula for it. Curiosity and patience are your most important tools, if you’d like to go down that road, but if you do we’d definitely like to hear from you. There’s no better time to get started than now.

