
Mozilla Nederland - The Dutch Mozilla community

Daniel Stenberg: 10,000 stars

Mozilla planet - di, 25/09/2018 - 23:15

On github, you can 'star' a project. It's a fairly meaningless way to mark your appreciation of a project hosted on that site and of course, the number doesn't really mean anything and it certainly doesn't reflect how popular or widely used or unused that particular software project is. But here I am, highlighting the fact that today I snapped the screenshot shown above when the curl project just reached this milestone: 10,000 stars.

In the great scheme of things, the most popular and starred projects on github of course have magnitudes more stars. Right now, curl ranks as roughly the 885th most starred project on github. According to github themselves, they host an amazing 25 million public repositories which thus puts curl in the top 0.004% star-wise.

There was appropriate celebration going on in the Stenberg casa tonight and here's a photo to prove it:

I took a photo when we celebrated 1,000 stars. It doesn't feel so long ago but was a little over 1500 days ago.

August 12 2014

Onwards and upwards!


Daniel Pocock: Crossing the Great St Bernard Pass

Mozilla planet - di, 25/09/2018 - 16:26

It's a great day for the scenic route to Italy, home of Beethoven's Swiss cousins.

What goes up, must come down...

Descent into the Aosta valley


The Mozilla Blog: Introducing Firefox Monitor, Helping People Take Control After a Data Breach

Mozilla planet - di, 25/09/2018 - 15:01

Data breaches, when information like your username and password is stolen from a website you use, are an unfortunate part of life on the internet today. It can be hard to keep track of when your information has been stolen, so we’re going to help by launching Firefox Monitor, a free service that notifies people when they’ve been part of a data breach. After testing this summer, the results and positive attention gave us the confidence we needed to know this was a feature we wanted to give to all of our users.

To give you a complete picture of what Firefox Monitor has to offer, here’s Cindy Hsiang, Product Manager for Firefox Monitor, to tell you more:

Here’s how Firefox Monitor helps you learn if you’ve been part of a data breach

Step 1 – Visit monitor.firefox.com to see if your email has been part of a data breach

Visit monitor.firefox.com and type in your email address. Through our partnership with Troy Hunt’s “Have I Been Pwned,” your email address will be scanned against a database that serves as a library of data breaches. We’ll let you know if your email address and/or personal info was involved in a publicly known past data breach. Once you know where your email address was compromised, you should change your password there and anywhere else you’ve used that password.

Visit monitor.firefox.com and type in your email address

Step 2 – Learn about future data breaches

Sign up for Firefox Monitor using your email address and we will notify you about data breaches when we learn about them. Your email address will be scanned against those data breaches, and we’ll let you know through a private email if you were involved.

If you’re wondering about how we’re handling your email address, rest assured we will protect your email address when it’s scanned. We talked about the technical details on how that works when we first launched the experiment. This is all in keeping with our principles at Mozilla, where we’re always looking for features that will protect people’s privacy and give them greater control when they’re online.

Firefox Monitor is just one of many things we’re rolling out this Fall to help people stay safe while online. Recently, we announced our roadmap to anti-tracking and in the next couple of months, we’ll release more features to arm and protect people’s rights online. For more on how to use Firefox Monitor, check out our Firefox Frontier blog. If you want to know more about the Firefox Monitor journey and how your feedback set this service in motion visit Matt Grimes’ Medium blog post.

Check out Firefox Monitor to see if you’ve been part of a data breach, and sign up to know if you’ve been affected the next time a data breach happens.

The post Introducing Firefox Monitor, Helping People Take Control After a Data Breach appeared first on The Mozilla Blog.


The Firefox Frontier: Firefox Monitor, take control of your data

Mozilla planet - di, 25/09/2018 - 15:00

That sinking feeling. You’re reading the news and you learn about a data breach. Hackers have stolen names, addresses, passwords, survey responses from a service that you use. It seems … Read more

The post Firefox Monitor, take control of your data appeared first on The Firefox Frontier.


The Mozilla Blog: $1.6 Million to Connect Unconnected Americans: Our NSF-WINS Grand Prize Winners

Mozilla planet - di, 25/09/2018 - 12:00
After months of prototyping and judging, Mozilla and the National Science Foundation are fueling the best and brightest ideas for bringing more Americans online

 

Today, Mozilla and the National Science Foundation (NSF) are announcing the grand prize winners in our Wireless Innovation for a Networked Society (NSF-WINS) Challenges — an audacious competition to connect millions of unconnected Americans.

The grand prize winners are as novel as they are promising: An 80-foot tower in rural Appalachia that beams broadband connectivity to residents. And, an autonomous network that fits in two suitcases — and can be deployed after earthquakes and hurricanes.

Says Mark Surman, Mozilla’s Executive Director: “We launched NSF-WINS in early 2017 with the goal of bringing internet connectivity to rural areas, disaster-struck regions, and other offline or under-connected places. Currently, some 34 million Americans lack high-quality internet access. That means 34 million Americans are at a severe economic, educational, and social disadvantage.”

“Now — after months of prototyping and judging — Mozilla and NSF are awarding $1.6 million to the most promising projects in the competition’s two categories. It’s part of Mozilla’s mission to keep the internet open and accessible, and to empower the people on the front lines of that work.”

Says Jim Kurose, head of the Directorate for Computer and Information Science and Engineering (CISE) at the NSF: “By investing in affordable, scalable solutions like these, we can unlock opportunity for millions of Americans.”

The NSF-WINS ‘Off the Grid Internet Challenge’ $400,000 grand prize winner is…

 

HERMES

HERMES (High-frequency Emergency and Rural Multimedia Exchange System) by Rhizomatica in Philadelphia, PA.

When disasters strike, communications networks are among the first pieces of critical infrastructure to overload or fail.

HERMES bonds together an assortment of unexpected protocols — like GSM and short-wave radio — to fix this. HERMES enables local calling, SMS, and basic OTT messaging, all via equipment that can fit inside two suitcases.

“In an emergency, you want to be able to tell people you’re okay,” the Rhizomatica team says. “HERMES allows you to tell anyone, anywhere with a phone number that you’re okay. And that person can respond to you over text or with a voice message. It also allows someone from a central location to pass information to a disaster site, or to broadcast messages. We can now send a text message 700 miles through HERMES.”

Learn more about HERMES»

The NSF-WINS ‘Smart Community Networks Challenge’ $400,000 grand prize winner is…

 

Southern Connected Communities Network

Southern Connected Communities Network (SCCN) by the Highlander Research and Education Center in New Market, TN.

Many communities across the U.S. lack reliable internet access. Sometimes commercial providers don’t supply affordable rates; sometimes a particular community is too isolated; sometimes the speed and quality of access is too slow.

SCCN leverages infrastructure and community to fix this and bring broadband to rural Appalachia. SCCN uses an 80-foot tower that draws wireless backbone from Knoxville, TN via the public 11 GHz spectrum. The tower then redistributes this broadband connectivity to local communities using line-of-sight technology. This tower is owned and operated by the local residents.

“When you live in the rural South, your kids’ education, your next job, your healthcare, and your right to a political voice all are limited by slow, expensive, unreliable, and corporate-controlled internet connectivity — and that’s if it exists at all,” says Allyn Maxfield-Steele, Co-Executive Director of the Highlander Center. “So we’re claiming internet like the human right it has become. We’re building a local digital economy governed by us and for us.”

Learn more about Southern Connected Communities Network»

~

In addition to these two grand prize winners, Mozilla and the NSF are awarding second-, third-, and fourth-place prizes in each category. The winners are:

Off-the-Grid Internet Challenge:
  • Second place ($250,000) Project Lantern by Paper & Equator in New York, NY (and in collaboration with the Shared Reality Lab at McGill University). Project Lantern is a Wi-Fi hotspot device that lets you send maps and messages across town when the internet is down.

 

  • Third place ($100,000) EmergenCell (previously SELN) by Spencer Sevilla in Seattle, WA. EmergenCell is an off-the-grid and self-contained LTE network in a box for emergency response.

 

  • Fourth place ($50,000) Wind by the Guardian Project in Valhalla, NY. Wind is a network designed for opportunistic communication and sharing of local knowledge that provides off-grid services for everyday people, using the mobile devices they already have. The project also features decentralized software and a content distribution system.
Smart Community Networks Challenge:
  • Second place ($250,000) The Equitable Internet Initiative by Allied Media Projects in Detroit, MI. The Equitable Internet Initiative (EII) is an effort to redistribute power, resources, and connectivity in Detroit through community Internet technologies. EII is working toward a future where neighbors are authentically connected, with relationships of mutual aid that sustain the social, economic, and environmental health of neighborhoods.

 

  • Third place ($100,000) SMARTI (previously Solar Mesh) by the San Antonio Housing Authority in San Antonio, TX. In efforts to bridge the digital divide in San Antonio, the 19th worst connected city in the U.S., the San Antonio Housing Authority has created a prototype that marries solar energy with Wi-Fi mesh technologies.

 

  • Fourth place ($50,000) ESU 5 Homework Hotspot by Educational Service Unit 5 in Beatrice, NE. The ESU 5 Homework Hotspots are TV white space hotspots that help bridge the connectivity gap for students in Rural Nebraska.

Note: In February 2018, Mozilla and the NSF announced the first batch of winners: between $10,000 and $60,000 in grants for 20 promising design concepts. See those winners here.

The post $1.6 Million to Connect Unconnected Americans: Our NSF-WINS Grand Prize Winners appeared first on The Mozilla Blog.


Robert O'Callahan: More Realistic Goals For C++ Lifetimes 1.0

Mozilla planet - di, 25/09/2018 - 09:27

Over two years ago I wrote about the C++ Lifetimes proposal and some of my concerns about it. Just recently, version 1.0 was released with a blog post by Herb Sutter.

Comparing the two versions shows many important changes. The new version is much clearer and more worked-out, but there are also significant material changes. In particular the goal has changed dramatically. Consider the "Goal" section of version 0.9.1.2: (emphasis original)

Goal: Eliminate leaks and dangling for */&/iterators/views/ranges
We want freedom from leaks and dangling – not only for raw pointers and references, but all generalized Pointers such as iterators—while staying true to C++ and being adoptable:
1. We cannot tolerate leaks (failure to free) or dangling (use-after-free). For example, a safe std:: library must prevent dangling uses such as auto& bad = vec[0]; vec.push_back(); bad = 42;.

Version 1.0 doesn't have a "Goal" section, but its introduction says:

This paper defines the Lifetime profile of the C++ Core Guidelines. It shows how to efficiently diagnose many common cases of dangling (use-after-free) in C++ code, using only local analysis to report them as deterministic readable errors at compile time.

The new goal is much more modest, I think much more reasonable, and highly desirable! (Partly because "modern C++" has introduced some extremely dangerous new idioms.)

The limited scope of this proposal becomes concrete when you consider its definition of "Owner". An Owner can own at most one type of data and it has to behave much like a container or smart pointer. For example, consider a data structure owning two types of data:

class X {
public:
  X() : a(new int(0)), b(new char(0)) {}
  int* get_a() { return &*a; }
  char* get_b() { return &*b; }
private:
  unique_ptr<int> a;
  unique_ptr<char> b;
};

This structure cannot be an Owner. It is also not an Aggregate (a struct/class with public fields whose fields are treated as separate variables for the purposes of analysis). It has to be a Value. The analysis has no way to refer to data owned by Values; as far as I can tell, there is no way to specify or infer accurate lifetimes for the return values of get_a and get_b, and apparently in this case the analysis defaults to conservative assumptions that do not warn. (The full example linked above has a trivial dangling pointer with no warnings.) I think this is the right approach, given the goal is to catch some common errors involving misuse of pointers, references and standard library features. However, people need to understand that code free of C++ Lifetime warnings can still easily cause memory corruption. (This vindicates the title of my previous blog post to some extent; insofar as C++ Lifetimes was intended to create a safe subset of C++, that promise has not eventuated.)

The new version has much more emphasis on annotation. The old version barely mentioned the existence of a [[lifetime]] annotation; the new version describes it and shows more examples. It's now clear you can use [[lifetime]] to group function parameters into lifetime-equivalence classes, and you can also annotate return values and output parameters.

The new version comes with a partial Clang implementation, available on godbolt.org. Unfortunately that implementation seems to be very partial. For example the following buggy program is accepted without warnings:

int& f(int& a) {
  return a;
}

int& hello() {
  int x = 0;
  return f(x);
}

It's pretty clear from the spec that this should report a warning, and the corresponding program using pointers does produce a warning. OTOH there are some trivial false positives I don't understand:

int* hello(int*& a) {
  return a;
}

:2:5: warning: returning a dangling Pointer [-Wlifetime]
  return a;
  ^
:1:12: note: it was never initialized here
int* hello(int*& a) {
           ^

The state of this implementation makes it unreliable as a guide to how this proposal will work in practice, IMHO.

Daniel Stenberg: The Polhem prize, one year later

Mozilla planet - di, 25/09/2018 - 07:20

On September 25th 2017, I received the email that first explained to me that I had been awarded the Polhem Prize.

Du har genom ett omfattande arbete vaskats fram som en värdig mottagare av årets Polhemspris. Det har skett genom en nomineringskommitté och slutligen ett råd med bred sammansättning. Priset delas ut av Kungen den 19 oktober på Tekniska muséet.

My attempt at an English translation:

You have been selected as a worthy recipient of this year's Polhem prize through extensive work. It has been through a nomination committee and finally a council of broad composition. The prize is awarded by the King on October 19th at the Technical Museum.

A gold medal

At the award ceremony in October 2017 I received the gold medal at the most fancy ceremony I could ever wish for, where I was given the most prestigious award I couldn't have imagined myself even being qualified for, handed over by none other than the Swedish King.

An entire evening with me in focus, where I was the final grand finale act and where my life's work was the primary reason for all those people being dressed up in fancy clothes!

Things have settled down since. The gold medal has started to get a little dust on it where it lies here next to me on my work desk. I still glance at it every once in a while. It still feels surreal. It's a fricking medal in pure gold with my name on it!

I almost forget the money part of the prize. I got a lot of money as well, but in retrospect it is really the honors, that evening and the gold medal that stick best in my memory. Money is just... well, money.

So did the award and prize make my life any different? Yes sure, a little, and I'll tell you how.

What's all that time spent on?

My closest circle of friends and family got a better understanding of what I've actually been doing all these long hours, all these years, and more than one phrase in the style of "oh, so you actually did something useful?!" has been uttered.

Certainly I've tried to explain to them before, but nothing works as well as a gold medal from an award committee to say that what I do is actually appreciated "out there" and it has made a serious impact on the world.

I think I'm considered a little less weird now when I keep spending night hours in front of my computer when the house is otherwise dark and silent. Well, maybe still weird, but at least my weirdness has proven to result in something useful for mankind and that's more than many other sorts of weird do... We all have hobbies.

What is curl?

Family and friends have gotten a rudimentary level of understanding of what curl is and what it does. I'm not suggesting they fully grasp it or know what an "internet protocol" is now, but at least a lot of people understand that it works with "internet transfers". It's not like people were totally uninterested before, but when I was given this prize - by a jury of engineers no less - that says this is a significant invention and accomplishment with a value that "can not be overestimated", it made them more interested. The little video that was produced helped:

Some mysteries remain

People in general still have a hard time grasping the reach of the project, how much time I've spent so far on it, how I can find motivation to keep up the work and, not least, how this is all given away for free for everyone.

The simple fact that these are all questions that I've been asked I think is a small reward in itself. I think the fact that I was awarded this prize for my work on Open Source is awesome and I feel honored to be a person who introduces this way of thinking to some of the people who previously would think that you have to sell proprietary things or earn a lot of money for your products in order to impact and change society as a whole.

Not widely known

The Polhem prize is not widely known in Sweden among the general populace and thus neither is the fact that I won it. Only a very special subset of people know about this. Of course it is even less known outside of Sweden, and the information about the prize that is available in English is very sparse.

Next year's winner

The other day I received my invitation to participate in this year's award ceremony on November 14. Of course I'll happily accept that and I will be there and celebrate the winner this year!

The curl project

How did the prize affect the project itself, the project that I was awarded for having cared for this long?

It hasn't affected it much at all (as far as I can tell). The project has moved along like before and we've worked on fixing bugs and adding features and cool things over time after my award just as we did before it. That's how it has felt. Business as usual.

If anything, I think I might have gotten some renewed energy and interest in the project and the commit author statistics actually show that my commit frequency has gone up since around the time I got the award. Our gitstats show that I've done more than half of the commits every single month the last year, most of this time even more than 70% of the commits.

I may have served twenty years here, but I'm not done yet!


Christian Legnitto: Working on Firefox desktop developer efficiency

Mozilla planet - di, 25/09/2018 - 02:36

I’m excited to announce that for the next couple of months I will be working with Mozilla on Firefox desktop developer efficiency!

Why is this important

Developer velocity = product velocity = company velocity = mission velocity

Mozilla is an engineering company. Its interface to—and impact on—the world is through its primary product, the Firefox web browser. Firefox is of course created, maintained, and improved by Mozilla’s developers (both employees and community members). Thus, when one increases Firefox developer efficiency and velocity, the velocity of the Firefox product increases. Because Firefox is Mozilla’s primary product, an increase in Firefox product velocity transitively increases the velocity of the company and the mission overall.

What I will be doing

For the first deliverable I’m going to come up with a comprehensive two year plan for increasing Firefox developer efficiency and velocity by best leveraging existing resources for the highest impact. Think of things like “this tool or workflow should be improved” rather than “we need to hire a team of 40 people to do X”. This work will largely be building on the foundation and vision of Mozilla’s “Engineering Workflow” team in addition to my past professional experience.

The work I am doing will be broken into three phases: definition, analysis, and prototype. It is currently in the definition phase, so I will mostly be gathering information about what Mozilla currently does, what it has tried in the past, and what the rest of the industry is doing.

What I will NOT be doing

Mozilla as a project, product, and company is massive. To prevent scope creep and have targeted impact, the following–while important–are out of scope for the current project:

  • Developer efficiency and velocity related to Mozilla products that are not Firefox.
  • Technical solutions deployed in production.
  • The development velocity and efficiency of outside contributors.
  • Firefox product code abstractions or improvements.
Who am I?

I am passionate about developer tooling, efficiency, engineering team organization, and company culture. I feel like I have been training my whole life to work on this at Mozilla:

  • I have been a Mozillian since the Firefox 1.0 days.
  • I managed Firefox releases and the Mozilla Release management team for a couple years.
  • I created Mozilla Pulse (the infrastructure tool, not the experimental add-on).
  • I managed release and development tools teams at Facebook.
  • I have contributed to Firefox as a community member.
  • I have advised over 40 companies on developer tools, release processes, structuring engineering orgs, and developer efficiency.
  • I enjoy writing code in Rust.
Feedback wanted

I would love to hear any thoughts or opinions on the current state of developing for desktop Firefox, any pain-points or solutions you have, or companies/open source projects that have particularly efficient development processes and tooling.

I’m not sure I have my Mozilla email set up correctly, so until then feel free to cc me on bugs in Bugzilla, Slack or IRC (@LegNeato), ping me on Twitter (@LegNeato), or email my personal email.


The Rust Programming Language Blog: Announcing Rust 1.29.1

Mozilla planet - di, 25/09/2018 - 02:00

The Rust team is happy to announce a new version of Rust, 1.29.1. Rust is a systems programming language focused on safety, speed, and concurrency.

If you have a previous version of Rust installed via rustup, getting Rust 1.29.1 is as easy as:

$ rustup update stable

If you don’t have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.29.1 on GitHub.

What’s in 1.29.1 stable

A security vulnerability was found in the standard library where if a large number was passed to str::repeat it could cause a buffer overflow after an integer overflow. If you do not call the str::repeat function you are not affected. This has been addressed by unconditionally panicking in str::repeat on integer overflow. More details about this can be found in the security announcement.
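To make the shape of the bug concrete, here is a small illustrative sketch (my own, not the standard library's actual internals): the problem was essentially that the capacity computation for the repeated string could wrap around before the buffer was filled, and 1.29.1 turns that wrap into a panic.

fn main() {
    // Ordinary uses of str::repeat are unaffected.
    assert_eq!("ab".repeat(3), "ababab");

    // The vulnerable computation was roughly "string length times the
    // repeat count". If that multiplication wraps around, a too-small
    // buffer is allocated and then overflowed.
    let len: usize = 2;
    let huge: usize = usize::MAX / 2 + 1;
    match len.checked_mul(huge) {
        Some(capacity) => println!("would need {} bytes", capacity),
        // Rust 1.29.1 panics on this overflow instead of wrapping.
        None => println!("capacity overflow detected"),
    }
}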


Cameron Kaiser: R.I.P., Charles W. Moore, a fine man who liked fine Macs

Mozilla planet - ma, 24/09/2018 - 20:44
A farewell and au revoir to a great gentleman in making the most of your old Mac, Charles W. Moore, who passed away at his home in rural Canada on September 16 after a long illness. Mr Moore was an early fan of TenFourFox, even back in the old bad Firefox 4 beta days, and he really made his famous Pismo PowerBook G3 systems work hard for it. Charles definitely was of the same mind I think a lot of our readers here are: "Even after going on a decade and a half, I still find them [his Pismos] a pleasure to use within the context of what they’re still good at." I'm sure most of us will agree the same is true for any classic computer in general and particularly Power Macs as a whole given how underwhelming Apple's current Mac offerings are. While later on he upgraded to a 17" Big Al, and although I admire the Pismo my favourite Mac laptop to this day remains the wonderfully customizable PowerBook 1400 (with a G3/466, thank you very much, and still looking for a solar cover!), I can think of few people who bore the standard of the classic Mac as a useful productivity device for as long as he did. Even old tools can still be the right tools when given the right job to do.

Go with God.


Don Marti: Consent management at Mozfest 2018

Mozilla planet - ma, 24/09/2018 - 09:00

Good news. It looks like we're having a consent management mini-conference as part of Mozfest next month. (I'm one of the organizers for the Global Consent Manager session, and plan to attend the others.)

Cookie consent, a privacy vs user experience nightmare

This session aims to create a working group for improving the user experience of cookie consent popups. In Europe, the use of cookies was first regulated by the Privacy and Electronic Communications Directive 2002/58/EC, then revised by a 2009 amendment, and more recently by the GDPR. Cookie popups and the mechanism for providing consent can be tedious. Browsing the same website from different devices results in consent being asked again. Bad usability can lead users to give their consent without the necessary attention. In this session we will discuss the state of things and look at possible solutions. We will target a multidisciplinary audience of internet users, usability experts, browser developers, lawyers, and online advertisement professionals.

Global Consent Manager: improving user privacy and the consent experience for trusted web sites

We will discuss how consent management on the web works today, and the relationship between user privacy and reputable content providers. Web users face a confusing array of data sharing choices, and click fatigue can lead to poor user experience and possible inadvertent selection of options that do not match the user’s privacy norms.

☑ I blindly accept… (T&Cs)

Audience are engaged with an activity where they’re given clauses from a curated list of clauses from real T&Cs and they express whether it should have been mentioned outright or not. We have a discussion about digital privacy and ways to curb exploitation. Visitors try out our browser plug-in that filters out most important clauses from any T&C.

MOZFEST - DESIGN CONSENT FROM THE GROUND UP

This workshop offers a holistic space to create digital tools and environments in which consent underlies all aspects, from the way they are developed, to how data is stored and accessed, and to the way interactions happen between users. Prototyping consent into our tools will make them more fair and unbiased. Using a specifically designed prototyping loop, teams quickly hypothesize, develop, test and assess ideas for consentful data prototypes.


Niko Matsakis: Office Hours #1: Cyclic services

Mozilla planet - ma, 24/09/2018 - 06:00

This is a report on the second “office hours”, in which we discussed how to setup a series of services or actors that communicate with one another. This is a classic kind of problem in Rust: how to deal with cyclic data. Usually, the answer is that the cycle is not necessary (as in this case).

The setup

To start, let’s imagine that we were working in a GC’d language, like JavaScript. We want to have various “services”, each represented by an object. These services may need to communicate with one another, so we also create a directory, which stores pointers to all the services. As each service is created, they add themselves to the directory; when it’s all setup, each service can access all other services. The setup might look something like this:

function setup() {
    var directory = {};
    var service1 = new Service1(directory);
    var service2 = new Service2(directory);
    return directory;
}

function Service1(directory) {
    this.directory = directory;
    directory.service1 = this;
    ...
}

function Service2(directory) {
    this.directory = directory;
    directory.service2 = this;
    ...
}

“Transliterating” the setup to Rust directly

If you try to translate this to Rust, you will run into a big mess. For one thing, Rust really prefers for you to have all the pieces of your data structure ready when you create it, but in this case when we make the directory, the services don’t exist. So we’d have to make the struct use Option, sort of like this:

struct Directory {
    service1: Option<Service1>,
    service2: Option<Service2>,
}

This is annoying though because, once the directory is initialized, these fields will never be None.

And of course there is a deeper problem: who is the “owner” in this cyclic setup? How are we going to manage the memory? With a GC, there is no firm answer to this question: the entire cycle will be collected at the end, but until then each service keeps every other service alive.

You could set up something with Arc (atomic reference counting) in Rust that has a similar flavor. For example, the directory might have an Arc to each service and the services might have weak refs back to the directory. But Arc really works best when the data is immutable, and we want services to have state. We could solve that with atomics and/or locks, but at this point we might want to step back and see if there is a better way. Turns out, there is!
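For concreteness, here is a rough sketch of what that Arc-based variant could look like (my own illustration, not code from the office hours discussion), with the directory holding strong references to the services and each service holding a weak reference back to the directory:

use std::sync::{Arc, Mutex, Weak};

struct Directory {
    // Still Option, because the services don't exist yet when the
    // directory itself is created.
    service1: Mutex<Option<Arc<Service1>>>,
    service2: Mutex<Option<Arc<Service2>>>,
}

struct Service1 {
    directory: Weak<Directory>, // weak ref back to the directory
    state: Mutex<u32>,          // mutable state now needs a lock
}

struct Service2 {
    directory: Weak<Directory>,
    state: Mutex<String>,
}

fn setup() -> Arc<Directory> {
    let directory = Arc::new(Directory {
        service1: Mutex::new(None),
        service2: Mutex::new(None),
    });

    let service1 = Arc::new(Service1 {
        directory: Arc::downgrade(&directory),
        state: Mutex::new(0),
    });
    let service2 = Arc::new(Service2 {
        directory: Arc::downgrade(&directory),
        state: Mutex::new(String::new()),
    });

    *directory.service1.lock().unwrap() = Some(service1);
    *directory.service2.lock().unwrap() = Some(service2);
    directory
}

fn main() {
    let directory = setup();
    // A service reaches the directory (and through it, its peer) by
    // upgrading its weak reference.
    let service1 = directory.service1.lock().unwrap().clone().unwrap();
    let dir_again = service1.directory.upgrade().expect("directory still alive");
    assert!(dir_again.service2.lock().unwrap().is_some());
    *service1.state.lock().unwrap() += 1;
}

It works, but every piece of state ends up behind a lock, which is exactly the awkwardness the next section avoids.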

Translating the setup to Rust without cycles

Our base assumption was that each service in the system needed access to one another, since they will be communicating. But is that really true? These services are actually going to be running on different threads: all they really need to be able to do is to send each other messages. In particular, they don’t need access to the private bits of state that belong to each service.

In other words, we could rework out directory so that – instead of having a handle to each service – it only has a handle to a mailbox for each service. It might look something like this:

#[derive(Clone)]
struct Directory {
    service1: Sender<Message1>,
    service2: Sender<Message2>,
}

/// Whatever kind of message service1 expects.
struct Message1 { .. }

/// Whatever kind of message service2 expects.
struct Message2 { .. }

What is this Sender type? It is part of the channels that ship in Rust’s standard library. The idea of a channel is that when you create it, you get back two “entangled” values: a Sender and a Receiver. You send values on the sender and then you read them from the receiver; moreover, the sender can be cloned many times (the receiver cannot).

The idea here is that, when you start your actor, you create a channel to communicate with it. The actor takes the Receiver and the Sender goes into the directory for other services to use.

Using channels, we can refactor our setup. We begin by making the channels for each actor. Then we create the directory, once we have all the pieces it needs. Finally, we can start the actors themselves:

fn make_directory() {
    use std::sync::mpsc::channel;

    // Create the channels
    let (sender1, receiver1) = channel();
    let (sender2, receiver2) = channel();

    // Create the directory
    let directory = Directory {
        service1: sender1,
        service2: sender2,
    };

    // Start the actors
    start_service1(&directory, receiver1);
    start_service2(&directory, receiver2);
}

Starting a service looks kind of like this:

fn start_service1(directory: &Directory, receiver: Receiver<Message1>) {
    // Get a handle to the directory for ourselves.
    // Note that cloning a sender just produces a second handle
    // to the same receiver.
    let mut directory = directory.clone();

    std::thread::spawn(move || {
        // For each message received on `receiver`...
        for message in receiver {
            // ... process the message. Along the way,
            // we might send a message to another service:
            match directory.service2.send(Message2 { .. }) {
                Ok(()) => { /* message successfully sent */ }
                Err(_) => { /* service2 thread has crashed or otherwise stopped */ }
            }
        }
    });
}

This example also shows off how Rust channels know when their counterparts are valid (they use ref-counting internally to manage this). So, for example, we can iterate over a Receiver to get every incoming message: once all senders are gone, we will stop iterating. Beware, though: in this case, the directory itself holds one of the senders, so we need some sort of explicit message to stop the actor.

Similarly, when you send a message on a Rust channel, it knows if the receiver has gone away. If so, send will return an Err value, so you can recover (e.g., maybe by restarting the service).

Implementing our own (very simple) channels

Maybe it’s interesting to peer “beneath the hood” a bit into channels. It also gives some insight into how to generalize what we just did into a pattern. Let’s implement a very simple channel, one with a fixed length of 1 and without all the error recovery business of counting channels and so forth.

Note: If you’d like to just view the code, click here to view the complete example on the Rust playground.

To start with, we need to create our Sender and Receiver types. We see that each of them holds onto a shared value, which contains the actual state (guarded by a mutex):

use std::sync::{Arc, Condvar, Mutex};

pub struct Sender<T: Send> {
    shared: Arc<SharedState<T>>
}

pub struct Receiver<T: Send> {
    shared: Arc<SharedState<T>>
}

// Hidden shared state, not exposed
// to end-users
struct SharedState<T: Send> {
    value: Mutex<Option<T>>,
    condvar: Condvar,
}

To create a channel, we make the shared state, and then give the sender and receiver access to it:

fn channel<T: Send>() -> (Sender<T>, Receiver<T>) {
    let shared = Arc::new(SharedState {
        value: Mutex::new(None),
        condvar: Condvar::new(),
    });
    let sender = Sender { shared: shared.clone() };
    let receiver = Receiver { shared };
    (sender, receiver)
}

Finally, we can implement send on the sender. It will try to store the value into the slot guarded by the mutex, blocking so long as that slot already holds a value:

impl<T: Send> Sender<T> {
    pub fn send(&self, value: T) {
        let mut shared_value = self.shared.value.lock().unwrap();
        loop {
            if shared_value.is_none() {
                *shared_value = Some(value);
                self.shared.condvar.notify_all();
                return;
            }

            // wait until the receiver reads
            shared_value = self.shared.condvar.wait(shared_value).unwrap();
        }
    }
}

Finally, we can implement receive on the Receiver. This just waits until the shared.value field is Some, in which case it overwrites it with None and returns the inner value:

impl<T: Send> Receiver<T> {
    pub fn receive(&self) -> T {
        let mut shared_value = self.shared.value.lock().unwrap();
        loop {
            if let Some(value) = shared_value.take() {
                self.shared.condvar.notify_all();
                return value;
            }

            // wait until the sender sends
            shared_value = self.shared.condvar.wait(shared_value).unwrap();
        }
    }
}

Again, here is a link to the complete example on the Rust playground.

Dynamic set of services

In our example thus far we used a static Directory struct with fields. We might like to change to a more flexible setup, in which the set of services grows and/or changes dynamically. To do that, I would expect us to replace the directory with a HashMap mapping from a service name to the Sender for that service. We might even want to put that directory behind a mutex, so that if one service panics, we can replace the Sender with a new one. But at that point we're building up an entire actor infrastructure, and that's too much for one post, so I'll stop here. =)
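Still, a minimal sketch of that direction might look like the following (my own illustration: the string-keyed map and the Message type are assumptions, not code from the post):

use std::collections::HashMap;
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};

// A hypothetical message type shared by every service in this sketch.
enum Message {
    Text(String),
}

// The directory maps a service name to the Sender for that service.
// The mutex lets us replace a Sender if a service panics and is
// restarted.
type Directory = Arc<Mutex<HashMap<String, Sender<Message>>>>;

fn register(directory: &Directory, name: &str, sender: Sender<Message>) {
    directory.lock().unwrap().insert(name.to_string(), sender);
}

fn send_to(directory: &Directory, name: &str, message: Message) -> bool {
    match directory.lock().unwrap().get(name) {
        Some(sender) => sender.send(message).is_ok(),
        None => false, // no such service registered
    }
}

fn main() {
    let directory: Directory = Arc::new(Mutex::new(HashMap::new()));
    let (sender, receiver) = channel();
    register(&directory, "service1", sender);
    send_to(&directory, "service1", Message::Text("hello".into()));
    match receiver.recv() {
        Ok(Message::Text(text)) => println!("service1 received: {}", text),
        Err(_) => println!("sender hung up"),
    }
}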

Generalizing the pattern

So what was the general lesson here? It often happens that, when writing in a GC’d language, we get accustomed to lumping all kinds of data together, and then knowing what data we should and should not touch. In our original JS example, all the services had a pointer to the complete state of one another – but we expected them to just leave messages and not to mutate the internal variables of other services. Rust is not so trusting.

In Rust, it often pays to separate out the “one big struct” into smaller pieces. In this case, we separated out the “message processing” part of a service from the rest of the service state. Note that when we implemented this message processing – e.g., our channel impl – we still had to use some caution. We had to guard the data with a lock, for example. But because we’ve separated the rest of the service’s state out, we don’t need to use locks for that, because no other service can reach it.

This case had the added complication of a cycle and the associated memory management headaches. It’s worth pointing out that even in our actor implementation, the cycle hasn’t gone away. It’s just reduced in scope. Each service has a reference to the directory, and the directory has a reference to the Sender for each service. As an example of where you can see this, if you have your service iterate over all the messages from its receiver (as we did):

for msg in self.receiver { .. }

This loop will continue until all of the senders associated with this Receiver go away. But the service itself has a reference to the directory, and that directory contains a Sender for this receiver, so this loop will never terminate – unless we explicitly break. This isn’t too big a surprise: Actor lifetimes tend to require “active management”. Similar problems arise in GC systems when you have big cycles of objects, as they can easily create leaks.
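One way to arrange that explicit break (again my own sketch rather than code from the post, reworking Message1 as an enum for illustration) is to give the message type a shutdown variant and break out of the loop when it arrives:

use std::sync::mpsc::{channel, Receiver};

// Reworking Message1 as an enum (an assumption for this sketch).
enum Message1 {
    Work(String),
    Shutdown,
}

fn run_service1(receiver: Receiver<Message1>) {
    for message in receiver {
        match message {
            Message1::Work(payload) => {
                // ... process the payload ...
                println!("service1 working on {}", payload);
            }
            // Break explicitly: the loop would otherwise never end,
            // because the directory still holds a Sender for this
            // receiver.
            Message1::Shutdown => break,
        }
    }
}

fn main() {
    let (sender, receiver) = channel();
    let worker = std::thread::spawn(move || run_service1(receiver));
    sender.send(Message1::Work("a request".to_string())).unwrap();
    sender.send(Message1::Shutdown).unwrap();
    worker.join().unwrap();
}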


The Servo Blog: This Week In Servo 114

Mozilla planet - ma, 24/09/2018 - 02:30

In the past week, we merged 95 PRs in the Servo organization’s repositories.

Big shout-out to @eijebong for digging into the underlying cause of an ongoing, frustrating intermittent problem with running websocket tests in CI.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Exciting Work in Progress

Notable Additions
  • jdm worked around the problem preventing cross-compilation on macOS.
  • paulrouget corrected the pixel density reported by Android builds.
  • sumit0190 implemented automatic profiling support for IPC bytes channels.
  • jdm made it possible to use RUST_LOG with Android builds.
  • nox improved the validation of GLSL names.
  • nupurbaghel implemented missing steps for the HTMLImageElement.complete API.
  • jdm added support for DEPTH_STENCIL renderbuffers on Android devices.
  • Manishearth implemented the missing BiquadFilter WebAudio node type.
  • nox improved the cross-origin checks for sharing canvas data.
  • paavininanda implemented all of the relevant mutations for responsive image elements.
  • jdm added CI for macOS -> Android cross-compilation.
  • AugstinCB reclaimed some memory that was leaked when a pipeline is closed.
  • emilio improved the upstream build system integration for SpiderMonkey.
  • ferjm corrected a number of implementation errors in the AudioBuffer API.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Mozilla Localization (L10N): L10N Report: September Edition

Mozilla planet - vr, 21/09/2018 - 14:09

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

Javanese and Sundanese locales have been added to Firefox Rocket and are now launching in Indonesia. Congrats to these new teams and to their new localizers! Come check out the Friends of the Lion section below for more information on who they are.

New content and projects

What’s new or coming up in Firefox desktop

There are currently few strings landing in Nightly, making it a good time to catch up (if your localization is behind), or test your work. Focus in particular on certificate errors and preferences, given the amount of changes happening around Content blocking, Privacy and Security. The deadline to ship any localization update in Beta (Firefox 63) is October 9.

It’s been a while since we talked about Fluent, but that doesn’t mean that the migration is not progressing. Expect a lot of files moving to Fluent in the coming weeks, thanks to the collaboration with students from the Michigan University.

What’s new or coming up in mobile

Firefox Focus for Android is launching a new version this week, and it’s shipping with 9 new locales: Aymara (ay), Galician (gl), Huastec (hus), Marathi (mr), Punjabi (pa-IN), Náhuat Pipil (ppl), K’iche’ (quc), Sundanese (su), Yucatec Maya (yua).

What’s new or coming up in web projects

Activate Mozilla

The Activate Mozilla campaign aims at the grassroots of volunteer contributions. The initiative wants to bring more clarity about the most important areas to contribute to at Mozilla right now, by providing guidance to mobilizers on how to recruit contributors and create community around meaningful Mozilla projects.

The project is added to Pontoon in a few required languages, with an opt-in option for other languages. Once completion reaches 95%, the locale will be enabled on production. There is no staging server, and any changes in Pontoon will be synced up and pushed to production directly.

Mozilla.org

Mozilla.org has a new tracking protection tour that highlights the content blocking feature. The update is ready for localization, and the tour will be available on production in early October.

Common Voice

The new home page was launched earlier this month. It is only available for 50% of the users during the A/B testing phase. The new look will roll out to all users in the next sprint. The purpose of this redesign is to convert more visitors to the site to contribute. We hope the new look will help improve the conversion rate we currently have, which is between 10-15%. Though the change doesn’t add more localization work at the string level, the team believes it will bring more people to donating their voices to established locales.

If your language is not available on the current site, the best way to make a request is through Pontoon by following these step-by-step instructions. To learn more about Common Voice project and discussions, check them out on Discourse.

What’s new or coming up in Foundation projects

A very quick update on the fundraising campaign: the September fundraiser is being sent out in English and localization will start very soon.

On the Advocacy side, the copyright campaign will continue after the unfortunate vote from the EU Parliament and the next steps are being discussed. The final vote is scheduled towards the end of year.

Stay tuned!

What’s new or coming up in Support

What’s new or coming up in Pontoon
  • New homepage. The new homepage was designed and developed by Pramit Singhi as part of Google Summer of Code. Apart from the new design, it also brings several important content changes. It presents Pontoon as the place to localize Mozilla projects, explains the “whys” and “hows” of localization at Mozilla in general, brings a clear call to action, and moves in-context localization demo to a separate page.

Guided tour. Another product of Google Summer of Code is a guided tour of Pontoon, designed and developed by an experienced Pontoon contributor Vishal Sharma. It’s linked from the homepage as a secondary call to action, and consists of two pieces: the actual tour, which explains the translation user interface, and the tutorial project, which demonstrates more details through carefully chosen strings.

  • System projects. Vishal also developed the ability to mark projects as “system projects”, which are hidden from dashboards. The aforementioned Tutorial project and Pontoon Intro are both treated as system projects.
  • Read-only locales. Last month we enabled locales previously not in Pontoon, in read-only mode. That means dashboards and the API now present full project status across all locales, all Mozilla translations are accessible in the Locales tab, and the Translation Memory of locales previously not available in Pontoon has improved. Check out the newsgroup for more details on which locales were enabled for which projects.
  • Unchanged filter improvement. Thanks to Raivis Dejus, the Unchanged filter now works as expected. Previously, it returned all strings for which the source string is the same as one of the suggestions. Now, it only compares source strings with active translations (shown in the string list).

 

Pontoon tips

Here’s a quick Pontoon tip that a lot of people already know, but can still help some.

Pontoon has a feature that automatically identifies specific elements in strings, highlights them and makes them clickable. Here’s an example:

This feature is meant to allow you to easily copy placeables into your translation by clicking them. This saves you time and reduces the risk of introducing typos if you manually rewrite them, or partially select them while copy/pasting them.

One common misconception is to think those elements should always be kept in English. While it’s certainly true in multiple cases (variables, HTML tags like in the screenshot above…), there are several places where Pontoon highlights parts of a string that could or should be translated.

Here’s an example where all the highlighted elements should be translated:

Here Pontoon thinks those words are acronyms, and that you could potentially keep them in your translation. It turns out here they are not acronyms, it’s just a sentence in full caps, so we can simply ignore the highlights and translate it like any other string.

Here’s a last example where Pontoon successfully detects an acronym, and it could have been kept but the localizer decided to translate it anyway (and it’s okay):

To summarize the feature, Pontoon does its best to guess what parts of a string you are likely to keep in your translation, but these are suggestions only.

Also remember, you’re not alone! If you have a doubt, you can always reach out to the l10n PM owning the project. They will clarify the context for you and help you better identify false positives.

Events
  • Jakarta (Indonesia) Rocket Sprint held on August 11-12 added two new languages to the product. Javanese contributor Akhlis summarized the weekend activity with his blog.
  • Pune (India) l10n community event just happened (Sept. 1-2). Come check out some pictures:
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)
Friends of the Lion
  • Ali Demirtaş, who surpassed the goal of over 1 thousand suggestions for Turkish.
  • Congratulations to the contributors who have helped launch Firefox Rocket in Javanese and Sundanese! Here they are:
    • Javanese Team:
      • Rizki Dwi Kelimutu
      • Dian Ina Mahendra
      • Armen Ringgo Sukiro
      • Nur Fahmia
      • Nuri Abidin
      • Akhlis Purnomo
    • Sundanese Team:
      • Fauzan Alfi Agirachman
      • Muhammad Fadhil
      • Mira Marsellia
      • Yusup Ramdani
      • Iskandar Alisyahbana Adnan
  • Ahmad Nourallah, who localized numerous Support articles as part of the “Top 20” month.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.


Paul Bone: Disassembling JITed code in GDB

Mozilla planet - ma, 06/08/2018 - 16:00

I’ve been making changes to the JIT in SpiderMonkey, and sometimes get a SEGFAULT, okay so open it in gdb, then this happens:

Thread 1 "js" received signal SIGSEGV, Segmentation fault. 0x0000129af35af5e9 in ?? ()

Not helpful, maybe there’s something in the stack?

(gdb) backtrace
#0  0x0000129af35af5e9 in ()
#1  0x0000129af35b107d in ()
#2  0xfff9800000000000 in ()
#3  0xfff8800000000002 in ()
#4  0xfff8800000000002 in ()

Still not helpful, I’m reasonably confident the crash is in JITed code which has no debugging symbols or other info. So I don’t know what it’s actually executing when it crashed.

In case it’s not apparent, this is a short blog post where I can make notes of one way to get some more information when debugging JITed code.

First of all, those really large addresses (frames 2, 3 and 4) look suspicious. I’m not sure what causes that.

Now, I know the change I made to the JIT, so it’s likely that that’s the code that’s crashing, I just don’t know why. It would help to see what code is being executed:

(gdb) disassemble
No function contains program counter for selected frame.

What it’s trying to say is that the current program counter at this level in the backtrace does not correspond with the C program (SpiderMonkey). So unless we did a call or jump to something invalid, we’re probably executing JITed code.

Let’s get more info:

(gdb) info registers
rax            0x7ffff54b30c0      140737308733632
rbx            0xe4e4e4e400000891  -1953184670468274031
rcx            0xc                 12
rdx            0x7ffff54c1058      140737308790872
rsi            0xa                 10
rdi            0x7ffff54c1040      140737308790848
rbp            0x7fffffff9438      0x7fffffff9438
rsp            0x7fffffff9418      0x7fffffff9418
r8             0x7fffffff9088      140737488326792
r9             0x8                 8
r10            0x7fffffff9068      140737488326760
r11            0x7ffff5d2f128      140737317630248
r12            0x0                 0
r13            0x0                 0
r14            0x7ffff54a0040      140737308655680
r15            0x0                 0
rip            0x129af35af5e9      0x129af35af5e9
eflags         0x10202             [ IF RF ]
cs             0x33                51
ss             0x2b                43
ds             0x0                 0
es             0x0                 0
fs             0x0                 0
gs             0x0                 0

These are the values in the CPU registers. The debugger uses the rip (program counter), rsp (stack pointer) and rbp (frame pointer) registers to know what it’s executing and to read the stack, including the calls that lead to this one. We can use this too; we’re going to use rip to figure out what’s being executed, its current value is 0x129af35af5e9.

(gdb) dump memory code.raw 0x129af35af5e9 0x129af35af600

Then in a shell:

$ hexdump -C code.raw
00000000  83 03 01 c7 02 4b 00 00  00 e9 82 00 00 00 49 bb  |.....K........I.|
00000010  a8 ab d1 f5 ff 7f 00                              |.......|

I have asked gdb, to write the contents of memory at the instruction pointer to a file named code.raw. Note that on x86-64 you need to write at least 15 bytes, as some instructions can be that long; I have 23 bytes.

I’d normally disassemble code using the objdump program:

$ objdump -d code.raw
objdump: code.raw: File format not recognised

In this case it needs extra clues about the raw data in this file. We tell it the file format, the machine "i386" and give the disassembler more information about the machine "x86-64".

$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary

Disassembly of section .data:

00000000 <.data>:
   0:   83 03 01                addl   $0x1,(%rbx)
   3:   c7 02 4b 00 00 00       movl   $0x4b,(%rdx)
   9:   e9 82 00 00 00          jmpq   0x90
   e:   49                      rex.WB
   f:   bb a8 ab d1 f5          mov    $0xf5d1aba8,%ebx
  14:   ff                      (bad)
  15:   7f 00                   jg     0x17

Yay. I can see the instruction it crashed on. Adding the number 1 to the 32-bit value stored at the address pointed to by rbx. I’d like some more context, so I have to get the instructions that lead to this. Note that after the jmpq instruction nothing makes sense, that’s okay since that jump is always taken.

(gdb) dump memory code.raw 0x2ce07c3895e6 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary

Disassembly of section .data:

00000000 <.data>:
   0:   49 8b 1b                mov    (%r11),%rbx
   3:   83 03 01                addl   $0x1,(%rbx)
   6:   c7 02 4b 00 00 00       movl   $0x4b,(%rdx)
   c:   e9 82 00 00 00          jmpq   0x93

When I go back three bytes I get lucky and find another valid instruction that also makes sense.

(gdb) dump memory code.raw 0x2ce07c3895e5 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary

Disassembly of section .data:

00000000 <.data>:
   0:   00 49 8b                add    %cl,-0x75(%rcx)
   3:   1b 83 03 01 c7 02       sbb    0x2c70103(%rbx),%eax
   9:   4b 00 00                rex.WXB add %al,(%r8)
   c:   00 e9                   add    %ch,%cl
   e:   82                      (bad)
   f:   00 00                   add    %al,(%rax)
...

Gibberish. Unfortunately I just have to guess which byte an instruction might begin on. Or go back byte-by-byte finding instructions that make sense. There was quite a bit of experimentation, and a lot more gibberish until I found:

(gdb) dump memory code.raw 0x2ce07c3895dd 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary

Disassembly of section .data:

00000000 <.data>:
   0:   bb 28 f1 d2 f5          mov    $0xf5d2f128,%ebx
   5:   ff                      (bad)
   6:   7f 00                   jg     0x8
   8:   00 49 8b                add    %cl,-0x75(%rcx)
   b:   1b 83 03 01 c7 02       sbb    0x2c70103(%rbx),%eax
  11:   4b 00 00                rex.WXB add %al,(%r8)
  14:   00 e9                   add    %ch,%cl
  16:   82                      (bad)
  17:   00 00                   add    %al,(%rax)
...

This is almost correct (except for all the gibberish). But at least it starts on an instruction that kind-of makes sense with a valid-looking memory address. But wait, that instruction uses ebx, a 32-bit register, which is not what I’m expecting since the code I’m JITing works with 64-bit memory addresses. And all that gibberish could be part of a memory address; it has bytes like 0xff and 0x7f in it!

I go back one more byte:

(gdb) dump memory code.raw 0x2ce07c3895dc 0x2ce07c3895f7
...
$ objdump -b binary -m i386 -M x86-64 -D code.raw

code.raw:     file format binary

Disassembly of section .data:

00000000 <.data>:
   0:   49 bb 28 f1 d2 f5 ff    movabs $0x7ffff5d2f128,%r11
   7:   7f 00 00
   a:   49 8b 1b                mov    (%r11),%rbx
   d:   83 03 01                addl   $0x1,(%rbx)
  10:   c7 02 4b 00 00 00       movl   $0x4b,(%rdx)
  16:   e9 82 00 00 00          jmpq   0x9d

Got it. That’s a long instruction (which I’ll talk more about in my next article), now that we have the extra byte at the beginning. x86 has prefix bytes for some instructions which can override some things about the instruction. In this case 0x49 says this instruction operates on 64-bit data (well, 0x48 says that; the +1 is part of the register address).

And there’s the bug (3rd line). I’m dereferencing this address, the one that I load into r11 once, and then again during the addl. I should only de-reference it once. The cause was that I misunderstood SpiderMonkey’s macro assembler’s mnemonics.

Update 2018-08-07

One response to this pointed out that I could have just used:

(gdb) disassemble 0x12345, +0x100

To disassemble a range of memory, and wouldn’t have had the "No function contains program counter for selected frame." error. They even suggested I could use something like:

(gdb) disassemble $rip-50, +0x100

I’ll definitely try these next time; they might not be the exact syntax, as I haven’t tested them.

Update 2018-08-18

Another tip is to use: x/20i $pc

That’s the whole command. x means that GDB should use the $pc as a memory location and not as a literal; /20i means "treat that memory location as containing instructions and show 20 of them".

You can also use this with display, like in display x/4i $pc so that every time you stepi, it will auto-print the next 4 instructions.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Decision in Oracle v. Google Fair Use Case Could Hinder Innovation in Software Development

Mozilla planet - wo, 18/04/2018 - 02:59

The technology industry was dealt a major setback when the Federal Circuit recently decided in Oracle v. Google that Google’s use of Java “declaring code” was not a fair use. The copyright doctrine of Fair Use impacts a developer’s ability to learn from and improve on the work of others, which is a crucial part of software development. Because of this ruling, copyright law today is now at odds with how software is developed.*

This is the second time in this eight-year case that the Federal Circuit’s ruling has diverged from how software is written. In 2014, the court decided that declaring code can be copyrighted, a ruling with which we disagreed. Last year we filed another amicus brief in this case, advocating that Google’s implementation of the APIs should be considered a fair use. In this recent decision, the court found that copying the Java declaring code was not a protected fair use of that code.

We believe that open source software is vital to security, privacy, and open access to the internet. We also believe that Fair Use is critical to developing better, more secure, more private, and more open software because it allows developers to learn from each other and improve on existing work. Even the Mozilla Public License explicitly acknowledges that it “is not intended to limit any rights” under applicable copyright doctrines such as fair use.

The Federal Circuit’s decision is a big step in the wrong direction. We hope Google appeals to the Supreme Court and that the Supreme Court sets us back on a better course.

 

* When Google released its Android operating system, it incorporated some code from Sun Microsystem’s Java APIs into the software. Google copied code in those APIs that merely names functions and performs other general housekeeping functions (called “declaring code”) but wrote all the substantive code (called “implementing code”) from scratch. Software developers generally use declaring code to define the names, format, and organization ideas for certain functions, and implementing code to do the actual work (telling the program how to perform the functions). Developers specifically rely on “declaring code” to enable their own programs to interact with other software, resulting in code that is efficient and easy for others to use.

The post Decision in Oracle v. Google Fair Use Case Could Hinder Innovation in Software Development appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Chris Ilias: Why we participate in support

Mozilla planet - ma, 19/03/2018 - 16:05

Why do you participate in user support?
Have you ever wondered why the people who answer support questions and write documentation take the time to do it?

This is a followup to a post I wrote about dealing with disgruntled users.

Firefox is Mozilla’s tool for pushing the industry toward open standards and away from software silos. If Firefox holds enough market share in the browser world, web developers are forced to support open standards.
Users will not use Firefox if they do not know how to use it, or if it is not working as expected. Support exists to retain users. If their experience of using Firefox is bad, we’re here to make it good, so they continue to use Firefox.

That experience includes user support. The goal is not only to help users with their problems, but remove any negative feeling they may have had. That should be the priority of every person participating in support.

Dealing with disgruntled users is an inherent part of user support. In those cases, it is important to remind ourselves what the user wants to achieve, and what it takes to make their experience a pleasant one.

In the end, users will be more willing to forgive individual issues out of fondness for the company. That passion for helping users will attract others, and the community will grow.

Categorieën: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - February 28, 2018

Mozilla planet - wo, 28/02/2018 - 01:00

Here’s what happened on the MozMEAO SRE team from February 16 - February 28th.

Current work

support.mozilla.org (SUMO)

Most of our recent efforts have been related to the SUMO migration to AWS. We’ll be running the stage and production environments in our Oregon-A and Oregon-B clusters, with read-only failover in Frankfurt.

Links
Categorieën: Mozilla-nl planet

Mike Conley: Making tab switching faster in Firefox with tab warming

Thunderbird - do, 11/01/2018 - 21:00
Making tab operations fast

Since working on the Electrolysis team (and having transitioned to working on various performance initiatives), I’ve been working on making tab operations feel faster in Firefox. For example, I wrote a few months back about a technique we used to make tab closing faster.

Today, I’m writing to talk about how we’re trying to make tab switching feel faster in some cases.

What is “tab warming”?

When you switch a tab in multi-process Firefox, traditionally we’d send a message to the content process to tell it to paint its layers, and then we’d wait for the compositor to tell us that it had received those layers before finally doing the tab switch.

With the exception of some degenerate cases, this mechanism has worked pretty well since we introduced it, but I think we can do slightly better.

“Tab warming” is what we’re calling the process of pre-emptively rendering the layers for a tab, and pre-emptively uploading them to the compositor, when we’re pretty sure you’re likely to switch to that tab.1

Maybe this is my Canadian-ness showing, but I like to think of it almost like coming in from shoveling snow off of the driveway, and somebody inside has already made hot chocolate for you, because they knew you’d probably be cold.

For many cases, I don’t actually think tab warming will be very noticeable; in my experience, we’re able to render and upload the layers2 for most sites quickly enough for the difference to be negligible.

There are certain sites, however, that we can’t render and upload layers for as quickly. These are the sites that I think warming will help with.

Here’s an example of such a site

The above link is using SVGs and CSS to do an animation. Unfortunately, on my MBP, if I have this open in a background tab in Firefox right now, and switch to it, there’s an appreciable delay between clicking that tab and it finally being presented to me.3

With tab warming enabled, when you hover over the tab with your mouse cursor, the rendering of that sophisticated SVG will occur while your finger is still on its way to click on the mouse button to actually choose the tab. Those precious milliseconds are used to do the rendering and uploading, so that when the click event finally comes, the SVG is ready and waiting for you.

Assuming a sufficiently long delay between hover and click, the tab switch should be perceived as instantaneous. If the delay was non-zero but still not long enough, we will have nonetheless shaved that time off in eventually presenting the tab to you.

And in the event that we were wrong, and you weren’t interested in seeing the tab, we eventually throw the uploaded layers away.

On my own machine, this makes a significant difference in the perceived tab switch performance with the above site.

Trying it out in Nightly

Tab warming is currently controlled via this preference:

browser.tabs.remote.warmup.enabled

and is currently off by default while we test it and work out more kinks. If you’re interested in helping us test, flip that preference in Firefox Nightly, and file bugs if you see it introducing strange behaviour.
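If you’d rather not toggle it by hand every time, the same thing can live in a user.js file in your Nightly profile directory; a minimal sketch, assuming the preference keeps this name while it’s being tested:

user_pref("browser.tabs.remote.warmup.enabled", true);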

Hopefully we’ll be able to flip it on by default soon. Stay tuned!

Translations

Thanks to generous volunteers on the web, this article has been translated to the following languages:

  1. Russian, by HTR Mobile
  1. Right now, we simply detect whether you’re hovering a tab with the mouse to predict that you’re likely going to choose it, but there are certainly more opportunities to introduce warming based on other user behaviours. 

  2. We can even interrupt JavaScript to do this, thankfully! 

  3. I suspect WebRender will eventually help with the raw rendering performance, but that’s still a little ways off from being shipped to users. 

Categorieën: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #11

Mozilla planet - wo, 03/01/2018 - 14:40

Newsletter #11 is finally here, even later than usual due to an intense week in Austin where all of Mozilla’s staff and a few independent contributors gathered, followed by yours truly taking two weeks off.

Our focus before the Austin allhands was on performance, especially on Windows. We had some great results out of this and are shifting priorities back to correctness issues for a little while.

Notable WebRender changes
  • Martin added some clipping optimizations in #2104 and #2156.
  • Ethan improved the performance of rendering large ellipses.
  • Kvark implemented different texture upload strategies to be selected at runtime depending on the driver. This has a very large impact when using Windows.
  • Kvark worked around the slow depth clear implementation in ANGLE.
  • Glenn implemented splitting rectangle primitives, which allows moving a lot of pixels to the opaque pass and reducing overdraw.
  • Ethan sped up ellipse calculations in the shaders.
  • Morris implemented the drop-shadow() CSS filter.
  • Gankro introduced deserialize_from in serde for faster deserialization, and added it to WebRender.
  • Glenn added a dual-source blending path for subpixel text when supported, yielding performance improvements when the text color differs between text runs.
  • Many people fixed a lot of bugs, too many for me to list them here.
Notable Gecko changes
  • Sotaro made Gecko use EGL_EXPERIMENTAL_PRESENT_PATH_FAST_ANGLE for WebRender. This avoids a full screen copy when presenting. With this change, the peak fps of http://learningwebgl.com/lessons/lesson03/index.html on a P50 (Win10) went from 50fps to 60fps.
  • Sotaro prevented video elements from rendering at 60fps when they have a lower frame rate.
  • Jeff removed two copies of the display list (one of which happens on the main thread).
  • Kats removed a performance cliff resulting from linear search through clips. This drastically improves MazeSolver time (~57 seconds down to ~14 seconds).
  • Jeff removed a copy of the glyph buffer.
  • Lots and lots of more fixes and improvements.
Enabling WebRender in Firefox Nightly

In about:config:

  • set “gfx.webrender.enabled” to true,
  • set “gfx.webrender.blob-images” to true,
  • set “image.mem.shared” to true,
  • if you are on Linux, set “layers.acceleration.force-enabled” to true.

Note that WebRender can only be enabled in Firefox Nightly.
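If you’d rather set these once in a user.js file in your Nightly profile than flip them by hand, the equivalent lines would look roughly like this (a sketch based on the list above; the last preference is only needed on Linux):

user_pref("gfx.webrender.enabled", true);
user_pref("gfx.webrender.blob-images", true);
user_pref("image.mem.shared", true);
// Linux only:
user_pref("layers.acceleration.force-enabled", true);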

Categorieën: Mozilla-nl planet

Pagina's