
Mozilla Reaffirms Commitment to Transgender Equality

Mozilla Blog - Wed, 07/11/2018 - 23:21

As an employer, Mozilla has a long-standing commitment to diversity, inclusion and fostering a supportive work environment. In keeping with that commitment, today we join the growing list of companies publicly opposed to efforts to erase transgender protections through reinterpretation of existing laws and regulations, as well as any policy or regulation that violates the privacy rights of those who identify as transgender, gender non-binary, or intersex.

The rights, identities and humanity of transgender, gender non-binary and intersex people deserve affirmation.

A workplace that is inclusive of a diversity of backgrounds and experiences is also good for business. We’re glad to see companies across the industry joining the Human Rights Campaign’s statement and sharing this perspective.

At Mozilla, we have clear community guidelines dictating our expectations of employees and volunteers and recently rolled out workplace guidelines for transitioning individuals and their managers. These actions are a part of our commitment to ensuring a welcoming, respectful and inclusive culture.

We urge the federal government to end its exploration of policies that would erase transgender protections and erode decades of hard work to create parity, respect, opportunity and understanding for transgender professionals.

The post Mozilla Reaffirms Commitment to Transgender Equality appeared first on The Mozilla Blog.


Chris Pearce: On learning Go and a comparison with Rust

Mozilla planet - Mon, 05/11/2018 - 03:28
I spoke at the AKL Rust Meetup last month (slides) about my side project doing data mining in Rust. There were a number of engineers from Movio there who use Go, and I've been keen for a while to learn Go and compare it with Rust and Python for my data mining side projects, so that inspired me to knuckle down and learn Go.
Go is super simple. I was able to learn the important points in a couple of evenings by reading GoByExample, and I very quickly had an implementation of the FPGrowth algorithm in Go up and running. For reference, I also have implementations of FPGrowth in Rust, Python, Java, and C++.
As a language, Go is very simple. It lacks many of the higher-level constructs of other modern languages, but the lack of these makes it very easy to learn, straightforward to use, and easy to read and understand. It feels similar to Python. There's little hidden functionality; you can't overload operators, for example, and there are no generics or macros, so the implementation for everything has to be rewritten for every type. This gets tedious, but it does at least mean the implementation of everything is simple and explicit, with the code right in front of you.
I also really miss the functional constructs that are built into many other languages, like mapping a function over a sequence, filter, any, all, etc. With Go, you need to reimplement these yourself, and because there are no generics (yet), you need to do it for every type you want to use them with. The lack of generics is also painful when writing custom containers.
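To make that concrete, here's a minimal sketch (my illustration, not code from the post) of reimplementing a filter helper per type in pre-generics Go; the function names are invented:

```go
package main

import "fmt"

// Without generics, a "filter" helper has to be written per element type.
// filterInts keeps only the ints for which keep returns true.
func filterInts(xs []int, keep func(int) bool) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}

// A []string version is a near-verbatim copy with the types swapped.
func filterStrings(xs []string, keep func(string) bool) []string {
	out := make([]string, 0, len(xs))
	for _, x := range xs {
		if keep(x) {
			out = append(out, x)
		}
	}
	return out
}

func main() {
	evens := filterInts([]int{1, 2, 3, 4}, func(x int) bool { return x%2 == 0 })
	fmt.Println(evens) // [2 4]
}
```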
Not being able to key a map with a struct containing a slice was a nuisance for my problem domain; I ended up having to write a custom tree-set data structure because of this, though it was very easy to write thanks to built-in maps. Rust, or even Java, by contrast, has traits/functions you can implement to ensure your types can be hashed.
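A hedged illustration of the problem (the ItemSet type and key helper below are hypothetical, not the author's tree-set): a struct containing a slice is not comparable, so it can't be a map key, and one common workaround is to derive a comparable stand-in key:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type ItemSet struct {
	Items []int // slices are not comparable, so ItemSet can't be a map key
}

// key derives a comparable string stand-in for the slice, assuming Items
// is already sorted; the post's author wrote a custom tree-set instead.
func key(s ItemSet) string {
	parts := make([]string, len(s.Items))
	for i, v := range s.Items {
		parts[i] = strconv.Itoa(v)
	}
	return strings.Join(parts, ",")
}

func main() {
	// var counts map[ItemSet]int // compile error: invalid map key type ItemSet
	counts := map[string]int{}
	counts[key(ItemSet{Items: []int{1, 3, 7}})]++
	fmt.Println(counts)
}
```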
The package management for Go feels a bit tacked on; requiring all Go projects to live under GOPATH seems to be a consequence of not having a tool the equal of Rust's Cargo coupled with something like crates.io.
And Go's design decision to use the case of a symbol's first letter to express whether that symbol is public or private is annoying. I have a long-standing habit of using foo as the name for a single instance of type Foo, but that pattern doesn't work in Go. The consequence of this design choice is that it leads programmers to use lots of non-descriptive names for things. Like single-letter variable names. Or the dreaded myFoo.
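For readers new to Go, a rough sketch of the visibility rule (the fpgrowth package and type names here are invented for illustration):

```go
package fpgrowth

// The case of the first letter controls visibility: FPTree is exported
// (usable from other packages), fpNode is private to this package.
type FPTree struct {
	root *fpNode
}

type fpNode struct {
	item     int
	children []*fpNode
}

// NewFPTree is callable from other packages; newNode is not.
func NewFPTree() *FPTree       { return &FPTree{root: &fpNode{}} }
func newNode(item int) *fpNode { return &fpNode{item: item} }
```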
The memory model of Go is simple, and again I think the simplicity is a strength of the language. Go uses escape analysis to determine whether a value escapes outside of a scope, and moves such values to the heap if so. Go also dynamically grows goroutines' stacks, so there's no stack overflow. Go is garbage collected, so you don't have to worry about deallocating things.
I found that thinking of values as being on the heap or stack wasn't a helpful mental model with Go. Once I started to think of variables as references to values and values being shared when I took the address (via the & operator), the memory model clicked.
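A small sketch of that mental model (my example, not the author's code): values are copied unless you explicitly share them by taking their address, and the compiler's escape analysis decides where they live:

```go
package main

import "fmt"

type Counter struct{ n int }

func bumpValue(c Counter)   { c.n++ } // receives a copy; caller unaffected
func bumpShared(c *Counter) { c.n++ } // receives a shared reference; caller sees the change

func main() {
	c := Counter{}
	bumpValue(c)
	bumpShared(&c)   // taking the address with & shares the value
	fmt.Println(c.n) // 1

	// Escape analysis is free to move c to the heap if a reference to it
	// outlives this function; the programmer doesn't have to care.
	leaked := &c
	_ = leaked
}
```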
I think Go's simple memory model and syntax make it a good candidate as a language to teach to beginner programmers, more so than Rust.
The build times are impressively fast, particularly for an incremental build. After the initial build of my project, I was getting build times too fast to perceive on my 2015 13" MBP, which is impressive. Rust has vastly slower build times.
The error messages produced by the Go compiler were very spartan. The Rust compiler produces very helpful error messages, and in general I think Rust is leading here.
Go has a very easy-to-use profile package which you can embed in your Go program. Combined with GraphViz, it produces simple CPU utilization graphs like this one:

[Figure: CPU profile graph produced by Go's "profile" package and GraphViz]
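The post doesn't include the profiling code, but a minimal sketch using the standard library's runtime/pprof (the "profile" package mentioned is likely a thin convenience wrapper over this) might look like the following; the workload function is a placeholder:

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
)

func main() {
	f, err := os.Create("cpu.pprof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Start CPU profiling; the resulting file can be rendered as a graph
	// with: go tool pprof -svg ./myprog cpu.pprof  (requires GraphViz).
	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	defer pprof.StopCPUProfile()

	runWorkload() // whatever you want to profile
}

func runWorkload() {
	sum := 0
	for i := 0; i < 100000000; i++ {
		sum += i
	}
	_ = sum
}
```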
Having an easy to use profiler bundled with your app is a huge plus. As we've seen with Firefox, this makes it easy for your users to send you profiles of their workloads on their own hardware. The graph visualization is also very simple to understand.
The fact that Go lacks the ability to mark variables/parameters as immutable is mind-boggling to me. Given the language designers came from C, I'm surprised by this. I've written enough multi-threaded and large system code to know the value of restricting what can mess with your state.
Goroutines are pretty lightweight and neat. You can also use them to make a simple "generator" object: spawn a goroutine to do your stateful computation, and yield each result by pushing it into a channel. The consumer can block on receiving the next value from the channel, and the producer will block when it pushes into a channel that hasn't yet been received on. Note you could do this with Rust too, but you'd have to spawn an OS thread to do it, which is more heavyweight than a goroutine; goroutines are basically userspace threads.
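A minimal sketch of such a generator (illustrative only, not code from the post): the goroutine produces values and blocks on the unbuffered channel until the consumer has received the previous one.

```go
package main

import "fmt"

// fib returns a channel that yields the first n Fibonacci numbers; the
// goroutine acts as a lightweight generator that blocks whenever the
// consumer hasn't received the previous value yet.
func fib(n int) <-chan int {
	out := make(chan int)
	go func() {
		a, b := 0, 1
		for i := 0; i < n; i++ {
			out <- a // blocks until the consumer receives
			a, b = b, a+b
		}
		close(out)
	}()
	return out
}

func main() {
	for v := range fib(10) { // blocks waiting for the next value
		fmt.Println(v)
	}
}
```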
Rust's Rayon parallelism crate is simply awesome, and using it I was able to easily and effectively parallelize my Rust FPGrowth implementation using Rayon's parallel iterators. As best as I can tell, Go doesn't have anything on par with Rayon for parallelism. Go's goroutines are great for lightweight concurrency, but they don't make it as easy as Rayon's par_iter() to trivially parallelize a loop. Note, parallelism is not concurrency.
All of my attempts to parallelize my Go FPGrowth implementation as naively as I'd parallelized my Rust+Rayon implementation resulted in a slower Go program. In order to parallelize FPGrowth in Go, I'd have to do something complicated, though I'm sure channels and goroutines would make that easier than in a traditional language like Java or C++.
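For context, a naive fan-out in Go looks roughly like the sketch below (an illustration, not the author's implementation); nothing in the language stops the closures from racing on shared state, which is the safety gap Rayon's borrow-checker-backed API closes in Rust:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// processAll fans work out across one goroutine per CPU chunk. Nothing
// here checks for shared mutable state; each index is written by exactly
// one goroutine only because the chunking is done carefully by hand.
func processAll(items []int, work func(int) int) []int {
	results := make([]int, len(items))
	var wg sync.WaitGroup
	chunks := runtime.NumCPU()
	chunkSize := (len(items) + chunks - 1) / chunks

	for start := 0; start < len(items); start += chunkSize {
		end := start + chunkSize
		if end > len(items) {
			end = len(items)
		}
		wg.Add(1)
		go func(lo, hi int) {
			defer wg.Done()
			for i := lo; i < hi; i++ {
				results[i] = work(items[i])
			}
		}(start, end)
	}
	wg.Wait()
	return results
}

func main() {
	out := processAll([]int{1, 2, 3, 4, 5}, func(x int) int { return x * x })
	fmt.Println(out)
}
```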
Go would really benefit from something like Rayon, but unfortunately due to Go's lack of immutability and a borrow checker, it's not safe to naively parallelize arbitrary loops like it is in Rust. So Rust wins on parallelism. Both languages are strong on concurrency, but Rust pulls ahead due to its safety features and Rayon.
Comparing Rust to Go is inevitable... Go to me feels like the spiritual successor to C, whereas Rust is the successor to C++.

I feel that Rust has a learning curve, and before you're over the hump, it can be hard to appreciate the benefits of the constraints Rust enforces. With Go, you get over that hump a lot sooner; with Rust you get over it a lot later, but the heights you reach afterwards are much higher.

Overall, I think Rust is superior, but if I'd learned Go first I'd probably be quite happy with Go.

The Servo Blog: This Week In Servo 118

Mozilla planet - Mon, 05/11/2018 - 01:30

In the past week, we merged 75 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Exciting Work in Progress

Notable Additions
  • eijebong updated hyper to a version that uses async I/O.
  • ferjm updated the gstreamer binaries to support more media types on Android.
  • notriddle fixed a web compatibility problem with assigning names to iframes.
  • ajeffrey created a Magic Leap port of Servo.
  • CYBAI avoided a crash when unloading browser contexts.
  • jdm worked around an incorrectly implemented GL API on older Android devices.
  • paulrouget added a shutdown synchronization mechanism for the Android port.
  • ferjm implemented byte-range seeking for file URLs.
  • jdm made it possible to disable bluetooth support at the platform level.
New Contributors
  • Sean Voisen

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


David Humphrey: Observations on Hacktoberfest 2018

Mozilla planet - Sun, 04/11/2018 - 02:53

This term I'm teaching two sections of our Topics in Open Source Development course. The course aims to take upper-semester CS students into open source projects, and get them working on real-world software.

My usual approach is to put the entire class on the same large open source project. I like this method, because it means that students can help mentor each other, and we can form a shadow-community off to the side of the main project. Typically I've used Mozilla as a place to do this work.

However, this term I've been experimenting with having students work more freely within open source projects on GitHub in general. I also wanted to try encouraging students to work on Hacktoberfest as part of their work.

Hacktoberfest

Hacktoberfest is a yearly event sponsored by DigitalOcean, Twilio, and GitHub, which encourages new people to get involved in open source. Submit five pull requests on GitHub during October, get a T-Shirt and stickers. This year, many companies have joined in and offered to also give extra swag or prizes if people fix bugs in their repos (e.g., Microsoft).

Some of my students have done Hacktoberfest in the past and enjoyed it, so I thought I'd see what would happen if I got all my students involved. During the month of October, I asked my students to work on 1 pull request per week, and also to write a blog post about the experience, what they learned, what they fixed, and to share their thoughts.

Results

Now that October has ended, I wanted to share what I learned by having my students do this. During the month I worked with them to answer questions, support their problems with git and GitHub, explain build failures on Travis, intervene in the comments of pull requests, etc. Along the way I got to see what happens when a lot of new people suddenly get thrust into open source.

You can find all the PRs and blog posts here, but let me take you through some raw numbers and interesting facts:

  • 61 students began Hacktoberfest and 91% were able to finish the required 5 PRs; three completed 6, and one student completed 10.
  • 307 Pull Requests were made to 180 repositories. As of Nov 1, 53% of these have already been merged.
  • 42,661 lines of code were added, 10,387 lines deleted in 3,465 files. Small changes add up.
  • The smallest PR was a fix for a single character; the largest added 10K lines by refactoring an iOS project to use a new Swift networking library (NOTE: there were a few really large PRs which I haven't included, because they were mostly generated files in the form of node_modules). Many PRs fixed bugs by simply deleting code. "Sir, are you sure this counts?" Yes, it most certainly does.

One of the things I was interested in seeing was which languages the students would choose to work in, when given the choice. As a result, I tried to keep track of the languages being used in PRs. In no particular order:

  • Rust
  • Swift
  • Scala
  • JavaScript
  • React
  • node.js
  • Markdown
  • PHP
  • Lua
  • Localization files (many types, many, many Natural languages)
  • JSON
  • C#
  • C++
  • Java
  • Go
  • Ruby
  • Python
  • HTML, CSS
  • Solidity (Ethereum)

I was also interested to see which projects the students would join. I'll discuss this more broadly below, but here are some of the more notable projects to which I saw the students submit fixes:

  • TravisCI
  • Microsoft VSCode
  • Mozilla Focus for iOS
  • Mozilla Addons (Frontend)
  • Brave for iOS
  • Handbrake
  • Ghost (blog platform)
  • Pandas
  • Keras
  • Jest
  • Monaco Editor
  • Microsoft (documentation for various projects)
  • Auth0
  • 30 Seconds of Code
  • Angular Material
  • Oh my zsh

A number of students did enough work in the project that they were asked to become collaborators. In two cases, the students were asked to take over the project and become maintainers! Careful what you touch.

Observations

I think the most valuable feedback comes from the student blogs, which I'll share quotes from below. Before I do that, let me share a few things that I observed and learned through this process.

  1. Because Hacktoberfest is a bit of a game, people game the system. I was surprised at the number of "Hacktoberfest-oriented" repos I saw. These were projects that were created specifically for Hacktoberfest in order to give people an easy way to contribute; or they were pre-existing but also provided a way to get started I hadn't foreseen. For example:

    I'll admit that I was not anticipating this sort of thing when I sent students out to work on open source projects. However, after reading their blog posts about the experience, I have come to the conclusion that for many people, just gaining experience with git, GitHub, and the mechanics of contribution, is valuable no matter the significance of the "code."

  2. For those students who focused too much on repos like those mentioned above, there was often a lot of frustration, since the "maintainers" were absent and the "community" was chaotic. People would steal issues from one another, merges would overwrite previous contributions, and there was little chance for personal growth. I saw many people tire of this treatment, and eventually decide, on their own, that they needed a better project. Eventually, the "easy" way became too crowded and untenable.

  3. Finding good bugs to work on continues to be hard. There are hundreds of thousands of bugs labeled "Hacktoberfest" on GitHub. But my students eventually gave up trying to use this label to find things. There were too many people trying to jump on the same bugs (~46K people participated in Hacktoberfest this year). I know that many of the students spent as much time looking for bugs as they did fixing them. This is an incredible statement, given that there are millions of open bugs on GitHub. The open source community in general needs to do a much better job connecting people with projects. Our current methods don't work. If you already know what you want to work on, then it's trivial. But if you are truly new (most of my students were), it's daunting. You should be able to match your skills to the kinds of work happening in repos, and then find things you can contribute toward. GitHub needs to do better here.

  4. A lot of my students speak more than one language, and wanted to work on localization. However, Hacktoberfest only counts work done in PRs toward your total. Most projects use third-party tools (e.g., Transifex) outside of GitHub for localization. I think we should do more to recognize localization as first-order contribution. If you go and translate a big part of a project or app, that should count toward your total too.

  5. Most of the students participated in three or more projects vs. focusing in just one. When you have as many students as I do all moving in and out of new projects, you come to realize how different open source projects are. For example, there are very few standards for how one "signs up" to work on something: some projects wanted an Issue; some wanted you to open a WIP PR with special tags; some wanted you to get permission; some wanted you to message a bot; etc. People (myself included) tend to work in their own projects, or within an ecosystem of projects, and don't realize how diverse and confusing this can be for new people. A project being on GitHub doesn't tell you that much about what the expectations are going to be, and simply having a CONTRIBUTING.md file isn't enough. Everyone has one, and they're all different!

Quotes from Student Blogs: "What. A. Month!"

More than the code they wrote, I was interested in the reflections of the students as they moved from being beginners to feeling more and more confident. You can find links to all of the blog posts and PRs here. Here are some examples of what it was like for the students, in their own words.

Different approaches to getting started

Many students began by choosing projects using tech they already knew.

"I decided to play to my strengths, and look at projects that had issues recommended for beginners, as well as developed in python"

"Having enjoyed working with Go during my summer internship, I was like: Hey, let's look for a smaller project that uses Go!"

Others started with something unknown, new, and different.

"my goal for this month is to contribute to at least two projects that use Rust"

"I'm in a position where I have to learn C# for another project in another course, so the first thing I did was search up issues on Github that were labeled 'hacktoberfest' and uses the C# language."

"My first pull request was on a new language I have not worked with before called Ruby...I chose to work with Ruby because it was a chance to work with something that I have not worked with before."

I was fascinated to see people evolve over the five weeks, and some who were scared to venture out of familiar territory at first, ended up in something completely new by the end.

"After I completed my pull request, I felt very proud of myself because I learned a new language which I did not think I was able to do at the beginning of this."

"I was able to learn a new programming language, Python, which was something I always wanted to do myself. If it wasn't for Hacktoberfest, I might keep procrastinating and never get started."

Overcoming Impostor Syndrome

Perhaps the single greatest effect of working on five PRs (vs. one or two) is that it helped to slowly convince people that, despite how they felt initially, they could in fact contribute, and did belong.

"Everyone posting here (on GitHub) is a genius, and it's very daunting."

"In the beginning, I am afraid to find issues that people post on GitHub."

"As I am looking at the C code I am daunted by the sheer size of some of the function, and how different functions are interacting with each other. This is surprisingly different from my experience with C++, or Object Oriented Programming in general."

Small successes bring greater confidence:

"Fixing the bug wasn't as easy as I thought it would be - the solution wasn't too straightforward - but with the help of the existing code as reference, I was able to find a solution. When working on a project at school, the instructions are laid out for you. You know which functions to work on, what they should do, and what they shouldn't do. Working on OSS was a completely different experience. School work is straightforward and because of that I'd be able to jump straight in and code. Being used to coding in this way made my first day trying to fix the bug very frustrating. After a day of trying to fix functions that looked as if they could be related to the bug without success, I decided to look at it again the next day. This time, I decided to take my time. I removed all of my edits to the code and spent around an hour just reading the code and drawing out the dependencies. This finally allowed me to find a solution."

"I learned that it's not going to be easy to contribute to a project right away, and that's okay...it's important to not get overwhelmed with what you're working on, and continue 1 step at a time, and ask questions when you need help. Be confident, if I can do it, you can too."

"I learned that an issue didn't need to be tagged as 'beginner friendly' for it to be viable for me, that I could and should take on more challenging issues at this point, ones that would feel more rewarding and worthwhile."

A Sense of Accomplishment

Many students talked about being "proud" of their work, or expressed surprise when a project they use personally accepted their code:

"I do feel genuinely proud that I have 10 lines of code landed in the VSCode project"

"After solving this issue I felt very proud. Not only did I contribute to a project I cared about, I was able to tackle an issue that I had almost no knowledge of going in, and was able to solve it without giving up. I had never worked with React before, and was scared that I would not be able to understand anything. In actuality it was mostly similar to JavaScript and I could follow along pretty well. I also think I did a good job of quickly finding the issue in the codebase and isolating the part of code where the issue was located."

"I used to think those projects are maintained by much more capable people than me, now the thinking is 'yeah, I can contribute to that.' To be honest, I would never think I was able to participate in an event like Hacktoberfest and contribute to the open-source community in this way. Now, it is so rewarding to see my name amount the contributors."

Joining the Larger Open Source Community

Finally, a theme I saw again and again was students beginning to find their place within the larger, global, open source community. Many students had their blogs quoted or featured on social media, and were surprised that other people around the world had seen them and their work.

"I really liked how people from different backgrounds and ethnicity can help one another when it comes to diversifying their code with different cultures. I really like the fact I got to share my language (Punjabi) with other people in the community through localizing"

"getting to work with developers in St.Petersburg, Holland, Barcelona and America."

"Now I can say I have worked with people from around the world!"

Conclusion

I'm really impressed with Hacktoberfest, and thankful for DigitalOcean, Twilio, GitHub and others who take it upon themselves to sponsor this. We need events like this where everyone is aware that new people are joining, and it's OK to get involved. Having a sense that "during October, lots of new people are going to be coming" is important. Projects can label issues, people can approach reviews a bit differently, and everyone can use a bit more patience and support. And there's no need for any of that to end now that it's November.

Hopefully this helps you if you're thinking about having your students join Hacktoberfest next year. If you have other questions, get in touch. And if you'd like to help me grade all this work, I'd be happy for that contribution :)


Cameron Kaiser: The Space Grey Mac mini is too little, too late and too much

Mozilla planet - Sat, 03/11/2018 - 20:26
I've got a stack of G4 minis here, one of which will probably be repurposed to act as a network bridge with NetBSD, and a white C2D mini which is my 10.6 test machine. They were good boxes when I needed them and they still work great, but Apple to its great shame has really let the littlest Mac rot. Now we have the Space Grey mini, ignoring the disappointing and now almost mold-encrusted 2014 refresh which was one step forward and two steps back, starting at $800 with a quadcore i3, 8GB of memory and 128GB of SSD. The pictures on Ars Technica also show Apple's secret sauce T2 chip on-board.

If you really, really, really want an updated mini, well, here's your chance. But with all the delays in production and Apple's bizarrely variable loadouts over the years the mini almost doesn't matter anymore and the price isn't cheap Mac territory anymore either (remember that the first G4 Mac mini started at $500 in 2005 and people even complained that was too much). If you want low-end, then you're going to buy a NUC-type device or an ARM board and jam that into a tiny case, and you can do it for less unless you need a crapload of external storage (the four Thunderbolt 3 ports on the Space Grey mini are admittedly quite compelling). You can even go Power ISA if you want to: the "Tiny Talos" a/k/a Raptor Blackbird is just around the corner, with the same POWER9 goodness of the bigger T2 systems in a single socket and the (in fairness: unofficial) aim is to get it under $700. That's what I'm gonna buy. Heck, if I didn't have the objections I do to x86, I could probably buy a lot more off-the-shelf for $800 and get more out of it since I'm already transitioning to Linux at home anyway. Why would I bother with chaining myself to the sinking ship that is macOS when it's clear Apple's bottom line is all about iOS?

Don't get me wrong: I'm glad to see Apple at least make a token move to take their computer lines seriously and the upgrade, though horribly delayed and notable more for its tardiness than what's actually in it, is truly welcome. And it certainly would build optimism and some much-needed good faith for whatever the next Mac Pro is being more than vapourware. But I've moved on and while I like my old minis, this one wouldn't lure me back.


Cameron Kaiser: TenFourFox FPR11b1 available

Mozilla planet - Sat, 03/11/2018 - 05:59
TenFourFox Feature Parity 11 beta 1 is now available (downloads, hashes, release notes). As mentioned, FPR11 and FPR12 will be smaller in scope due to less low-hanging fruit to work on and other competing projects such as the POWER9 JIT, but there are still new features and improvements.

The most notable one is my second attempt to get unique origin for data: URIs to stick (issue 525). This ran aground in FPR10 and had to be turned off because of compatibility issues with the Firefox 45 version of uBlock Origin, which would be too major an add-on for me to ignore breaking. FPR11 now has a shim in it to allow the old behaviour for data URL access initiated by the internal system principal (including add-ons) but use the new behaviour for web content, and seems to properly reject the same test cases while allowing uBlock to run normally. As before, we really need this in the browser to defend against XSS attacks, so please test thoroughly. Once again, if you experience unusual behaviour in this version, please flip security.data_uri.unique_opaque_origin to false and restart the browser. If the behaviour changes, then this was the cause and you should report it in the comments.

FPR11 also has a more comprehensive fix for sites that use Cloudflare Rocket Loader, a minor speedup to the JavaScript JIT, and new support for two JavaScript features (Symbol.hasInstance and Object.getOwnPropertyDescriptors). In addition, for a small additional speed boost, CSS error reporting is now disabled by default except for debug builds (JavaScript error reporting is of course maintained). If you want this back on, set layout.css.report_errors to true.

After a lot of test cases and poring through bug reports, I think that the issue with Citibank is issue 533. Sites that use that combination of front-end deployment tools will not work in Firefox 50 or lower, which worryingly may affect quite a few, and it seems to be due to the large front-end refactor that landed in Firefox 51. The symptom is usually an error that looks like "this is undefined" in the JavaScript console. Like the other two looming JavaScript issues we are starting to face, this requires a large amount of work and is likely to have substantial regressions if I can get it finished (let alone get it building) at all. I'm looking through the changes to see if any obviously affect the scope of this to see if the actual error that is reported can be worked around, but it may just be masking some bigger problem which requires much more surgery and, if so, would not be feasible to fix unfortunately either.


Mozilla VR Blog: Principles of Mixed Reality Permissions

Mozilla planet - Fri, 02/11/2018 - 15:57

Virtual and Augmented Reality (VR and AR) — known together as Mixed Reality (MR) — introduce a new dimension of physicality to current web security and privacy concerns. Problems that are already difficult on the 2D web, like permissions and clickjacking, become even more complex when users are immersed in a 3D experience. This is especially true on head-worn displays, where there is no analogous concept to the 2D “window,” and everything a user sees might be rendered by the web application. Compounding the difficulty of obtaining permission is the more intimate nature of the data collected by the additional sensors required to enable AR and VR experiences.

To enable immersive MR experiences, devices have sensors that not only capture information about the physical world around the user (far beyond the sensors common on mobile phones), but also capture personal details about the user and (possibly) bystanders. For example, these sensors could create detailed 3D maps of the physical world (either by using underlying platform capabilities like the ability to intersect 3D rays with a model of world around the user, or by direct camera access), infer biometric data like height and gait, and potentially find and recognize nearby faces in the numerous cameras typically present on these devices. The infrared sensor that detects when a head-mounted device is worn could eventually disclose more detailed biometrics like perspiration and pulse rate, and some devices already incorporate eye-tracking.

For each sensor, there are straightforward uses in MR applications. A model of the world allows devices to place content on surfaces, hide content under tables, or warn users if they’re about to walk into a wall in VR. A user’s height and gait are revealed by the precise 3D motion of their head and hands in the world, information that is essential for rendering content from the correct position in an immersive experience. Eye-tracking can support natural interaction and allow disabled people to navigate using just their eyes. Access to camera data allows applications to detect and track objects in the world, like equipment being repaired or props being used in a game.

Unfortunately, there are concerns associated with each sensor—a data leak involving users’ home data could violate their right against unreasonable search; height and gait can be used as unique personal identifiers; a malicious application could use biometric data like pupil tracking and perspiration to infer users’ political or sexual preferences or track the location of bystanders who have not given consent and may not even be aware they are being seen. This is particularly worrying when governments may have access to this data.

At Mozilla, our mission is empowering people on the internet. The web is an integral part of modern life, and individual security and privacy are fundamental rights. When there are potential negative consequences, browsers typically request consent. However, as we collect and pass more data over the internet, we’ve fallen behind on ensuring users give informed consent. This trend could have far-reaching impact on users as more and more of their interactions move onto MR devices.

Informed Consent

The idea of informed consent originates in medical ethics and the idea that individuals have the right to exercise control over critical aspects of their lives. The internet is now a fundamental piece of people’s lives and society in general, and at Mozilla we strongly believe that informed consent is a right on the internet as well. Unfortunately, providing informed consent for internet users suffers from similar issues as informed consent in medicine, where users may not understand what they are being told and may not be motivated to consider their choices in the moment. Most importantly, the immersive web must have a foundation of trust to start from.

Principles of Mixed Reality Permissions

Obtaining informed consent requires disclosure, comprehension, and voluntariness. In order to be informed, people must have all necessary information, presented in a way they understand; in this context, that includes the data being collected or transmitted and the risks of unauthorized disclosure. To be able to consent, a person must not only be able to understand the disclosed information, but also be able to make a decision free of coercion or manipulation.

Completely and accurately presenting the information required for informed consent is challenging. Permissions have already become too complex to easily communicate to users what data is gathered and the potential consequences of its use or misuse. For example, Pokémon Go uses access to the accelerometer and gyroscope in the phone to align the Pokémon with the player's orientation in the world and to determine if they might be driving (i.e. they shouldn't be playing the game). However, that same access can also be used by a bad actor to recover your password. These more subtle risks may be linked to more severe consequences.

Interactions between multiple sensors present an additional permissions challenge: what happens when we combine accelerometer data with biometric data and microphone access? What happens if we add camera access? Individually, these sensors have complex threats; taken together, it is difficult to convey the full breadth of possible risks without sounding hyperbolic.

Given the new challenges of the immersive web, we have an opportunity to rework how we approach permissions and consent to better empower people. While we don’t yet understand what to tell users, we propose four principles as the basis for approaching this problem: permissions should be progressive, accountable, comfortable, expressive (PACE).

Principles

Progressive

The idea of progressive web applications is well understood in the web community, referring to the design of websites that work on a variety of devices and take advantage of the capabilities of each, creating progressively more capable sites as the capabilities of the device better match their needs. In Mixed Reality, the capabilities of devices are much more varied, requiring more dramatic changes to sites that want to support as many people as possible. Beyond just device capabilities, the intimate (even invasive) nature of AR sensing means that users may not want to grant the full capabilities of their device to all websites.

To both support a diversity of devices and respect user privacy, browsers need to embrace the idea of progressive permissions—giving people better controls over permission granting—by providing context for sensor access and enhancing the capabilities granted to websites gradually. This principle is closely related to the concept of informed consent; by requesting dangerous permissions out of context, sites risk providing incomplete disclosure and impacting comprehensibility. For example, most applications and sites request all necessary permissions at install or startup, then persist those permissions indefinitely.

The idea of providing context for permissions is not new; some mobile apps and websites already present people with an explanation of the permissions they will request at startup, providing a description of why access is required. Users can then approve or deny each permission at this point. If the user later accesses a feature that requires a denied permission, the application can re-present the request.

Part of “progressivity” is responsibly collecting data only when needed and not persisting sensor use when it is not necessary. A person who has accepted microphone access to allow verbal input has not accepted unfettered microphone access for eavesdropping.

Therefore, progressive permissions should also be bidirectional, allowing users to turn permissions on and off repeatedly throughout the lifetime of a web app. In this example, a user might reasonably expect a site to use the microphone during input, and then stop using the sensor when input is complete—even if it still has permission to use it.

Also consider an application that requests camera access. At home, I grant it. At work, I open the application and it immediately uses the camera, compromising confidential information. We don’t want to keep prompting, but want the user to be aware of, and have control over, when sensor data is available to the application, changing permissions as they desire, depending on their preferences, context and needs (in contrast to current permissions, such as the camera permissions in the figure below). This principle is mutually reinforced by accountability.

[Figure: an example of current camera permission prompts]

Accountable

Accountability pertains to what happens after a permission is granted. All active or granted permissions should be easy to inspect and easy to change. We envision a simple-to-access user interface that lists:

  • current permissions
  • when each permission was approved/denied
  • data currently collected/monitored by the page
  • a toggle that allows easy switching between approval/denial of each permission (without requiring page reload)

Revocation should be straightforward, and only impact related features (revoking camera access should only affect features that require the camera, not prevent use of the entire site).

Additionally, when a website uses device resources, such as accessing files, there should be a method to hold the site accountable for resources accessed and/or modified. As browsers adopt new architectures to improve security through techniques like site isolation, identifying which pages are using which resources becomes easier, allowing browsers to report more accurate and granular usage data to users.

Examples of browsers continuing to execute JavaScript even after the browser is closed or the screen is turned off are troubling and violate accountability expectations. Some sensors, including motion and light sensors, aren’t protected by permissions and are exposed to JavaScript. These sensors also represent potential side channels for retrieving sensitive data and should be considered when designing accountability measures.

Comfortable

Users already report fatigue from excessive permissions requests. Embracing progressivity and accountability without taking this fatigue into account runs the risk of disrespecting users’ attention and increasing this fatigue. Therefore permissions must also be comfortable. When we talk about permissions being comfortable, we’re explicitly referring to this need to balance user control with reduced friction. Interrupting users’ tasks, asking for permission at the wrong times, and excessive permissions requests can lead people to “give up” and automatically accept permissions to “get on with it.”

As we increase the amount and variety of information being sensed, we should consider alternatives to simple permission dialogs. For example, in some cases, browsers could use implicit permissions based on user action (e.g., pressing a button to take a picture might implicitly give camera access). In 3D immersive MR, where the user is using a head-worn display, permission requests that are presented in the immersive environment should provide a comfortable UX that is easily identified as being presented by the browser (as opposed to the page). If requests are jarring or visually uncomfortable, users may not take their time and consider them, but quickly accept (or dismiss) them to return to the immersive experience. Over time, we hope the web community will develop a consistent design language for various permissions across multiple browsers and environments.

Approaches to comfort can build on the previous principles: implicitly granting one kind of permission can be balanced by maintaining accountability and visibility of what data the site has access to, and by providing a simple and obvious way to examine and modify permissions.

Expressive

Expressiveness relates to the browser handling different permissions for different sensors differently, instead of assuming one size fits all (i.e., presenting a similar sort of prompt for any capability that needs user permission). The current permissions approach divides sensors into two categories: dangerous (requiring a prompt) and not (generally accessible without additional user input). Unfortunately, interactions between “not-dangerous” sensors, like the accelerometer and the touch screen used for input, can leak data like passwords (by watching the motion of the device when the user types)[1]. In an immersive context, devices have considerably more powerful sensors, resulting in more complex and difficult to predict interactions.

A possible solution to more expressive permissions is permission bundling, grouping related permissions together. However, this risks violating user expectations and could result in a less progressive approach.

Entering immersive mode will automatically require activating certain sensors; for example, a basic VR application will use motion sensors for rendering and be given an estimate of where the floor is so it can avoid placing virtual objects below the floor; from these, an application will be able to infer your height. These sorts of secondary inferences are not always so obvious. Even in a small study of 5 users, three participants believed that the only data collected by their VR device was either data they provided when creating an account or basic usage data (such as how frequently they use an application). Only two participants were aware that the device sensors collected and transmitted much more data. The richer the application, the more likely one or more of the sensors involved will be transmitting data that can be used to uniquely identify individuals.

One of these three participants explicitly stated that their VR system, an Oculus Rift, could not collect audio data.

Looking Forward

Accurately and completely explaining the data that’s being collected and potential consequences is central to acquiring informed consent, but there’s a danger that permissions prompts will become opaque legal waivers. As we add more sensors to devices and collect more personal and environmental data, it’s tempting to simply add more permission prompts. However, permission fatigue is already a serious issue.

When possible, we should identify opportunities for implicit consent. For example, you don’t have to give permission every time you move or click a mouse on the 2D web. When we do require explicit consent, platforms should provide a comfortable and consistent user experience.

The goal of permissions should be to obtain informed consent. In addition to designing technical solutions, we need to educate the public about the types of data collected by devices and potential consequences. While this should be required for making informed choices about permissions, it’s not sufficient. We need to combine the three aspects of informed consent (disclosure, comprehension, voluntariness) with the four PACE principles (progressive, accountable, comfortable, expressive) to provide an immersive web experience that empowers people to take control of their privacy on the internet.

The strength of the web is the ability for people to casually and ephemerally browse pages and follow links while knowing that their browser makes this activity safe—this is the foundation of trust in the web. This foundation becomes even more important in the immersive web due to the potential new pathways for abuse of the rich, intimate data available from these devices.

Current events demonstrate the dangers of rampant data collection and misuse of personal data on the web; mixed reality devices, and the new kinds of data they generate present an opportunity to change the conversation about permissions and consent on the web.

We propose the PACE principles to encourage MR enthusiasts and privacy researchers to consider new approaches to data collection that will inform and empower users while respecting their time and energy. These solutions will not all be technical, but will likely include education, advocacy, and design leadership. As VR and AR devices enter the mainstream tech environment, we should proactively explore the viability of new directions, rather than waiting and reacting to the greater damage that might come from future data breaches and abuse.

  1. in this specific case, and for this reason, the devicemotion API has been deprecated in favor of a new sensor API ↩︎


QMO: Firefox 64 Beta 8 Testday, November 9th

Mozilla planet - Fri, 02/11/2018 - 14:41

Hello Mozillians,

We are happy to let you know that Friday, November 09th, we are organizing Firefox 64 Beta 8 Testday. We’ll be focusing our testing on:  Multi-Select Tabs and Removal of Live Bookmarks and Feed.

Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better!

See you on Friday!


Will Kahn-Greene: Socorro: October 2018 happenings

Mozilla planet - Fri, 02/11/2018 - 14:00
Summary

Socorro is the crash ingestion pipeline for Mozilla's products like Firefox. When Firefox crashes, the Breakpad crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the Breakpad crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the crash report, processes it, and provides an interface for aggregating, searching, and looking at crash reports.

October was a busy month! This blog post covers what happened.

Read more… (4 mins to read)


Mozilla GFX: WebRender newsletter #28

Mozilla planet - Fri, 02/11/2018 - 12:42

WebRender’s 28th newsletter is here, and as requested, today’s little story is about picture caching. It’s a complex topic so I had to simplify a lot here.

WebRender’s original strategy for rendering web pages was “we re-render everything each frame”. Quite a bold approach when all modern browsers’ graphics engines are built around the idea of first rendering web content into layers and then compositing these layers onto the screen. This compositing approach can be seen as a form of mandatory caching. It is driven by the observation that most websites are very static, and it optimizes for that, often at the expense of making it harder to handle dynamic content.

WebRender on the other hand almost took it as a mission statement to do the opposite and go back to a simpler rendering model that just renders everything directly, without a painting/compositing separation.

I do have to add a bit of nuance: as you scroll with WebRender today, we re-draw a rectangle for each glyph on screen each frame, but the rasterization of the glyphs has always been cached in a traditional text rendering fashion. Most other things were initially redrawn each frame.

Working this way has a few advantages. In Gecko, a lot of code goes into figuring out what should go into which layer and, within these layers, figuring out the minimal set of pixels that need to be re-drawn. More often than we’d like, figuring these things out takes longer than it would have taken to just re-draw everything (but of course you don’t know that until it’s too late!). Creating and destroying layers is very costly, which means that when heuristics fail to guess what will change, stuttering can happen and it looks bad.
So layerization is expensive, its heuristics are hard to maintain and overall it is hard to make it work well with very dynamic content where a lot of elements are animated and transition from static to animated (motion design type of things).

On the other hand WebRender spends no time figuring layerization out and the cost of changing a single thing is about the same as the cost of changing everything. Instead of spending time optimizing for static content assuming rendering is expensive, the overall strategy is to make rendering cheap and optimize for dynamic and static content alike.

This approach performs well and scales to very complex web pages, but there has to be a limit to how much work can be done each frame. And web developers have no limit, as shown by this absolutely insane reproduction of an oil painting in pure CSS. Most browsers will spend a ton of time painting this incredibly complex page into a layer and will let you scroll through it smoothly by just moving the layer around. WebRender on the other hand re-draws everything each frame as you scroll, and on most GPUs this is too much. Ouch.
In addition to that, even if WebRender is fast enough to render complex content every frame at 60fps, there are valuable power optimizations we could get from not redrawing some things continuously.

In short, a “picture” in WebRender is a collection of drawing commands and the scene is represented as a tree of pictures. The idea behind picture caching is to have a very simple cost model for rendering a picture (much much simpler than Gecko’s layerization heuristics) and opt into caching the rendered content of a picture when we think it is profitable (as opposed to always having to cache the content into a layer). Because we don’t have a separation between painting and compositing in WebRender, it isn’t very expensive to switch between cached and non-cached the way creating and destroying layers is expensive in Gecko today and we don’t need double or triple buffering as we do for layers which also means a lot less memory overhead.

I’m excited about this hybrid approach between traditional compositing and WebRender’s initial “re-draw everything” strategy. I think that it has the potential to offer the best of both worlds. Glenn is making good progress on this front and we are hoping to have the core of the picture caching infrastructure in place before WebRender hits the release population.

Notable WebRender and Gecko changes
  • Bobby improved GPU memory usage by better evicting standalone entries in the texture cache.
  • Kats fixed a bad interaction between the startup sanity check and performance telemetry.
  • Kats restored a specific dragging behavior for the scroll thumb on Windows.
  • Kvark improved the performance of solid line decorations by moving them to the opaque pass.
  • Matt improved text selection performance.
  • Emilio optimized inset clip paths into using complex clip regions instead of blob images.
  • Glenn removed primitive indices from chasing.
  • Glenn introduced a picture traversal pass.
  • Glenn introduced primitive clustering.
  • Glenn stored picture indices in instances.
  • Nical fixed a bug with srgb/linear color conversion in WebRender.
  • Nical fixed the new scene debug indicator.
  • Nical improved GPU memory usage by fixing a tiled blob image cache eviction issue.
  • Sotaro fixed a synchronization issue when deleting external GPU textures.
  • Sotaro fixed an intermittent crash in the IPC layer.
Ongoing work
  • Kvark is improving the clipping/scrolling APIs.
  • Matt is investigating D3D upload performance and options.
  • Gankro is further investigating text selection performance.
  • Gankro and Nical are working on blob image recoordination.
  • Doug is making progress on Document splitting.
  • Glenn is making progress on picture caching.
Enabling WebRender in Firefox Nightly
  • In about:config set “gfx.webrender.all” to true,
  • restart Firefox.

Mozilla Addons Blog: November’s Featured Extensions

Mozilla planet - Thu, 01/11/2018 - 22:29


Pick of the Month: Undo Close Tab

by Manuel Reimer
Access recently closed tabs by right-clicking the icon in your toolbar.

“The extension does exactly what is stated: it restores tabs, and not just the last one closed, but up to 25 recent tabs.”

Featured: Lilo

by Lilo
Help fund social and environmental causes by simply using Lilo search.

“Very positive. This engine allows us to help in a concrete way without any effort or money.”

Featured: SoundFixer

by unrelenting.technology
Adjust the gain and pan levels for almost any audio you encounter on the web.

“Wonderful – so many videos I like are so low I can’t hear them at all. This really, really works! Yay!”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post November’s Featured Extensions appeared first on Mozilla Add-ons Blog.

