Mozilla Nederland: the Dutch Mozilla community

Emma Irwin: Call for Feedback! Draft of Goal-Metrics for Diversity & Inclusion in Open Source (CHAOSS)

Mozilla planet - di, 19/06/2018 - 20:18

open source stars — opensource.com CC BY-SA 2.0

 

In the last few months, Mozilla has invested in collaboration with other open source project leaders and academics who care about improving diversity & inclusion in Open Source through the CHAOSS D&I working group. Contributors so far include:

Alexander Serebrenik (Eindhoven University of Technology), Akshita Gupta (Outreachy), Amy Marrich (OpenStack), Anita Sarma (Oregon State University), Bhagashree Uday (Fedora), Daniel Izquierdo (Bitergia), Emma Irwin (Mozilla), Georg Link (University of Nebraska at Omaha), Gina Helfrich (NumFOCUS), Nicole Huesman (Intel) and Sean Goggins (University of Missouri).

Our goals are, first, to establish a set of peer-validated goal-metrics for understanding diversity & inclusion in FOSS; second, to identify technology and research methodologies for understanding the success of our interventions in ways that keep the ethics of privacy and consent at the center; and finally, to document this work in ways that communities can reproduce the report for themselves.

For Mozilla this follows the recommendations coming out of our D&I research to create Metrics that Matter, and to work across Open Source with other projects trying to solve the same problems. I am very excited to share our first draft of goal-metrics for your feedback.

D&I Working Group — Initial set of Goal-Metrics

Demographics

Communication

Contribution

Events

Governance

Leadership

Project Places

Recognition

Ethics

Please note that we know these are incomplete; we know there are likely existing resources that can improve, or even disprove, some of these — and that is the point of this blog post! Please review and provide feedback via a GitHub issue or pull request, by reaching out to someone in the working group, or by joining our working group call (next one July 20th, 9am PST) — you can find the video link here.

You can find one or more of us at the following events as well:


Categorieën: Mozilla-nl planet

Mozilla VR Blog: Introducing A-Terrain - a cartography component for A-Frame

Mozilla planet - di, 19/06/2018 - 20:00
Introducing A-Terrain - a cartography component for A-Frame

Have you ever wanted to make a small web app to share your favorite places with your friends? For example your favorite photographs attached to a hike, or just a view of your favorite peak, or your favorite places downtown, or a suggested itinerary for friends visiting?

Right now it is difficult to easily incorporate third-party map data into your own projects. Creating 3d games or VR experiences with real-world maps requires access to proprietary software or closed data ecosystems. To do it from scratch requires pulling data from multiple sources, such as image servers and elevation servers, and it requires substantial math expertise. Often you will also want to stylize the rendering to suit your own specific use cases: you may want a Tron-like video game aesthetic for your project, and yet the building geometry you're forced to work with doesn't allow you to change colors. While there are many map providers, such as Apple, Google Maps and the like, and there are many viewers, most of these tools are specialized around showing high-fidelity maps that are as true to reality as possible. What's missing is a middle ground, where you can take map data and easily put it in your own projects, creating your own mash-ups.

We see A-Terrain as a starting point or demo for how the web can be different. With this component you can build whatever 3D experience you want and use real world data.

We’ve arranged for Cesium ion (see http://cesium.com) to make the data set available for free for people to try out. Currently the dataset includes world elevation, satellite images and 3d buildings for San Francisco.

For example here is a stylized view of San Francisco as viewed from ground level on the Embarcadero:

[Screenshot: a stylized ground-level view of San Francisco on the Embarcadero]

You can try this example yourself in your browser here (use AWSD or arrow keys to move around):

https://anselm.github.io/aterrain/examples/helloworld/tile.html .

This component can also be used as a quick and dirty globe renderer (although if you're really interested in that specific use case then Cesium itself may be more suitable):

[Screenshot: A-Terrain used as a globe renderer]

I have added some rudimentary navigation controls using hash arguments on the URL. For example here is a piece of Mt Whitney:

https://anselm.github.io/aterrain/examples/place/index.html#lat=36.57850&lon=-118.29226&elev=1000

[Screenshot: the terrain around Mt Whitney]

The real strength of a tool like this is composability — to be able to mix different components together. For example here is A-Terrain and Mozilla Hubs being used for a collaborative hiking trip planning scenario to the Grand Canyon:

[Screenshot: A-Terrain combined with Mozilla Hubs for collaborative Grand Canyon trip planning]

Here is the URL for the above. This will take you to a random room ID - share that room ID with your friends to join the same room:




As another example of lightweight composability, I placed a tiny duck on the earth’s surface above Oregon. This is just a few lines of scripting:

[Screenshot: a tiny duck placed on the earth’s surface above Oregon]

This example can be visited here:

https://anselm.github.io/aterrain/examples/helloworld/duck.html

To accomplish all this we leverage A-Frame — a browser-based framework that lets users build 3d environments easily. The A-Frame philosophy is to take complicated behaviors and wrap them up in HTML tags. If you can write ordinary HTML, you can build 3d environments.
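To give a flavour of what that looks like, here is a minimal, self-contained A-Frame scene using only the stock A-Frame primitives. This is deliberately generic rather than A-Terrain-specific markup (the A-Terrain component is simply another tag you would drop into a scene like this), and the release URL below is just an example version:

<html>
  <head>
    <!-- Load A-Frame itself; the version here is only an example -->
    <script src="https://aframe.io/releases/0.8.0/aframe.min.js"></script>
  </head>
  <body>
    <!-- Every tag is an entity; attributes configure its components -->
    <a-scene>
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>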

A-Frame is part of a Mozilla initiative to foster the open web — to raise the bar on what people can create on the web. Using A-Frame anybody can make 3d, virtual or augmented reality experiences on the web. These experiences can be shared instantly with anybody else in the world — running in the browser, on mobile phones, tablets and high-end head-mounted displays such as the Oculus Rift and the HTC Vive. You don’t need to buy a 3d authoring tool, you don’t need to ask somebody else’s permission to publish your app, you don’t publish your apps through an app store, and you don’t need a special viewer to view the experience — it just runs, like any ordinary web page.

I want to mention just a few of the folks who’ve helped bring this to this point — this includes Lars Bergstrom at Mozilla, Patrick Cozzi at Cesium, Shehzan especially (who was tireless in answering my dumb questions about coordinate re-projections), Blair MacIntyre (who had the initial idea) and Joshua Marinacci (who has been suggesting improvements and acting as a sounding board as well as testing this work).

The source code for this project is here:

https://github.com/anselm/aterrain

We’re all especially interested in seeing what kinds of experiences people build, and what directions this goes in. I’m especially interested in seeing AR use cases that combine this component with Augmented Reality frameworks such as recent Mozilla initiatives here: https://www.roadtovr.com/mozilla-launches-ios-app-experiment-webar/. Please keep us posted on your work!

Categorieën: Mozilla-nl planet

Dave Townsend: Taming Phabricator

Mozilla planet - di, 19/06/2018 - 19:31

So Mozilla is going all-in on Phabricator and Differential as a code review tool. I have mixed feelings on this, not least because its support for patch series is more manual than I’d like. But since this is the choice Mozilla has made, I might as well start to get used to it. One of the first things you see when you log into Phabricator is a default view full of information.

A screenshot of Phabricator's default view

It’s a little overwhelming for my tastes. The Recent Activity section in particular is more than I need; it seems to list anything anyone has done with Phabricator recently. Sorry Ted, but I don’t care about that review comment you posted. Likewise, the Active Reviews section seems very full when it is barely listing any reviews.

But here’s the good news. Phabricator lets you create your own dashboards to use as your default view. It’s a bit tricky to figure out so here is a quick crash course.

Click on Dashboards in the left menu. Click on Create Dashboard in the top right, make your choices, then hit Continue. I recommend starting with an empty Dashboard so you can just add what you want to it. Everything on the next screen can be modified later, but you probably want to make your dashboard only visible to you. Once created, click “Install Dashboard” at the top right and it will be added to the menu on the left and become the default screen when you load Phabricator.

Now you have to add searches to your dashboard. Go to Differential’s advanced search. Fill out the form to search for what you want. A quick example. Set “Reviewers” to “Current Viewer”, “Statuses” to “Needs Review”, then click Search. You should see any revisions waiting on you to review them. Tinker with the search settings and search all you like. Once you’re happy click “Use Results” and “Add to Dashboard”. Give your search a name and select your dashboard. Now your dashboard will display your search whenever loaded. Add as many searches as you like!

Here is my very simple dashboard that lists anything I have to review, revisions I am currently working on and an archive of closed work:

A Phabricator dashboard

Like it? I made it public and you can see it and install it to use yourself if you like!

Categorieën: Mozilla-nl planet

David Humphrey: Building Large Code on Travis CI

Mozilla planet - di, 19/06/2018 - 16:45

This week I was doing an experiment to see if I could automate a build step in a project I'm working on, which requires binary resources to be included in a web app.

I'm building a custom Linux kernel and bundling it with a root filesystem in order to embed it in the browser. To do this, I'm using a dockerized Buildroot build environment (I'll write about the details of this in a follow-up post). On my various computers, this takes anywhere from 15-25 minutes. Since my buildroot/kernel configs won't change very often, I wondered if I could move this to Travis and automate it away from our workflow?

Travis has no problem using docker, and as long as you can fit your build into the allotted 50 minute build timeout window, it should work. Let's do this!

First attempt

In the simplest case, doing a build like this would be as simple as:

sudo: required
services:
  - docker
...
before_script:
  - docker build -t buildroot .
  - docker run --rm -v $PWD/build:/build buildroot
...
deploy:
  # Deploy built binaries in /build along with other assets

This happily builds my docker buildroot image, and then starts the build within the container, logging everything as it goes. But once the log gets to 10,000 lines in length, Travis won't produce more output. You can still download the Raw Log as a file, so I wait a bit and then periodically download a snapshot of the log in order to check on the build's progress.

At a certain point the build is terminated: once the log file grows to 4M, Travis assumes that all that output is noise (for example, a command running in an infinite loop) and terminates the build with an error.

Second attempt

It's clear that I need to reduce the output of my build. This time I redirect build output to a log file, and then tell Travis to dump the tail-end of the log file in the case of a failed build. The after_failure and after_success build stage hooks are perfect for this:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
after_failure:
  # dump the last 2000 lines of our build, and hope the error is in that!
  - tail --lines=2000 build.log
after_success:
  # Log that the build worked, because we all need some good news
  - echo "Buildroot build succeeded, binary in ./build"

I'm pretty proud of this, until it fails after 10 minutes of building: Travis assumes that the lack of log messages (which are all going to my build.log file) means my build has stalled and should be terminated. It turns out you must produce console output every 10 minutes to keep Travis builds alive.

Third attempt

Not only is this a common problem, Travis has a built-in solution in the form of travis_wait. Essentially, you can prefix your build command with travis_wait and it will tolerate there being no output for 20 minutes. Need more than 20? You can optionally pass it the number of minutes to wait before timing out. Let's try 30 minutes:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - travis_wait 30 docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1

This builds perfectly...for 10 minutes. Then it dies with a timeout due to there being no console output. Some more research reveals that travis_wait doesn't play nicely with processes that fork or exec.

Fourth attempt

Lots of people suggest variations on the same theme: run a command that spins and periodically prints something to stdout, and have it fork your build process:

before_script:
  - docker build -t buildroot . > build.log 2>&1
  - while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
  - time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
  # Killing background sleep loop
  - kill %1

Here we log something at 5 minute intervals, while the build progresses in the background. When it's done, we kill the while loop. This works perfectly... until it hits the 50 minute barrier and gets killed by Travis:

$ docker build -t buildroot . > build.log 2>&1
before_script
$ while sleep 5m; do echo "=====[ $SECONDS seconds, buildroot still building... ]====="; done &
$ time docker run --rm -v $PWD/build:/build buildroot >> build.log 2>&1
=====[ 495 seconds, buildroot still building... ]=====
=====[ 795 seconds, buildroot still building... ]=====
=====[ 1095 seconds, buildroot still building... ]=====
=====[ 1395 seconds, buildroot still building... ]=====
=====[ 1695 seconds, buildroot still building... ]=====
=====[ 1995 seconds, buildroot still building... ]=====
=====[ 2295 seconds, buildroot still building... ]=====
=====[ 2595 seconds, buildroot still building... ]=====
=====[ 2895 seconds, buildroot still building... ]=====
The job exceeded the maximum time limit for jobs, and has been terminated.

The build took over 48 minutes on the Travis builder, and combined with the time I'd already spent cloning, installing, etc. there isn't enough time to do what I'd hoped.

Part of me wonders whether I could hack something together that uses successive builds and Travis caches, and moves the build artifacts out of docker, such that I can do incremental builds and leverage ccache and the like. I'm sure someone has done it, and it's in a .travis.yml file on GitHub somewhere already. I leave this as an experiment for the reader.
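If you want a starting point for that experiment, the caching half might look something like the sketch below. The cache directory name and the CCACHE_DIR wiring are assumptions about how ccache could be plumbed into the Buildroot container, not something I've verified:

cache:
  directories:
    # Persist a ccache directory between Travis builds (path is illustrative)
    - $HOME/buildroot-ccache

before_script:
  - docker build -t buildroot . > build.log 2>&1
  # Mount the cached directory into the container and point ccache at it
  - docker run --rm -v $PWD/build:/build -v $HOME/buildroot-ccache:/ccache -e CCACHE_DIR=/ccache buildroot >> build.log 2>&1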

I've got nothing but love for Travis and the incredible free service they offer open source projects. Every time I concoct some new use case, I find that they've added it or supported it all along. The Travis docs are incredible, and well worth your time if you want to push the service in interesting directions.

In this case I've hit a wall and will go another way. But I learned a bunch and in case it will help someone else, I leave it here for your CI needs.

Categorieën: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #20

Mozilla planet - di, 19/06/2018 - 11:47

Newsletter number twenty is here, delayed again by a combination of days off and the bi-annual Mozilla AllHands which took place last week in San Francisco.
A big highlight on the WebRender side is that the work on porting all primitives to the brush system is approaching completion. Individually, porting each primitive doesn’t sound like much, but all of the pieces are coming together:

  • Most complex primitives can be segmented, moving a lot of pixels to the opaque pass and using intermediate targets to render the tricky parts.
  • The majority of the alpha pass is now using the brush image shader, which greatly improves batching. We generate about 2 to 3 times fewer draw calls on average now than a month ago.

This translates into noticeable performance improvements on a lot of very complex pages. The most outstanding remaining performance issues are now caused by the CPU fallback, which we are working on moving off of the critical path, so things are looking very promising, especially with the mountain of other performance improvements we are currently holding off on to concentrate on correctness.

Speaking of fixing correctness issues, as usual we can see from the lists below that there is also a lot of progress in this area.

Notable WebRender changes
  • Kvark fixed an issue with invalid local rect indices.
  • Hugh Gallagher merged the ResourceUpdates and Transaction types.
  • Lee implemented rounding off sub-pixel offsets coming from scroll origins.
  • Kvark fixed a bug in the tracking of image dirty rects.
  • Glenn ported inset/outset border styles to brush shaders.
  • Kvark fixed an issue with document placement and scissoring.
  • Kvark added a way to track display items when debugging.
  • Glenn ported double, groove and ridge borders to the brush shader system.
  • Patrick fixed a crash with pathfinder.
  • Martin improved the API for adding reference frames.
  • Kats fixed a crash with blob images.
  • Gankro fixed a cache collision bug.
  • Glenn ported dots and dashes to the brush shader infrastructure.
  • Glenn fixed the invalidation of drop shadows when the local rect is animating.
  • Glenn removed empty batches to avoid empty draw calls and batch breaks.
  • Kats moved building the hit-test tree to the scene building phase.
  • Martin improved the clipping documentation.
  • Glenn removed the transform variant of the text shader.
  • Lee fixed support for arbitrarily large font sizes.
  • Glenn fixed box shadows when the inner mask has invalid size.
  • Glenn fixed a crash with zero-sized borders.
  • Nical added a debug indicator showing when rendering happens.
Notable Gecko changes
  • Sotaro enabled DirectComposition to present the window.
  • Sotaro fixed an issue with device resets on Windows.
  • Kats enabled a lot of tests and benchmarks on the CI (spread over many bugzilla entries).
  • Kats improved a synchronization mechanism between the content process and WebRender.
  • Sotaro implemented a shader cache to improve startup times.
  • Kats improved hit-testing correctness with respect to touch action.
  • Kats fixed a bug related to position-sticky.
  • Jeff fixed the invalidation of blob images with changing transforms.
  • Lee fixed an issue with very large text.
  • Sotaro improved the way we reset the EGL state.
  • Kats fixed some async scene building bugs.
  • Kats avoided a crash with iframes referring to missing pipelines.
  • Kats prevented blob images from allocating huge surfaces.
  • Kats fixed a shutdown issue.
  • Jeff fixed some issue with fractional transforms and blob images.
  • Bob Owen improved the way memory is allocated when recording blob images.
  • Kats fixed a race condition with async image pipelines and display list flattening.
  • Kats fixed an APZ issue that affected youtube.
  • Kats fixed an issue causing delayed canvas updates under certain conditions.
  • Kats fixed some hit testing issues.
  • Sotaro fixed a startup issue when WebRender initialization fails.
  • Markus removed a needless blob image fallback caused by invisible outlines which was causing performance issues.
  • Kats fixed a crash.
Enabling WebRender in Firefox Nightly

In about:config, just set “gfx.webrender.all” to true and restart the browser. No need to toggle any other pref.
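If you prefer to keep your settings in a user.js file in your profile directory, the same pref can be set there (this is just the standard user.js syntax, nothing WebRender-specific):

// Same effect as flipping gfx.webrender.all in about:config
user_pref("gfx.webrender.all", true);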

Reporting bugs

The best place to report bugs related to WebRender in Gecko is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 239

Mozilla planet - di, 19/06/2018 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is SIMDNoise, a crate to use modern CPU vector instructions to generate various types of noise really fast. Thanks to gregwtmtno for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

66 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events (Online, Africa, Europe, North America)

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In Rust it’s the compiler that complains, with C++ it’s the colleagues

– Michal 'Vorner' Vaner on gitter

(selected by llogiq per one unanimous vote)

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42 and llogiq.

Categorieën: Mozilla-nl planet

Nicholas Nethercote: San Francisco Oxidation meeting notes

Mozilla planet - di, 19/06/2018 - 02:08

At last week’s Mozilla All Hands meeting in San Francisco we had an Oxidation meeting about the use of Rust in Firefox. It was low-key, being mostly about status and progress. The notes are here for those who are interested.

Categorieën: Mozilla-nl planet

Firefox UX: Written by Amy Lee & Eric Pang

Mozilla planet - ma, 18/06/2018 - 22:51

Written by Amy Lee & Eric Pang

Firefox has a motion team?! Yes we do!

Motion may sometimes feel like an afterthought or, worse yet, “polish”. For the release of Firefox Quantum (one of our most significant releases to date), we wanted to ensure that motion was not a second-class citizen and that it would play an important role in how users perceived performance in the browser.

We (Amy & Eric) make up the UX side of the “motion team” for Firefox. We say this in air quotes because the motion team was essentially formed based on our shared belief that motion design is important in Firefox. With a major release planned, we thought this would be the perfect opportunity to have a team working on motion.

Step 1: Make a Sticker

We made a sticker and started calling ourselves the motion team.

<figcaption>We created a sticker to formalize the “motion team”.</figcaption>Step 2: Audit Existing Motions

The next plan of action was to audit the existing motions in Firefox across mobile and desktop. We documented them by taking screen recordings and highlighted the areas that needed the most attention.

<figcaption>An audit of current animations was performed and documented for Firefox on Android, iOS, and desktop to explore areas of opportunity for motion.</figcaption>

From this exercise it was clear that consistency and perceived performance were high on our list of improvements.

The next step was to gather inspiration for a mood board. From there, we formed a story that would become the foundation of our motion design language.

During this process, we asked ourselves:

How can we make the browser feel smoother, faster and more responsive?

<figcaption>Inspiration was collected and documented in a mood board to help guide the motion story.</figcaption>Step 3: Defining a Motion Story

With Photon (Firefox’s new design language) stemming from Quantum, we knew there was going to be an emphasis on speed in our story. Before starting work on any new motions, we created a motion curve to reflect this. The aim was to have a curve that would be perceived as fast yet still felt smooth and natural when applied to tabs and menu items. Motion should also be informative (i.e. showing where your bookmarked item is saved, or when your tab is done loading) and, lastly, have personality. We defined our story based on these considerations.

<figcaption>Motion curve that was applied to menu and tab animations.</figcaption>

The motion story was presented to the rest of the UX team during a work week held in Toronto (the UX team is distributed across several countries so work weeks are planned for in-person collaboration).

This was our mission statement:

The motion design language in Firefox Quantum is defined by three principles: Quick, Informative and Whimsical. Following these guiding principles, we aim to achieve a cohesive, consistent, and enjoyable experience within the family of Firefox browsers.

Next we presented some preliminary concepts to support these principles:

Quick

Animations should be fast and nimble and never keep the user waiting longer than they need to. The aim is to prioritize user perceived performance over technical benchmarks.

<figcaption>Panel and new tab animation.</figcaption>Informative

Motion should help ease the user through the experience. It should aid the flow of actions, giving clear guidance for user orientation: spatial or temporal.

<figcaption>Left: Download icon animation indicates download progress. Right: Star icon animation shows the action of saving a bookmark and the location of the bookmark after it’s saved (the library).</figcaption>Whimsical

Even though most people would not associate whimsy with a browser, we wanted to incorporate some playful elements as part of Firefox’s personality (and maybe ourselves).

<figcaption>Icon animations with some whimsy.</figcaption>

After getting feedback and buy-in from the rest of the UX team on the motion story, we were able to start working with them in applying motion to their designs.

Step 4: Design Motions

The Photon project was divided across various functional teams all focusing on different design aspects of Firefox. With motion overlapping many of these teams we started opening communication channels with each that would directly impact our work. We worked especially close with the visual/interaction team since it didn’t make sense to start motion design on components that were not yet close to complete. We had regular check-ins to set a rough ordering/priority of when we would schedule motion work of specific components.

Once we had near final visuals/interactions, it was time to get into After Effects and start animating!

Step 5: Implementation

Implementing animations, especially detailed ones such as bookmarking and downloading, was an interesting challenge for the development team (Jared Wein, Sam Foster, and Jim Porter).

Rather than have a developer try to reproduce our motion designs through code, which can become tedious, we wanted to be able to export the motion directly. This ensured that the nuances of the motion were not lost during implementation.

To have the animations performant in the browser, the file sizes also needed to be small. This was done by using SVG assets and CSS animations.

We explored existing tools but did not find anything suitable that would be compatible with the browser. We created our own process and designed the animations in After Effects and used the Bodymovin extension to export them as JSON files.

One developer in particular, Markus Stange, made this method possible by writing a tool to convert JSON files into SVG sprite sheets. Sam further refined the tool and it became an essential asset in translating timeline animations from After Effects into CSS animations.

<figcaption>Page reload icon animation using a SVG sprite sheet.</figcaption>

After rounds of asset production, refinements, reviews, and testing, the big day came with the launch of Firefox Quantum!

<figcaption>A large part of the Firefox Quantum team was located in Toronto, Ontario.</figcaption>

We were a bit anxious awaiting feedback since this was the most significant update since Firefox 1.0 first launched in 2004.

Thankfully the release was met with positive reviews. We were happy that even some of our motions got a kudos.

So that wraps up the story of how the “motion team” for Firefox came to be. But that’s not all we do here at Mozilla. These days you’ll also find Eric busy working away at UX for Web Payments, Privacy and Search, while Amy casts her Dark theme magic and spins out the next UX iteration of Activity Stream and Onboarding.

If you haven’t given Firefox Quantum a try, take it for a spin and let us know what you think.

Written by Amy Lee & Eric Pang was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Air Mozilla: Creative Media Awards Webinar

Mozilla planet - ma, 18/06/2018 - 22:00

Creative Media Awards Webinar

This is an informational webinar for a global public audience interested in learning more about Mozilla's Creative Media Awards track. This event is being streamed...

Categorieën: Mozilla-nl planet

About:Community: Firefox 61 new contributors

Mozilla planet - ma, 18/06/2018 - 18:41

With the upcoming release of Firefox 61, we are pleased to welcome the 59 developers who contributed their first code change to Firefox in this release, 53 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Categorieën: Mozilla-nl planet

QMO: Firefox 61 Beta 14 Testday Results

Mozilla planet - ma, 18/06/2018 - 13:59

Hello Mozillians!

As you may already know, last Friday – June 15th – we held a new Testday event, for Firefox 61 Beta 14.

Thank you all for helping us make Mozilla a better place!

From India team: Aishwarya Narasimhan, Mohamed Bawas, Surentharan, amirthavenkat, Monisha Ravi

Results:

– several test cases executed for Fluent Migration of Preferences, Accessibility Inspector: Developer Tools and Web Compatibility.

Thanks for another successful testday!

Categorieën: Mozilla-nl planet

Tarek Ziadé: IOActivityMonitor in Gecko

Mozilla planet - ma, 18/06/2018 - 00:00

This is a first blog post of a series on Gecko, since I am doing a lot of C++ work in Firefox these days. My current focus is on adding tools in Firefox to try to detect what's going on when something goes rogue in the browser and starts to drain your battery life.

We have many ideas on how to do this at the developer/user level, but in order to do it properly, we need to have accurate ways to measure what's going on when the browser runs.

One thing is I/O activity.

For instance, a WebExtension worker that performs a lot of disk writes is something we want to find out about, and we had nothing to track all I/O activities in Firefox, without running the profiler.

When Firefox OS was developed, a small feature was added in the Gecko network lib, called NetworkActivityMonitor.

That class was hooked as an NSPR layer to send notifications whenever something was sent or received on a socket, and was used to blink the small icon phones usually have to signal that something is being transferred.

After the Firefox OS project was discontinued in Gecko, that class was left in the Gecko tree but not used anymore, even though the option was still there.

Since I needed a way to track all I/O activity (sockets and files), I have refactored that class into a generalised version that can be used to get notified every time data is sent or received in any file or socket.

The way it works is pretty simple: when a file or a socket is created, a new NSPR layer is added so every read or write is recorded and eventually dumped into an XPCOM array that is notified via a timer.

This design makes it possible to track, alongside sockets, any disk file that is accessed by Firefox. For SQLite databases, since there's no way to get all FD handles (these are kept internal to the sqlite lib), the IOActivityMonitor class provides manual methods to notify when a read or a write happens. And our custom SQLite wrapper in Firefox allowed me to add calls like I would do in NSPR.

It’s landed in Nightly:

And you can see how to use it in its Mochitest
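For the curious, here is a rough, hypothetical sketch of what consuming these notifications from privileged JavaScript might look like. The pref name and observer topic below are assumptions on my part rather than the real identifiers, so check the Mochitest above for the actual API:

// Hypothetical sketch only -- the pref name and observer topic are
// assumptions; the Mochitest shows the real names.
// (In a mochitest, `Services` is already available in scope.)
Services.prefs.setBoolPref("io.activity.enabled", true);

Services.obs.addObserver(function observe(subject, topic, data) {
  // `subject` is expected to carry the per-socket / per-file byte counts
  // accumulated since the last timer tick.
  info("got an I/O activity notification");
}, "io-activity");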

Categorieën: Mozilla-nl planet

Cameron Kaiser: TenFourFox FPR8b1 available

Mozilla planet - zo, 17/06/2018 - 00:49
TenFourFox Feature Parity Release 8 beta 1 is now available (downloads, release notes, hashes). There is much less in this release than I wanted because of a family member in the hospital and several technical roadblocks. Of note, I've officially abandoned CSS grid again after an extensive testing period due to the fact that we would need substantial work to get a functional implementation, and a partially functional implementation is worse than none at all (in the latter case, we simply gracefully degrade into block-level <div>s). I also was not able to finish the HTML <input> date picker implementation, though I've managed to still get a fair amount completed of it, and I'll keep working on that for FPR9. The good news is, once the date picker is done, the time picker will use nearly exactly the same internal plumbing and can just be patterned off it in the same way. Unlike Firefox's implementation, as I've previously mentioned our version uses native OS X controls instead of XUL, which also makes it faster. That said, it is a ghastly hack on the Cocoa widget side and required some tricky programming on 10.4 which will be the subject of a later blog post.

That's not to say this is strictly a security patch release (though most of the patches for the final Firefox 52, 52.9, are in this beta). The big feature I did want to get in FPR8 did land and seems to work properly, which is same-site cookie support. Same-site cookie support helps to reduce cross-site request forgeries by advising the browser the cookie in question should only be sent if a request originates from the site that set it. If the host that triggered the request is different than the one appearing in the address bar, the request won't include any of the cookies that are tagged as same-site. For example, say you're logged into your bank, and somehow you end up opening another tab with a malicious site that knows how to manipulate your bank's transfer money function by automatically submitting a hidden POST form. Since you're logged into your bank, unless your bank has CSRF mitigations (and it had better!), the malicious site could impersonate you since the browser will faithfully send your login cookie along with the form. The credential never leaked, so the browser technically didn't malfunction, but the malicious site was still able to impersonate you and steal your money. With same-site cookies, there is a list of declared "safe" operations; POST forms and certain other functions are not on that list and are considered "unsafe." Since the unsafe action didn't originate from the site that set the cookie, the cookie isn't transmitted to your bank, authentication fails and the attack is foiled. If the mode is set to "strict" (as opposed to "lax"), even a "safe" action like clicking a link from an outside site won't send the cookie.
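Sites opt into this behavior with the SameSite attribute on the Set-Cookie header. For example (the cookie names and values here are purely illustrative):

Set-Cookie: sessionid=38afes7a8; SameSite=Lax; Secure; HttpOnly
Set-Cookie: bank_auth=91f3a2c4; SameSite=Strict; Secure; HttpOnly

A Lax cookie is still sent on ordinary top-level navigations like clicking a link, while a Strict cookie is only sent when the request originates from the site that set it.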

Same-site cookie support was implemented for Firefox 60; our implementation is based on it and should support all the same features. When you start FPR8b1, your cookies database will be transparently upgraded to the new database schema. If you are currently logged into a site that supports same-site cookies, or you are using a foxbox that preserves cookie data, you will need to log out and log back in to ensure your login cookie is upgraded (I just deleted all my cookies and started fresh, which is good to give the web trackers a heart attack anyway). Github and Bugzilla already have support, and I expect to see banks and other high-security sites follow suit. To see if a cookie on a site is same-site, make sure the Storage Inspector is enabled in Developer tools, then go to the Storage tab in the Developer tools on the site of interest and look at the Cookies database. The same-site mode (unset, lax or strict) should be shown as the final column.

FPR8 goes live on June 25th.

Categorieën: Mozilla-nl planet

Dustin J. Mitchell: Actions as Hooks

Mozilla planet - vr, 15/06/2018 - 17:00

You may already be familiar with in-tree actions: they allow you to do things like retrigger, backfill, and cancel Firefox-related tasks. They implement any “action” on a push that occurs after the initial hg push operation.

This article goes into a bit of detail about how this works, and a major change we’re making to that implementation.

History

Until very recently, actions worked like this: First, the decision task (the task that runs in response to a push and decides what builds, tests, etc. to run) creates an artifact called actions.json. This artifact contains the list of supported actions and some templates for tasks to implement those actions. When you click an action button (in Treeherder or the Taskcluster tools, or any UI implementing the actions spec), code running in the browser renders that template and uses it to create a task, using your Taskcluster credentials.

I talk a lot about functionality being in-tree. Actions are yet another example. Actions are defined in-tree, using some pretty straightforward Python code. That means any engineer who wants to change or add an action can do so – no need to ask permission, no need to rely on another engineer’s attention (aside from review, of course).

There’s Always a Catch: Security

Since the beginning, Taskcluster has operated on a fairly simple model: if you can accomplish something by pushing to a repository, then you can accomplish the same directly. At Mozilla, the core source-code security model is the SCM level: try-like repositories are at level 1, project (twig) repositories at level 2, and release-train repositories (autoland, central, beta, etc.) are at level 3. Similarly, LDAP users may have permission to push to level 1, 2, or 3 repositories. The current configuration of Taskcluster assigns the same scopes to users at a particular level as it does to repositories.

If you have such permission, check out your scopes in the Taskcluster credentials tool (after signing in). You’ll see a lot of scopes there.

The Release Engineering team has made release promotion an action. This is not something that every user who can push to a level-3 repository – hundreds of people – should be able to do! Since it involves signing releases, this means that every user who can push to a level-3 repository has scopes involved in signing a Firefox release. It’s not quite as bad as it seems: there are lots of additional safeguards in place, not least of which is the “Chain of Trust” that cryptographically verifies the origin of artifacts before signing.

All the same, this is something we (and the Firefox operations security team) would like to fix.

In the new model, users will not have the same scopes as the repositories they can push to. Instead, they will have scopes to trigger specific actions on task-graphs at specific levels. Some of those scopes will be available to everyone at that level, while others will be available only to more limited groups. For example, release promotion would be available to the Release Management team.

Hooks

This makes actions a kind of privilege escalation: something a particular user can cause to occur, but could not do themselves. The Taskcluster-Hooks service provides just this sort of functionality: a hook creates a task using scopes assigned by a role, without requiring the user calling triggerHook to have those scopes. The user must merely have the appropriate hooks:trigger-hook:.. scope.

So, we have added a “hook” kind to the action spec. The difference from the original “task” kind is that actions.json specifies a hook to execute, along with well-defined inputs to that hook. The user invoking the action must have the hooks:trigger-hook:.. scope for the indicated hook. We have also included some protection against clickjacking, preventing someone with permission to execute a hook from being tricked into executing one maliciously.
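To make that concrete, an entry in actions.json for a hook-kind action has roughly the following shape. The field names and values below are illustrative rather than quoted from the spec, so consult the in-tree actions documentation for the authoritative schema:

{
  "version": 1,
  "actions": [
    {
      "kind": "hook",
      "name": "retrigger",
      "title": "Retrigger",
      "description": "Create a new copy of this task and run it again.",
      "hookGroupId": "project-gecko",
      "hookId": "in-tree-action-3-generic/<hash of .taskcluster.yml>",
      "hookPayload": {
        "decision": {},
        "user": {}
      }
    }
  ]
}

When a UI like Treeherder invokes the action, it calls triggerHook on the indicated hook with the rendered payload, and the hooks service creates the task with the scopes assigned to that hook’s role.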

Generic Hooks

There are three things we may wish to vary for an action:

  • who can invoke the action;
  • the scopes with which the action executes; and
  • the allowable inputs to the action.

Most of these are configured within the hooks service (using automation, of course). If every action is configured uniquely within the hooks service, then the self-service nature of actions would be lost: any new action would require assistance from someone with permission to modify hooks.

As a compromise, we noted that most actions should be available to everyone who can push to the corresponding repo, have fairly limited scopes, and need not limit their inputs. We call these “generic” actions, and creating a new such action is self-serve. All other actions require some kind of external configuration: allocating the scope to trigger the task, assigning additional scopes to the hook, or declaring an input schema for the hook.

Hook Configuration

The hook definition for an action hook is quite complex: it involves a complex task definition template as well as a large schema for the input to triggerHook. For decision tasks, cron tasks, and “old” actions, this is defined in .taskcluster.yml, and we wanted to continue that with hook-based actions. But this creates a potential issue: if a push changes .taskcluster.yml, that push will not automatically update the hooks – such an update requires elevated privileges and must be done by someone who can sanity-check the operation. To solve this, ci-admin creates hooks based on the .taskcluster.yml it finds in any Firefox repository, naming each after a hash of the file’s content. Thus, once a change is introduced, it can “ride the trains”, using the same hash in each repository.

Implementation and Implications

As of this writing, two common actions are operating as hooks: retrigger and backfill. Both are “generic” actions, so the next step is to start to implement some actions that are not generic. Ideally, nobody notices anything here: it is merely an implementation change.

Once all actions have been converted to hooks, we will begin removing scopes from users. This will have a more significant impact: lots of activities such as manually creating tasks (including edit-and-create) will no longer be allowed. We will try to balance the security issues against user convenience here. Some common activities may be implemented as actions (such as creating loaners). Others may be allowed as exceptions (for example, creating test tasks). But some existing workflows may need to change to accommodate this improvement.

We hope to finish the conversion process in July 2018, with that time largely taken with a slow rollout to accommodate unforeseen implications. When the project is finished, Firefox releases and other sensitive operations will be better-protected, with minimal impact to developers’ existing workflows.

Categorieën: Mozilla-nl planet

Byron Jones: in-tree annotations of third-party code (moz.yaml)

Mozilla planet - vr, 15/06/2018 - 09:30

I've recently landed changes on mozilla-central to provide initial support for in-tree annotations of third-party code. (Bug 1454868, D1208, r5df5e745ce6e).

Why
  • Provide consistency and discoverability to third-party code, its origin (repository, version, SHA, etc), and Mozilla-local modifications
  • Simplify the process for auditing vendored versions and licenses
  • Establish a structure which allows automation to drive vendoring
How
  • Using the example moz.yaml from the top of moz_yaml.py, create a moz.yaml in the top level of third-party code
  • Verify the manifest with mach vendor manifest --verify path/to/moz.yaml
Next
  • We will be creating moz.yaml files in-tree (help here is appreciated!)
  • Add tests to ensure moz.yaml files remain valid
  • At some point we'll add automation-driven vendoring to simplify and standardise the process of updating vendored code
moz.yaml Template

From python/mozbuild/mozbuild/moz_yaml.py#l50

---
# Third-Party Library Template
# All fields are mandatory unless otherwise noted

# Version of this schema
schema: 1

bugzilla:
  # Bugzilla product and component for this directory and subdirectories
  product: product name
  component: component name

# Document the source of externally hosted code
origin:

  # Short name of the package/library
  name: name of the package

  description: short (one line) description

  # Full URL for the package's homepage/etc
  # Usually different from repository url
  url: package's homepage url

  # Human-readable identifier for this version/release
  # Generally "version NNN", "tag SSS", "bookmark SSS"
  release: identifier

  # The package's license, where possible using the mnemonic from
  # https://spdx.org/licenses/
  # Multiple licenses can be specified (as a YAML list)
  # A "LICENSE" file must exist containing the full license text
  license: MPL-2.0

# Configuration for the automated vendoring system.
# Files are always vendored into a directory structure that matches the source
# repository, into the same directory as the moz.yaml file
# optional
vendoring:

  # Repository URL to vendor from
  # eg. https://github.com/kinetiknz/nestegg.git
  # Any repository host can be specified here, however initially we'll only
  # support automated vendoring from selected sources.
  url: source url (generally repository clone url)

  # Revision to pull in
  # Must be a long or short commit SHA (long preferred)
  revision: sha

  # List of patch files to apply after vendoring. Applied in the order
  # specified, and alphabetically if globbing is used. Patches must apply
  # cleanly before changes are pushed
  # All patch files are implicitly added to the keep file list.
  # optional
  patches:
    - file
    - path/to/file
    - path/*.patch

  # List of files that are not deleted while vendoring
  # Implicitly contains "moz.yaml", any files referenced as patches
  # optional
  keep:
    - file
    - path/to/file
    - another/path
    - *.mozilla

  # Files/paths that will not be vendored from source repository
  # Implicitly contains ".git", and ".gitignore"
  # optional
  exclude:
    - file
    - path/to/file
    - another/path
    - docs
    - src/*.test

  # Files/paths that will always be vendored, even if they would
  # otherwise be excluded by "exclude".
  # optional
  include:
    - file
    - path/to/file
    - another/path
    - docs/LICENSE.*

  # If neither "exclude" or "include" are set, all files will be vendored
  # Files/paths in "include" will always be vendored, even if excluded
  # eg. excluding "docs/" then including "docs/LICENSE" will vendor just the
  #     LICENSE file from the docs directory
  # All three file/path parameters ("keep", "exclude", and "include") support
  # filenames, directory names, and globs/wildcards.

  # In-tree scripts to be executed after vendoring but before pushing.
  # optional
  run_after:
    - script
    - another script
Categorieën: Mozilla-nl planet

Daniel Pocock: The questions you really want FSFE to answer

Mozilla planet - vr, 15/06/2018 - 09:28

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

last man standing

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

Categorieën: Mozilla-nl planet

Niko Matsakis: MIR-based borrow check (NLL) status update

Mozilla planet - vr, 15/06/2018 - 06:00

I’ve been getting a lot of questions about the status of “Non-lexical lifetimes” (NLL) – or, as I prefer to call it these days, the MIR-based borrow checker – so I wanted to post a status update.

The single most important fact is that the MIR-based borrow check is feature complete and available on nightly. What this means is that the behavior of #![feature(nll)] is roughly what we intend to ship for “version 1”, except that (a) the performance needs work and (b) we are still improving the diagnostics. (More on those points later.)

The MIR-based borrow check as currently implemented represents a huge step forward from the existing borrow checker, for two reasons. First, it eliminates a ton of borrow check errors, resulting in a much smoother compilation experience. Second, it has a lot fewer bugs. More on this point later too.

You may be wondering how this all relates to the “alias-based borrow check” that I outlined in my previous post, which we have since dubbed Polonius. We have implemented that analysis and solved the performance hurdles that it used to have, but it will still take some effort to get it fully ready to ship. The plan is to defer that work and ultimately ship Polonius as a second step: it will basically be a “MIR-based borrow check 2.0”, offering even fewer errors.

Would you like to help?

If you’d like to be involved, we’d love to have you! The NLL working group hangs out on the #wg-nll stream in Zulip. We have weekly meetings on Tuesdays (3:30pm Eastern time) where we discuss the priorities for the week and try to dole out tasks. If that time doesn’t work for you, you can of course pop in any time and communicate asynchronously. You can also always go look for work to do amongst the list of GitHub issues – probably the diagnostics issues are the best place to start.

Transition period

As I mentioned earlier, the MIR-based borrow checker fixes a lot of bugs – this is largely a side effect of making the check operate over the MIR. This is great! However, as a result, we can’t just “flip the switch” and enable the MIR-based borrow checker by default, since that would break existing crates (I don’t really know how many yet). The plan therefore is to have a transition period.

During the transition period, we will issue warnings if your program used to compile with the old borrow checker but doesn’t with the new checker (because we fixed a bug in the borrow check). The way we do this is to run both the old and the new borrow checker. If the new checker would report an error, we first check if the old check would also report an error. If so, we can issue the error as normal. If not, we issue only a warning, since that represents a case that used to compile but no longer does.

The good news is that while the MIR-based checker fixes a lot of bugs, it also accepts a lot more code. This lessens the overall impact. That is, there is a lot of code which ought to have gotten errors from the old borrow check (but never did), but most of that code won’t get any errors at all under the new check. No harm, no foul. =)

Performance

One of the main things we are working on is the performance of the MIR-based checker, since enabling the MIR-based borrow checker currently implies significant overhead during compilation. Take a look at this chart, which plots rustc build times for the clap crate:

clap-rs performance

The black line (“clean”) represents the “from scratch” build time with rustc today. The orange line (“nll”) represents “from scratch” build times when NLL is enabled. (The other lines represent incremental build times in various combinations.) You can see we’ve come a long way, but there is still plenty of work to do.

The biggest problem at this point is that we effectively have to “re-run” the type check a second time on the MIR, in order to compute all the lifetimes. This means we are doing two type-checks, and that is expensive. However, this second type check can be significantly simpler than the original: most of the “heavy lifting” has been done. Moreover, there are lots of opportunities to cache work between them so that it only has to be done once. So I’m confident we’ll make big strides here. (For example, I’ve got a PR up right now that adds some simple memoization for a 20% win, and I’m working on follow-ups that add much more aggressive memoization.)

(There is an interesting corollary to this: after the transition period, the first type check will have no need to consider lifetimes at all, which I think means we should be able to make it run quite a bit faster as well, which should mean a shorter “time till first error” and also help things like computing autocompletion information for the RLS.)

Diagnostics

It’s not enough to point out problems in the code, we also have to explain the error in an understandable way. We’ve put a lot of effort into our existing borrow checker’s error message. In some cases, the MIR-based borrow checker actually does better here. It has access to more information, which means it can be more specific than the older checker. As an example1, consider this error that the old borrow checker gives:

error[E0597]: `json` does not live long enough
  --> src\main.rs:38:17
   |
38 |         let v = json["data"]["search"]["edges"].as_array();
   |                 ^^^^ borrowed value does not live long enough
...
52 |     }
   |     - `json` dropped here while still borrowed
...
90 | }
   | - borrowed value needs to live until here

The error isn’t bad, but you’ll note that while it says “borrowed value needs to live until here” it doesn’t tell you why the borrowed value needs to live that long – only that it does. Compare that to the new error you get from the same code:

error[E0597]: `json` does not live long enough
  --> src\main.rs:39:17
   |
39 |         let v = json["data"]["search"]["edges"].as_array();
   |                 ^^^^ borrowed value does not live long enough
...
53 |     }
   |     - borrowed value only lives until here
...
70 |             ", last_cursor))
   |             ----------- borrow later used here

The new error doesn’t tell you “how long” the borrow must last, it points to a concrete use. That’s great.

Other times, though, the errors from the new checker are not as good. This is particularly true when it comes to suggestions and tips for how to fix things. We’ve gone through all of our internal diagnostic tests and drawn up a list of about 37 issues, documenting each point where the checker’s message is not as good as the old one, and we’re working now on drilling through this list.

Polonius

In my previous blog post, I described a new version of the borrow check, which we have since dubbed Polonius. That analysis further improves on the MIR-based borrow check that is in Nightly now. The most significant improvement that Polonius brings has to do with “conditional returns”. Consider this example:

fn foo<T>(vec: &mut Vec<T>) -> &T {
    let r = &vec[0];

    if some_condition(r) {
        return r;
    }

    // Question: can we mutate `vec` here? On Nightly,
    // you get an error, because a reference that is returned (like `r`)
    // is considered to be in scope until the end of the function,
    // even if that return only happens conditionally. Polonius can
    // accept this code.
    vec.push(...);
}

In this example, vec is borrowed to produce r, and r is then returned – but only sometimes. In the MIR borrowck on nightly, this will give an error – when r is returned, the borrow is forced to last until the end of foo, no matter what path we take. The Polonius analysis is more precise, and understands that, outside of the if, vec is no longer referenced by any live references.

We originally intended for NLL to accept examples like this: in the RFC, this was called Problem Case #3. However, we had to remove that support because it was simply killing compilation times, and there were also cases where it wasn’t as precise as we wanted. Of course, some of you may recall that in my previous post about Polonius I wrote:

…the performance has a long way to go ([Polonius] is currently slower than existing analysis).

I’m happy to report that this problem is basically solved. Despite the increased precision, the Polonius analysis is now easily as fast as the existing Nightly analysis, thanks to some smarter encoding of the rules as well as the move to use datafrog. We’ve not done detailed comparisons, but I consider the performance question essentially settled.

If you’d like, you can try Polonius today using the -Zpolonius switch on Nightly. Keep in mind, though, that it is in a ‘pre-alpha’ state: there are still some known bugs that we have not prioritized fixing, and so forth.

Conclusion

The key take-aways here:

  • NLL is in a “feature complete” state on Nightly.
  • We are doing a focused push on diagnostics and performance, primarily.
  • Even once it ships, we can expect further improvements in the future, as we bring in the Polonius analysis.
  1. Hat tip to steveklabnik for providing this example!


Nick Cameron: What do you think are the most interesting/exciting projects using Rust?

Mozilla planet - wo, 13/06/2018 - 18:26

Last week I tweeted "What do you think are the most interesting/exciting projects using Rust? (No self-promotion :-) )". The response was awesome! Jonathan Turner suggested I write up the responses as a blog post, and here we are.

I'm just going to list the suggestions; crediting is difficult because often multiple people suggested the same projects. Follow the Twitter thread if you're interested:


The Mozilla Blog: WITHIN creates distribution platform using WebVR

Mozilla planet - wo, 13/06/2018 - 17:45

Virtual Reality (VR) content has arrived on the web, with help from the WebVR API. It’s a huge inflection point for a medium that has struggled for decades to reach a wide audience. Now, anyone with access to an internet-enabled computer or smartphone can enjoy VR experiences, no headset required. A good place to start? WITHIN’s freshly launched VR website.

From gamers to filmmakers, VR is the bleeding edge of self-expression for the next generation. It gives content creators the opportunity to tell stories in new ways, using audience participation, parallel narratives, and social interaction in ever-changing virtual spaces. With its immersive, 360-degree audio and visuals, VR has outsized power to activate our emotions and to put us in the center of the action.

WITHIN is at the forefront of this shift toward interactive filmmaking and storytelling. The company was one of the first to launch a VR distribution platform that showcases best-in-class VR content with high production values.

“Film is this incredible medium. It allows us to feel empathy for people that are very different from us, in worlds completely foreign to our own,” said Chris Milk, co-founder of WITHIN, in a Ted Talk. “I started thinking, is there a way I could use modern and developing technologies to tell stories in different ways, and tell different kinds of stories that maybe I couldn’t tell using the traditional tools of filmmaking that we’ve been using for 100 years?”

Simple to use

WITHIN’s approach is to bring curated and original VR experiences directly to viewers for free, rather than trying to gain visibility for their content through existing channels. Until now, VR content was mostly presented to headset users via the manufacturer’s store websites. So if you shelled out hundreds of dollars for an Oculus Rift or HTC Vive, you would see a library of content when you fired up your rig.

With its new site, WITHIN is making VR content accessible to everyone, whether they’re watching on a laptop, mobile phone, or headset. The company produces immersive VR experiences with high-profile partners like the band OK Go and Tyler Hurd. It also distributes top-tier VR experiences, like family-friendly animation and nature shows, with 360-degree visuals and stereoscopic sound.

“We aim to make it as easy as possible for fans to discover and share truly great VR experiences,” said Jon Rittenberg, Content Launch Manager at WITHIN.

WebVR JavaScript API

The key to reaching a vast potential audience of newcomers is to make a platform that is simple to use and easy to explore. Most importantly, it should work without exposing visitors to technical hurdles. That’s a challenge for two reasons.

First, the web is famously democratic. Companies like WITHIN have no control over who comes to their site, what device they’re on, what operating system that device runs, or how much bandwidth they have. Second, the web is still immature as a VR platform, with a growing but limited number of tools.

To build a platform that ‘just works’, the engineers at WITHIN turned to the WebVR API. Mozilla engineers built the foundation for the WebVR API with the goal of giving companies like WITHIN a simpler way to support a range of viewing options without having to rewrite their code for each platform. WebVR exposes VR devices and headsets to web apps, enabling developers to translate position and movement information from the display into movement around a 3D scene.
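To make that concrete, here is a minimal, hypothetical sketch of the WebVR 1.1 flow a site like WITHIN’s can build on. This is not WITHIN’s actual code: the canvas lookup and the rendering step are stand-ins. The idea is to detect a connected headset, ask it to present, and then drive rendering from the headset’s own frame loop.

// Illustrative WebVR 1.1 sketch; not WITHIN's implementation.
const canvas = document.querySelector('canvas'); // assumed: a WebGL canvas already set up
const frameData = new VRFrameData();             // receives pose and view/projection matrices

function enterVR() { // must be triggered by a user gesture, e.g. a click
  if (!navigator.getVRDisplays) {
    console.log('No WebVR support; fall back to a flat, in-page 360 view.');
    return;
  }
  navigator.getVRDisplays().then(displays => {
    if (displays.length === 0) return;
    const display = displays[0];
    display.requestPresent([{ source: canvas }]).then(() => {
      const onFrame = () => {
        display.getFrameData(frameData);        // headset position and orientation for this frame
        // renderScene(frameData) would draw the left and right eyes here (hypothetical helper)
        display.submitFrame();                  // hand the finished frame to the headset
        display.requestAnimationFrame(onFrame); // stay in sync with the display's refresh rate
      };
      display.requestAnimationFrame(onFrame);
    });
  });
}

On a laptop with no headset attached, the same page simply keeps rendering to the canvas, which is what makes a single code base across devices workable.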

Adapting content to devices

Using the WebVR specification, the company built its WITHIN WebVR site so it could adapt to dozens of factors and give new viewers a consistently great experience. In an amazing proof-of-concept for VR on the web, the company was able to optimize each streaming experience to a wide range of platforms and displays, like Vive, Rift, PSVR, iOS, Android, GearVR, and Oculus Go.

“The API really helped us out. It gave us all the pieces we needed,” said Jono Brandel, Lead Designer at WITHIN. “Without the WebVR API, we could not have done any of this stuff in the browser. We wouldn’t have access to VR headsets.”

Gorgeous content

The WITHIN WebVR site does a fantastic job of adapting its VR content to a range of devices. The site can identify a visitor’s device and push content suited for that device, making it easy on the end user. The majority of visitors to WITHIN’s VR site arrive on a Cardboard device that works with their smartphone. That delivers a basic experience: 3D stereoscopic visuals with some gyroscopic controls.

WITHIN uses WebVR to connect to any viewer or device

Headset users get the same content with higher resolution visuals and binaural audio, which brings life-like sound effects to VR experiences. The VR content can also adapt to different head and hand tracking inputs, and supports common navigational tools in popular headsets. Folks visiting via a browser can view VR content just as they would play a 3D, interactive game or watch a 360-degree video online.
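The capability checks that drive this kind of adaptation are part of the same API. As a rough, hypothetical illustration (the tier names and the branching logic are invented for this example, not taken from WITHIN’s code), a site can inspect a display’s VRDisplayCapabilities and decide what to serve:

// Hypothetical sketch: choose a content tier from what the connected display reports.
function pickExperienceTier(display) {
  const caps = display.capabilities; // VRDisplayCapabilities from the WebVR 1.1 spec
  if (caps.canPresent && caps.hasPosition) {
    return 'full';   // positionally tracked headset: highest-quality assets, binaural audio
  }
  if (caps.canPresent) {
    return 'mobile'; // orientation-only viewing, e.g. a Cardboard-style holder
  }
  return 'flat';     // no headset: serve the experience as an interactive 360 view in the page
}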

To get this level of adaptive support took quite a bit of work behind the scenes. “We have a room filled with a ton of devices: smartphones, computers, and operating systems. We’ve got everything,” Brandel said. “It’s really cool that one code base supports all of these platforms.”

A more capable web platform

The web is a great platform for creating and experiencing VR. It’s easy to share content broadly, across continents and cultures. And it’s simple to get started building 3D experiences using free tools like A-Frame, invented by Mozilla engineers with help from a talented and dedicated open source community.

“We’re excited to see such big platforms making a bet on WebVR,” said Lars Bergstrom, Director of Mixed Reality at Mozilla. “As new devices reach more people, we expect the WebVR specification will continue to grow and evolve.”

Mozilla and WITHIN are also collaborating to make the open web platform even better for VR distribution. The two companies are working together on a series of experiments to make WebVR versions of popular players as capable as native applications, using tech standards WebGL and WebAssembly.

The goal is to make it simpler for content creators to push their stories and games to the web, without having to do a lot of coding work. The two companies are exploring how to use Unity’s popular gaming platform to streamline the publication to the web, while still delivering performance, stability, and scale for immersive experiences.

“The Unity ecosystem is already mature – and it’s where the designers and developers are focused,” said Christopher Van Wiemeersch, Senior UX Web Engineer at Mozilla. “We’re hoping that WebAssembly and the Unity-WebVR Exporter can help us use the web as a distribution platform and not only as a content-creation platform. You’re using JavaScript under the hood, but you don’t have to learn it yourself.”

Earlier this year, Mozilla released Unity WebVR Assets, a tool that aims to reduce complexity for content authors by letting them export content from the Unity platform and have the experiences work on the web. You can check it out in the Unity Asset Store.

If you’re a filmmaker interested in getting your VR experience on the WITHIN platform, you can submit your project here for consideration.

 

The post WITHIN creates distribution platform using WebVR appeared first on The Mozilla Blog.


Daniel Stenberg: curl survey 2018 analysis

Mozilla planet - di, 12/06/2018 - 15:27

This year, 670 individuals spent some of their valuable time on our survey and filled in answers that help us guide what to do next. What's good, what's bad, what to remove and where to emphasize efforts more.

It's taken me a good while to write up this analysis, but hopefully the results here can be used all through the year as a reminder of what people actually think and how they use curl and libcurl.

A new question this year asked which continent the respondent lives on, and it revealed an unexpectedly strong European focus.

What didn't trigger any surprises, though, was the question of which protocols users are using: the answers almost exactly mirrored previous years' surveys. HTTP and HTTPS are the king duo by far.

Read the full 34 page analysis PDF.

Some other interesting take-aways:

  • One person claims to use curl to handle 19 protocols! (out of 23)
  • One person claims to use curl on 11 different platforms!
  • Over 5% of the users argue for a rewrite in Rust.
  • Windows is now the second most common platform to use curl on.
