
Mozilla Nederland: the Dutch Mozilla community

This Week In Rust: This Week in Rust 262

Mozilla planet - Tue, 27/11/2018 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is modulator, a crate of abstract modulators for use in audio synthesizers (and possibly elsewhere). Thanks to Andrea Pessino for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

173 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

Upcoming Events

Online · Africa · Asia · Europe · North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

"I did not want to inflict memory management on my son" – @M_a_s_s_i

– Massimiliano Mantione during his RustFest talk

Thanks to llogiq for the suggestion!

Please submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


The Rust Programming Language Blog: Rust Survey 2018 Results

Mozilla planet - Tue, 27/11/2018 - 01:00

Another year means another Rust survey, and this year marks Rust's third annual survey. This year, the survey launched for the first time in multiple languages: in total, 14 languages, in addition to English, were covered. The results from non-English languages totalled 25% of all responses and helped push the number of responses to a new record of 5,991. Before we begin the analysis, we just want to give a big "thank you!" to all the people who took the time to respond and share their thoughts. It’s because of your help that Rust will continue to improve year after year.

Do you use Rust

In addition to the increase in the number of responses, this year also saw an increase in the percentage of respondents who use Rust. Up from last year’s 66.9%, nearly three-quarters of this year’s responses were from Rust users.

Rust Users

Time with Rust

How long have you worked in Rust

We’re seeing a steady stream of new users into Rust. At the time of the survey, ~23% of Rust users had been using it for 3 months or less. Likewise, nearly a quarter of the users have used Rust for at least 2 years.

How long did it take to be productive

Over 40% of Rust users felt productive in Rust in less than a month of use, and over 70% felt productive in their first year. Unfortunately, there is a noticeable struggle among users, as over 22% do not yet feel productive.

How long have you been unproductive

Looking closer at these users who feel unproductive in Rust, only about 25% are in their first month of use. The challenge here is to find ways to help bridge users to productivity so they don't get stuck.

How much do you use Rust?

Size of summed Rust projects

Rust projects are continuing to trend to larger sizes, with larger overall investments. Medium to large investments in Rust (those totaling over 10k and 100k lines of code, respectively) have grown from 8.9% in 2016, to 16% in 2017, to 23% this year.

How often do you use Rust

We’ve also seen growth in regular Rust usage. Up from 17.5% last year, nearly a quarter of users now use Rust daily. In total, weekly Rust usage has risen from 60.8% to 66.4%.

Rust expertise

How would you rate your Rust expertise

Rather than being a simple curve, Rust expertise has two peaks: one around a "3", and another at "7", showing that users tend to see themselves as just above beginner or experienced without necessarily being expert.

How difficult are Rust concepts

Rust users generally felt that Enums and Cargo were largely easy concepts, followed by Iterators, Modules, and Traits. The more challenging concepts of Trait Bounds and Unsafe came next. Lastly, the most challenging concepts were Macros, Ownership & Borrowing, and Lifetimes. These challenges closely match the feedback we’ve heard in years past and continue to be a focus of productivity improvements like NLL and the ongoing macro system work.

What programming languages are you familiar with

Humorously, we see that Rust isn't actually the top programming language that users were familiar with. Instead, it came in 2nd place behind Python.

Rust toolchain

Which Rust version do you use

We’re seeing similar numbers in users of the current stable release since last year. Perhaps surprisingly, we’re continuing to see a rise in the number of users who use the Nightly compiler in their workflow. For the second year in a row, Nightly usage has continued to rise, and is now over 56% (up from 51.6% of last year).

When asked why they used nightly, people responded with a broad range of reasons including: access to 2018 edition, asm, async/await, clippy, embedded development, rocket, NLL, proc macros, and wasm.

Has upgrading the compiler broken your code

The percentage of people who see a breakage during a routine compiler update has stayed the same since last year, with 7.4% saying they’ve experienced breakage.

If so how much work to fix it

Breakages generally required only minor fixes, though some reported needing moderate or major fixes to upgrade to the next stable compiler.

Preferred install method

We again see a strong showing for rustup, which continues to hold at 90% of Rust installs. Linux distros follow as a distant second at 17%.

Experience with Rust tools

Tools like rustfmt and rustdoc made a strong showing, with lots of positive support. Following these is the clippy tool -- despite having fewer users, its users enjoy it. The IDE support tools Rust Language Server and racer had positive support but, unfortunately, generated a few more dislike votes and comments than the other tools surveyed. The bindgen tool has a relatively small userbase.

Rust workflow

Which platform are you developing on

Linux continues to be a powerhouse among Rust developers, holding on to roughly 80% of Rust developers. Windows usage has grown slightly from 31% last year to 34% this year, its second year in a row of growth.

Which platforms are you developing for

Linux and Windows continued to show strongly as targets for Rust applications. Other platforms held largely the same as last year, with one exception: WebAssembly. The new technology has shown impressive growth, nearly doubling from last year's 13% to this year's 24%.

What editors do you use

Vim, the front-runner among editors for the past two years, has now finally been bested by VSCode, which grew from 33.8% of Rust developers to 44.4% this year.

Rust at work

Do you use Rust at work

Rust continues its slow-and-steady growth in the workplace. We're now seeing year-over-year growth of full-time and part-time Rust, growing from last year's 4.4% full-time and 16.6% part-time to this year's 8.9% full-time and 21.2% part-time, a doubling of full-time Rust commercial use. In total, Rust commercial use grew from 21% to just over 30% of Rust users.

Is your company evaluating Rust

There is still room for Rust to grow into more companies: over a third of users report that their company isn't currently looking into evaluating Rust in the coming year. When paired with the survey data showing that nearly half of non-users need company support, this shows the need for further company outreach or more company-focused information about Rust.

Feeling welcome

Do you feel welcome in the Rust community

An important part of the Rust community's efforts is ensuring that the Rust project is a welcoming place for its users. New users should feel encouraged to explore, share ideas, and generally be themselves.

When asked, both current Rust users and non-users largely felt welcome, though over a quarter of responses weren't sure. There was also some regional variation in these responses. For example, responses on the Russian version of the survey showed double the percent of unwelcome feelings at 4%. Mainland China showed even more at 8%.

There's a challenge here to help Rust communities worldwide feel like they are part of what makes Rust unique, as Rust continues to grow a strong presence in more areas of the world.

Are you underrepresented in tech

The number of people in Rust who self-identify as being part of a group underrepresented in technology is growing slowly year-over-year. The survey also highlights some challenges, as the number of women is still lower than the industry average of women in programming fields.

Rust Non-Users

A big part of a welcoming Rust community is reaching out to non-users as well. As we have in years past, we again asked the reasons why people weren't using Rust.

How long before you stopped

Of those who stopped using Rust, just over 50% stopped within their first month of use, while roughly 50% had used it for more than a month before stopping.

Why are you not using Rust

Many non-users responded that they did want to learn Rust, but there were factors that slowed them down. First among these is that the companies the respondents work for do not themselves use Rust. Nearly one half of the non-users were blocked by the lack of company support.

Additionally, 1 in 4 non-users were slowed by the feeling of Rust being too intimidating or complicated. The work towards improving Rust IDE support has helped (down from 25% to 16%), though we still see a strong push towards even better IDE support among non-users.

Challenges

As we've done in past years, we asked for your comments on where Rust can improve. This year, we see some familiar themes as well as some new ones in this feedback. The top ten themes this year are:

  1. the need for better library support
  2. an improved IDE experience
  3. the need for broader adoption of Rust generally
  4. a richer ecosystem of tools and support
  5. an improved learning curve
  6. the need for important language features and crates to be stable and supported
  7. support for async programming
  8. support for GUI development
  9. better documentation
  10. improved compile times

New this year is the rising need to support GUI development, showing that Rust continues to grow not only on the server, but that people are feeling the need to stretch into application development.

"Improve Rust marketing. Many people don't know about it"

Comments remind us that while Rust may be well-known in some circles, in many parts of the tech world it is not yet well-known and still has room to grow.

"Keeping a focus on adoption/tutorials/books/novice experience will pay dividends in the years to come."

In addition to outreach, a broader set of documentation would in turn help reach a broader audience.

"Stability and maturity of developer tools, make it easier to get a working setup and to debug applications"

Many people commented on the IDE support, pointing out not only instability or inaccuracy in the RLS, but also the need for a much stronger IDE story that covered more areas, like easier debugging.

"The maturity of the ecosystem and libraries. Have a good ecosystem of "standard" libraries is key for the future of the language"

A common theme continues to be the need to push libraries to completion and grow the set of "standard" libraries that users can use. Some comments point out this isn't the fault of maintainers, who are already working hard to write and publish the crates, but that generally more companies need to get involved and offer commercial support.

"Ergonomics and discoverability of "putting it together" documentation"

Some people pointed out that ergonomics goes hand in hand with richer documentation, seeing that these aren't separate concepts but rather challenges that should be tackled together in a unified approach.

Looking forward

This year saw the strongest survey yet. Not only was it the largest community survey, it was the first to cover languages outside of English. Rust continues to grow steadily, and with it, both its strengths and challenges are introduced to a broader audience.

We look forward to using your feedback in planning for 2019, and we're excited to see where we can take Rust next.


William Lachance: Making contribution work for Firefox tooling and data projects

Mozilla planet - Mon, 26/11/2018 - 17:13

One of my favorite parts about Mozilla is mentoring and working alongside third party contributors. Somewhat surprisingly since I work on internal tools, I’ve had a fair amount of luck finding people to help work on projects within my purview: mozregression, perfherder, metrics graphics, and others have all benefited from the contributions of people outside of Mozilla.

In most cases (a notable exception being metrics graphics), these have been internal-tooling projects used by others to debug, develop, or otherwise understand the behaviour of Firefox. On the face of it, none of the things I work on are exactly “high profile cutting edge stuff” in the way, say, Firefox or the Rust Programming Language are. So why do they bother? The exact formula varies depending on contributor, but I think it usually comes down to some combination of these two things:

  1. A desire to learn and demonstrate competence with industry standard tooling (the python programming language, frontend web development, backend databases, “big data” technologies like Parquet, …).
  2. A desire to work with and gain recognition inside of a community of like-minded people.

Pretty basic, obvious stuff — there is an appeal here to basic human desires like the need for security and a sense of belonging. Once someone’s “in the loop”, so to speak, generally things take care of themselves. The real challenge, I’ve found, is getting people from the “I am potentially interested in doing something with Mozilla internal tools” to the stage that they are confident and competent enough to work in a reasonably self-directed way. When I was on the A-Team, we classified this transition in terms of a commitment curve:

prototype commitment curve graphic by Steven Brown

The hardest part, in my experience, is the initial part of that curve. At this point, people are just dipping their toe in the water. Some may not have a ton of experience with software development yet. In other cases, my projects may just not be the right fit for them. But of course, sometimes there is a fit, or at least one could be developed! What I’ve found most helpful is “clearing a viable path” forward for the right kind of contributor. That is, some kind of initial hypothesis of what a successful contribution experience would look like as a new person transitions from “explorer” stage in the chart above to “associate”.

I don’t exactly have a perfect template for what “clearing a path” looks like in every case. It depends quite a bit on the nature of the contributor. But there are some common themes that I’ve found effective:

First, provide good, concise documentation on both the project's purpose and vision and on how to get started easily, and keep it up to date. For projects with a front-end web component, I try to decouple the front end parts from the backend services so that people can yarn install && yarn start their way to success. Being able to see the project in action quickly (and not getting stuck on some mundane getting started step) is key in maintaining initial interest.

Second, provide a set of good starter issues (sometimes called “good first bugs”) for people to work on. Generally these would be non-critical-path type issues that have straightforward instructions to resolve and fix. Again, the idea here is to give people a sense of quick progress and resolution, a “yes I can actually do this” sort of feeling. But be careful not to let a contributor get stuck here! These bugs take a disproportionate amount of effort to file and mentor compared to their actual value — the key is to progress the contributor to the next level once it’s clear they can handle the basics involved in solving such an issue (checking out the source code, applying a fix, submitting a patch, etc). Otherwise you’re going to feel frustrated and wonder why you’re on an endless treadmill of writing up trivial bugs.

Third, once a contributor has established themselves by fixing a few of these simple issues, I try to get to know them a little better. Send them an email, learn where they're from, invite them to chat on the project channel if they can. At the same time, this is an opportunity to craft a somewhat larger piece of work (a sort of mini-project) that they can do, tailored to their interests. For example, a new contributor on the Mission Control project has recently been working on adding Jest tests to the project — I provided some basic guidance of things to look at, but did not dictate exactly how to perform the task. They figured that out for themselves.

As time goes by, you just continue this process. Depending on the contributor, they may start coming up with their own ideas for how a project might be improved or they might still want to follow your lead (or that of the team), but at the least I generally see an improvement in their self-directedness and confidence after a period of sustained contribution. In either case, the key to success remains the same: sustained and positive communication and sharing of goals and aspirations, making sure that both parties are getting something positive out of the experience. Where possible, I try to include contributors in team meetings. Where there's an especially close working relationship (e.g. Google Summer of Code), I try to set up a weekly one-on-one. Regardless, I make reviewing code, answering questions, and providing suggestions on how to move forward a top priority (i.e. not something I'll leave for a few days). It's the least I can do if someone is willing to take time out to contribute to my project.

If this seems similar to the best practices for how members of a team should onboard each other and work together, that's not really a coincidence. Obviously the relationship is a little different because we're not operating with a formal managerial structure and usually the work is unpaid: I try to bear that in mind and make double sure that contributors are really getting some useful skills and habits that they can take with them to future jobs and other opportunities, while also emphasizing that their code contributions are their own, not Mozilla's. So far it seems to have worked out pretty well for all concerned (me, Mozilla, and the contributors).


Daniel Stenberg: HTTP/3 Explained

Mozilla planet - Mon, 26/11/2018 - 13:40

I’m happy to tell that the booklet HTTP/3 Explained is now ready for the world. It is entirely free and open and is available in several different formats to fit your reading habits. (It is not available on dead trees.)

The book describes what HTTP/3 and its underlying transport protocol QUIC are, why they exist, what features they have and how they work. The book is meant to be readable and understandable for most people with a rudimentary level of network knowledge or better.

These protocols are not done yet; there aren't even any implementations of these protocols in the main browsers yet! The book will be updated and extended along the way as things change, implementations mature and the protocols settle.

If you find bugs, mistakes, something that needs to be explained better/deeper or otherwise want to help out with the contents, file a bug!

It was just a short while ago I mentioned the decision to change the name of the protocol to HTTP/3. That triggered me to refresh my document in progress and there are now over 8,000 words there to help.

The entire HTTP/3 Explained contents are available on github.

If you haven’t caught up with HTTP/2 quite yet, don’t worry. We have you covered for that as well, with the http2 explained book.


Daniel Pocock: UN Forum on Business and Human Rights

Mozilla planet - Mon, 26/11/2018 - 12:42

This week I'm at the UN Forum on Business and Human Rights in Geneva.

What is the level of influence that businesses exert in the free software community? Do we need to be more transparent about it? Does it pose a risk to our volunteers and contributors?


The Servo Blog: This Week In Servo 119

Mozilla planet - Mon, 26/11/2018 - 01:30

In the past 3 weeks, we merged 243 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2018.

This week’s status updates are here.

Notable Additions
  • avadacatavra created the infrastructure for implementing the Resource Timing and Navigation Timing web standards.
  • mandreyel added support for tracking focused documents in unique top-level browsing contexts.
  • SimonSapin updated many crates to the Rust 2018 edition.
  • ajeffrey improved the Magic Leap port in several ways.
  • nox implemented some missing WebGL extensions.
  • eijebong fixed a bug with fetching large HTTP responses.
  • cybai added support for firing the rejectionhandled DOM event.
  • Avanthikaa and others implemented additional oscillator node types.
  • jdm made scrolling work more naturally on backgrounds of scrollable content.
  • SimonSapin added support for macOS builds to the Taskcluster CI setup.
  • paulrouget made rust-mozjs support long-term reuse without reinitializing the library.
  • paulrouget fixed a build issue breaking WebVR support on android devices.
  • nox cleaned up a lot of texture-related WebGL code.
  • ajeffrey added a laser pointer in the Magic Leap port for interacting with web content.
  • pyfisch made webdriver composition events trigger corresponding DOM events.
  • nox reduced the amount of copying required when using HTML images for WebGL textures.
  • vn-ki implemented the missing JS constructor for HTML audio elements.
  • Manishearth added support for touch events on Android devices.
  • SimonSapin improved the treeherder output for the new Taskcluster CI.
  • jdm fixed a WebGL regression breaking non-three.js content on Android devices.
  • paulrouget made Servo shut down synchronously to avoid crashes on Oculus devices.
  • Darkspirit regenerated several important files used for HSTS and network requests.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!


Mozilla Open Innovation Team: Overscripted Web: a Mozilla Data Analysis Challenge

Mozilla planet - Thu, 22/11/2018 - 17:26

Help us explore the unseen JavaScript and what this means for the Web

Photo by Markus Spiske on Unsplash

What happens while you are browsing the Web? Mozilla wants to invite data and computer scientists, students and interested communities to join the “Overscripted Web: a Data Analysis Challenge”, and help explore JavaScript running in browsers and what this means for users. We gathered a rich dataset and we are looking for exciting new observations, patterns and research findings that help to better understand the Web. We want to bring the winners to speak at MozFest, our annual festival for the open Internet held in London.

The Dataset

Two cohorts of Canadian undergraduate interns worked on data collection and subsequent analysis. The Mozilla Systems Research Group is now open sourcing a dataset of publicly available information that was collected by a Web crawl in November 2017. This dataset is currently being used to help inform product teams at Mozilla. The primary analysis from the students focused on:

  • Session replay analysis: when do websites replay your behavior on the site
  • Eval and dynamically created function calls
  • Cryptojacking: websites using users' computers to mine cryptocurrencies are mainly video streaming sites

Take a look at Mozilla's Hacks blog for a longer description of the analysis.

The Data Analysis Challenge

We see great potential in this dataset and believe that our analysis has only scratched the surface of the insights it can offer. We want to empower the community to use this data to better understand what is happening on the Web today, which is why Mozilla’s Systems Research Group and Open Innovation team partnered together to launch this challenge.

We have looked at how other organizations enable and speed up scientific discoveries through collaboratively analyzing large datasets. We’d love to follow this exploratory path: We want to encourage the crowd to think outside the proverbial box, get creative, get under the surface. We hope participants get excited to dig into the JavaScript executions data and come up with new observations, patterns, research findings.

To guide thinking, we’re dividing the Challenge into three categories:

  1. Tracking and Privacy
  2. Web Technologies and the Shape of the Modern Web
  3. Equality, Neutrality, and Law

You will find all of the necessary information to join on the Challenge website. The submissions will close on August 31st and the winners will be announced on September 14th. We will bring the winners of the best three analyses (one per category) to MozFest, the world’s leading festival for the open internet movement, taking place in London from October 26th to the 28th 2018. We will cover their airfare, hotel, admission/registration, and if necessary visa fees in accordance with the official rules. We may also invite the winners to do 15-minute presentations of their findings.

We are looking forward to the diverse and innovative approaches from the data science community and we want to specifically encourage young data scientists and students to take a stab at this dataset. It could be the basis for your final university project and analyzing it can grow your data science skills and build your resumé (and GitHub profile!). The Web gets more complex by the minute, keeping it safe and open can only happen if we work together. Join us!

Overscripted Web: a Mozilla Data Analysis Challenge was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


Mozilla VR Blog: Updating the WebXR Viewer for iOS 12 / ARKit 2.0

Mozilla planet - Wed, 21/11/2018 - 21:30
Over the past few months, we have continued to leverage the features of ARKit on iOS to enhance the WebXR Viewer app and explore ideas and issues with WebXR. One big question with WebXR on modern AR and VR platforms is how best to leverage the platform to provide a frictionless experience while also supporting the advanced capabilities users will expect, in a safe and platform-independent way.

Last year, we created an experimental version of an API for WebXR and built the WebXR Viewer iOS app to allow ourselves and others to experiment with WebXR on iOS using ARKit. In the near future, the WebXR Device API will be finalized and implementations will begin to appear; we're already working on a new Javascript library that allows the WebXR Viewer to expose the new API, and expect to ship an update with the official API shortly after the webxr polyfill is updated to match the final API.

We recently released an update to the WebXR Viewer that fixes some small bugs and updates the app to iOS 12 and ARKit 2.0 (we haven't exposed all of ARKit 2.0 yet, but expect to over the coming months). Beyond just bug fixes, two features of the new app highlight interesting questions for WebXR related to privacy, friction and platform independence.

First, Web browsers can decrease friction for users moving from one AR experience to another by managing the underlying platform efficiently and not shutting it down completely between sessions, but care needs to be taken not to expose data to applications that might surprise users.

Second, some advanced features imagined for WebXR are not (yet) available in a cross platform way, such as shareable world maps or persistent anchors. These capabilities are core to experiences users will expect, such as persistent content in the world or shared experiences between multiple co-located people.

In both cases, it is unclear what the right answer is.

Frictionless Experience and User Privacy

Hypothesis: regardless of how the underlying platform is used, when a new WebXR web page is loaded, it should only get information about the world that would be available if it was loaded for the first time, and should not see existing maps or anchors from previous pages.

Consider the image (and video) below. The image shows the results of running the "World Knowledge" sample, and spending a few minutes walking from the second floor of a house, down the stairs to the main floor, around and down the stairs to the basement, and then back up and out the front door into the yard. Looking back at the house, you can see small planes for each stair, the floor and some parts of the walls (they are the translucent green polygons). Even after just a few minutes of running ARKit, a surprising amount of information can be exposed about the interior of a space.


If the same user visits another web page, the browser could choose to restart ARKit or not. Restarting results in a high-friction user experience: all knowledge of the world is lost, requiring the user to scan their environment to reinitialize the underlying platform. Not restarting, however, might expose information to the new web page that is surprising to the user. Since the page is visited while outside the house, a user might not expect it to have access to details of the interior.

In the WebXR Viewer, we do not reinitialize ARKit for each page. We made the decision that if a page is reloaded without visiting a different XR page, we leave ARKit running and all world knowledge is retained. This allows pages to be reloaded without completely restarting the experience. When a new WebXR page is visited, we keep ARKit running, but destroy all ARKit anchors and world knowledge (i.e., ARKit ARAnchors, such as ARPlaneAnchors) that are further than some threshold distance from the user (3 meters, by default, in our current implementation).
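
The pruning rule itself is simple. Here is a minimal TypeScript sketch of the policy described above; the real implementation is native iOS code working on ARKit's ARAnchor objects, and the type names, distance helper, and anchor representation below are illustrative assumptions (only the 3-meter default comes from the post):

```typescript
// Illustrative sketch of the WebXR Viewer's anchor-pruning policy; the actual
// code is native iOS/ARKit, not TypeScript.

interface Vec3 { x: number; y: number; z: number; }

interface Anchor {
  id: string;
  position: Vec3; // anchor position in world space
}

const distance = (a: Vec3, b: Vec3) =>
  Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// Called when the browser navigates to a *different* WebXR page. Reloading the
// same page keeps everything; navigating away keeps ARKit running but drops
// world knowledge that is far from the user.
function pruneAnchorsOnNewPage(
  anchors: Anchor[],
  userPosition: Vec3,
  thresholdMeters = 3 // default threshold mentioned above
): Anchor[] {
  return anchors.filter((a) => distance(a.position, userPosition) <= thresholdMeters);
}
```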

In the video below, we demonstrate this behavior. When the user changes from the "World Knowledge" sample to the "Hit Test" sample, internally we destroy most of the anchors. When the user changes back to the "World Knowledge" sample, we again destroy most of the anchors. You can see at the end of the video that only the nearby planes still exist (the plane under the user and some of the planes on the front porch). Further planes (inside the house, in this case) are gone. (Visiting non-XR pages does not count as visiting another page, although we also shut down ARKit after a short time, to save battery, if the browser is not on an XR page, which destroys all world knowledge as well).


While this is a relatively simplistic approach to this tradeoff between friction and privacy, issues like these need to be considered when implementing WebXR inside a browser. Modern AR and VR platforms (such as Microsoft's Hololens or Magic Leap's ML1) are capable of synthesizing and exposing highly detailed maps of the environment, and retaining significant information over time. In these platforms, the world space model is retained over time and exposed to apps, so even if the browser restarts the underlying API for each visited page, the full model of the space is available unless the browser makes an explicit choice to not expose it to the web page.

Consider, for example, a user walking a similar path for a similarly short time in the above house while wearing a Microsoft Hololens. In this case, a map of the same environment is shown below.


This image (captured with Microsoft's debugging tools, while the user is sitting at a computer in the basement of the house, shown as the sphere and green view volume) is significantly more detailed than the ARKit planes. And it would be retained, improved and shared with all apps in this space as the user continues to wear and use the Hololens.

In both cases, the ARKit planes and Hololens maps were captured based on just a few minutes of walking in this house. Imagine the level of detail that might be available after extended use.

Platform-specific Capabilities

Hypothesis: advanced capabilities such as world mapping, which are needed for user experiences that require persistence and shared content, will need cross-platform analogs to the platform-specific silos currently available if the platform-independent character of the web is to extend to AR and VR.

ARKit 2.0 introduces the possibility of retrieving the current model of the world (the so-called ARWorldMap) that ARKit uses for tracking planes and anchors in the world. The map can then be saved and/or shared with others, enabling both persistent and multi-user AR experiences.

In this version of the WebXR Viewer, we want to explore some ideas for persistent and shared experiences, so we added session.getWorldMap() and session.setWorldMap(map) commands to an active AR session (these can be seen in the "Persistence" sample, a small change to the "World Knowledge" sample above).
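
As a rough illustration of how a page might use these commands, here is a short TypeScript sketch; the getWorldMap/setWorldMap names come from the post, while the promise-based shapes, the untyped session object, and the surrounding flow are assumptions rather than the documented API:

```typescript
// Hypothetical usage of the WebXR Viewer's experimental world-map commands.
async function shareWorldMap(session: any): Promise<void> {
  // Retrieving the map only works if the user has granted this page access
  // to "world knowledge" in the viewer's privacy settings.
  const worldMap = await session.getWorldMap();

  // ...later, on a return visit or on another co-located device, hand the map
  // back and let ARKit try to relocalize against it. Loading a map resets
  // ARKit, so treat this as a disruptive, user-visible operation.
  await session.setWorldMap(worldMap);
}
```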

These capabilities raise questions of user privacy. ARKit's ARWorldMap is an opaque binary data structure, and may contain a surprising amount of data about the space that could be extracted by determined application developers (the format is undocumented). Because of this, we leverage the existing privacy settings in the WebXR Viewer, and allow apps to retrieve the world map if (and only if) the user has given the page access to "world knowledge".

On the other hand, the WebXR Viewer allows a page to provide an ARWorldMap to ARKit and try and use it for relocalization with no heightened permissions. In theory, such an action could allow a malicious web app to try and "probe" the world by having the browser test if the user is in a certain location. In practice, such an attack seems infeasible: loading a map resets ARKit (a highly disruptive and visible action) and relocalizing the phone against a map takes an indeterminate amount of time regardless of whether the relocalization eventually succeeds or not.

While implementing these commands was trivial, exposing this capability raises a fundamental question for the design of WebXR (beyond questions of permissions and possible threats). Specifically, how might such capabilities eventually work in a cross-platform way, given that each XR platform is implementing these capabilities differently?

We have no answer for this question. For example, some devices, such as Hololens, allow spaces to be saved and shared, much like ARKit. But other platforms opt to only share Anchors, or do not (yet) allow sharing at all. Over time, we hope some common ground might emerge. Google has implemented their ARCore Cloud Anchors on both ARKit and ARCore; perhaps a similar approach could be taken that is more open and independent of one company's infrastructure, and could thus be standardized across many platforms.

Looking Forward

These issues are two of many issues that are being discussed and considered by the Immersive Web Community Group as we work on the initial WebXR Device API specification. If you want to see the full power of the various XR platforms exposed and available on the Web, done in a way that preserves the open, accessible and safe character of the Web, please join the discussion and help us ensure the success of the XR Web.


Mozilla GFX: WebRender newsletter #31

Mozilla planet - Wed, 21/11/2018 - 16:18

Greetings! I’ll introduce WebRender’s 31st newsletter with a few words about batching.

Efficiently submitting work to GPUs isn't as straightforward as one might think. It is not unusual for a CPU renderer to go through each graphic primitive (a blue filled circle, a purple stroked path, an image, etc.) in z-order to produce the final rendered image. While this isn't the most efficient way, on the CPU more is gained by optimizing the inner loop of the algorithm that renders each individual object than by optimizing the overhead of alternating between various types of primitives. GPUs, however, work quite differently, and the cost of submitting small workloads is often higher than the time spent executing them.

I won’t go into the details of why GPUs work this way here, but the big takeaway is that it is best to not think of a GPU API draw call as a way to draw one thing, but rather as a way to submit as many items of the same type as possible. If we implement a shader to draw images, we get much better performance out of drawing many images in a single draw call than submitting a draw call for each image. I’ll call a “batch” any group of items that is rendered with a single drawing command.

So the solution is simply to render all images in a draw call, and then all of the text, then all gradients, right? Well, it’s a tad more complicated because the submission order affects the result. We don’t want a gradient to overwrite text that is supposed to be rendered on top of it, so we have to maintain some guarantees about the order of the submissions for overlapping items.

In the 29th newsletter intro I talked about culling and the way we used to split the screen into tiles to accelerate discarding hidden primitives. This tiling system was also good at simplifying the problem of batching. In order to batch two primitives together we need to make sure that there is no primitive of a different type in between. Comparing all primitives on screen against every other primitive would be too expensive but the tiling scheme reduced this complexity a lot (we then only needed to compare primitives assigned to the same tile).

In the culling episode I also wrote that we removed the screen space tiling in favor of using the depth buffer for culling. This might sound like a regression for the performance of the batching code, but the depth buffer also introduced a very nice property: opaque elements can be drawn in any order without affecting correctness! This is because we store the z-index of each pixel in the depth buffer, so if some text is hidden by an opaque image we can still render the image before the text and the GPU will be configured to automatically discard the pixels of the text that are covered by the image.

In WebRender this means we were able to separate primitives into two groups: the opaque ones, and the ones that need to perform some blending. Batching opaque items is trivial since we are free to just put all opaque items of the same type in their own batch regardless of their painting order. For blended primitives we still need to check for overlaps, but we have fewer primitives to consider. Currently WebRender simply iterates over the last 10 blended primitives to see if there is a suitable batch with no other type of primitive overlapping in between, and defaults to starting a new batch if none is found. We could go for a more elaborate strategy but this has turned out to work well so far since we put a lot more effort into moving as many primitives as possible into the opaque passes.
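
To make that strategy concrete, here is a simplified TypeScript sketch of the batching decision described above. WebRender itself is written in Rust, and the types, rectangle-overlap test, and per-batch bookkeeping here are illustrative assumptions, not the real code:

```typescript
type Kind = "image" | "text" | "gradient";
interface Rect { x: number; y: number; w: number; h: number; }
interface Prim { kind: Kind; rect: Rect; opaque: boolean; }
interface Batch { kind: Kind; prims: Prim[]; rects: Rect[]; }

const overlaps = (a: Rect, b: Rect) =>
  a.x < b.x + b.w && b.x < a.x + a.w && a.y < b.y + b.h && b.y < a.y + a.h;

function buildBatches(prims: Prim[], lookBack = 10): { opaque: Batch[]; blended: Batch[] } {
  // Opaque primitives: the depth buffer resolves ordering, so they can be
  // grouped purely by kind, ignoring paint order.
  const opaque = new Map<Kind, Batch>();
  // Blended primitives: paint order matters, so only merge into a recent
  // batch if nothing of another kind overlaps in between.
  const blended: Batch[] = [];

  for (const p of prims) {
    if (p.opaque) {
      const b = opaque.get(p.kind) ?? { kind: p.kind, prims: [], rects: [] };
      b.prims.push(p);
      b.rects.push(p.rect);
      opaque.set(p.kind, b);
      continue;
    }

    let target: Batch | undefined;
    // Walk backwards over the most recent blended batches (10 in the text above).
    for (let i = blended.length - 1; i >= Math.max(0, blended.length - lookBack); i--) {
      const b = blended[i];
      if (b.kind === p.kind) {
        target = b; // compatible batch with nothing blocking in between
        break;
      }
      // A batch of a different kind that overlaps this primitive means
      // reordering past it would change the result, so stop searching.
      if (b.rects.some((r) => overlaps(r, p.rect))) break;
    }

    if (!target) {
      target = { kind: p.kind, prims: [], rects: [] };
      blended.push(target); // default: start a new batch
    }
    target.prims.push(p);
    target.rects.push(p.rect);
  }

  return { opaque: [...opaque.values()], blended };
}
```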

In another episode I’ll describe how we pushed this one step further and made it possible to segment primitives into the opaque and non-opaque parts and further reduce the amount of blended pixels.

Notable WebRender and Gecko changes
  • Henrik added reftests for the ImageRendering property: (1), (2) and (3).
  • Bobby changed the way pref changes propagate through WebRender.
  • Bobby improved the texture cache debug view.
  • Bobby improved the texture cache eviction heuristics.
  • Chris fixed the way WebRender activation interacts with the progressive feature rollout system.
  • Chris added a marionette test running on a VM with a GPU.
  • Kats and Jamie experimented with various solutions to a driver bug on some Adreno GPUs.
  • Kvark removed some smuggling of clipIds for clip and reference frame items in Gecko’s displaylist building code. This fixed a few existing bugs: (1), (2) and (3).
  • Matt and Jeff added some new telemetry and analyzed the results.
  • Matt added a debugging indicator that moves when a frame takes too long to help with catching performance issues.
  • Andrew landed his work on surface sharing for animated images which fixed most of the outstanding performance issues related to animated images.
  • Andrew completed animated image frame recycling.
  • Lee fixed a bug with sub-pixel glyph positioning.
  • Glenn fixed a crash.
  • Glenn fixed another crash.
  • Glenn reduced the need for full world rect calculation during culling to make it easier to do clustered culling.
  • Nical switched device coordinates to signed integers in order to be able to meet some of the needs of blob image recoordination and simplify some code.
  • Nical added some debug checks to avoid global OpenGL states from the embedder causing issues in WebRender.
  • Sotaro fixed an intermittent timeout.
  • Sotaro fixed a crash on Android related to SurfaceTexture.
  • Sotaro improved the frame synchronization code.
  • Sotaro cleaned up the frame synchronization code some more.
  • Timothy ported img AltFeedback items to WebRender.
Ongoing work
  • Glenn is about to land a major piece of his tiled picture caching work which should solve a lot of the remaining issues with pages that generate too much GPU work.
  • Matt, Dan and Jeff keep investigating CPU performance, a lot of which revolves around the many memory copies generated by rustc when moving structures on the stack.
  • Doug is investigating talos performance issues with document splitting.
  • Nical is making progress on improving the invalidation of tiled blob images during scrolling.
  • Kvark keeps catching smugglers in gecko’s displaylist building code.
  • Kats and Jamie are hunting driver bugs on Android.
Enabling WebRender in Firefox Nightly 

In about:config, set the pref “gfx.webrender.all” to true and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.
Note that it is possible to log in with a github account.


The Firefox Frontier: Translate the Web

Mozilla planet - Wed, 21/11/2018 - 15:56

Browser extensions make Google Translate even easier to use. With 100+ languages at the ready, Google Translate is the go-to translation tool for millions of people around the world. But … Read more

The post Translate the Web appeared first on The Firefox Frontier.


Mozilla Privacy Blog: The EU Terrorist Content Regulation – a threat to the ecosystem and our users’ rights

Mozilla planet - Wed, 21/11/2018 - 11:08

In September the European Commission proposed a new regulation that seeks to tackle the spread of ‘terrorist’ content on the internet. As we’ve noted already, the Commission’s proposal would seriously undermine internet health in Europe, by forcing companies to aggressively suppress user speech with limited due process and user rights safeguards. Here we unpack the proposal’s shortfalls, and explain how we’ll be engaging on it to protect our users and the internet ecosystem.  

As we’ve highlighted before, illegal content is symptomatic of an unhealthy internet ecosystem, and addressing it is something that we care deeply about. To that end, we recently adopted an addendum to our Manifesto, in which we affirmed our commitment to an internet that promotes civil discourse, human dignity, and individual expression. The issue is also at the heart of our recently published Internet Health Report, through its dedicated section on digital inclusion.

At the same time lawmakers in Europe have made online safety a major political priority, and the Terrorist Content regulation is the latest legislative initiative designed to tackle illegal and harmful content on the internet. Yet, while terrorist acts and terrorist content are serious issues, the response that the European Commission is putting forward with this legislative proposal is unfortunately ill-conceived, and will have many unintended consequences. Rather than creating a safer internet for European citizens and combating the serious threat of terrorism in all its guises, this proposal would undermine due process online; compel the use of ineffective content filters; strengthen the position of a few dominant platforms while hampering European competitors; and, ultimately, violate the EU’s commitment to protecting fundamental rights.

Many elements from the proposal are worrying, including:

  • The definition of ‘terrorist’ content is extremely broad, opening the door for a huge amount of over-removal (including the potential for discriminatory effect) and the resulting risk that much lawful and public interest speech will be indiscriminately taken down;
  • Government-appointed bodies, rather than independent courts, hold the ultimate authority to determine illegality, with few safeguards in place to ensure these authorities act in a rights-protective manner;
  • The aggressive one hour timetable for removal of content upon notification is barely feasible for the largest platforms, let alone the many thousands of micro, small and medium-sized online services whom the proposal threatens;
  • Companies could be forced to implement ‘proactive measures’ including upload filters, which, as we’ve argued before, are neither effective nor appropriate for the task at hand; and finally,
  • The proposal risks making content removal an end in itself, simply pushing terrorist content off the open internet rather than tackling the underlying serious crimes.

As the European Commission acknowledges in its impact assessment, the severity of the measures that it proposes could only ever be justified by the serious nature of terrorism and terrorist content. On its face, this is a plausible assertion. However, the evidence base underlying the proposal does not support the Commission’s approach.  For as the Commission’s own impact assessment concedes, the volume of ‘terrorist’ content on the internet is on a downward trend, and only 6% of Europeans have reported seeing terrorist content online, realities which heighten the need for proportionality to be at the core of the proposal. Linked to this, the impact assessment predicts that an estimated 10,000 European companies are likely to fall within this aggressive new regime, even though data from the EU’s police cooperation agency suggests terrorist content is confined to circa 150 online services.

Moreover, the proposal conflates online speech with offline acts, despite the reality that the causal link between terrorist content online, radicalisation, and terrorist acts is far more nuanced. Within the academic research around terrorism and radicalisation, no clear and direct causal link between terrorist speech and terrorist acts has been established (see in particular, research from UNESCO and RAND). With respect to radicalisation in particular, the broad research suggests exposure to radical political leaders and socio-economic factors are key components of the radicalisation process, and online speech is not a determinant. On this basis, the high evidential bar that is required to justify such a serious interference with fundamental rights and the health of the internet ecosystem is not met by the Commission. And in addition, the shaky evidence base demands that the proposal be subject to far greater scrutiny than it has been afforded thus far.

Besides these concerns, it is saddening that this new legislation is likely to create a legal environment that will entrench the position of the largest commercial services that have the resources to comply, undermining the openness on which a healthy internet thrives. By setting a scope that covers virtually every service that hosts user content, and a compliance bar that only a handful of companies are capable of reaching, the new rules are likely to engender a ‘retreat from the edge’, as smaller, agile services are unable to bear the cost of competing with the established players. In addition, the imposition of aggressive take-down timeframes and automated filtering obligations is likely to further diminish Europe’s standing as a bastion for free expression and due process.

Ultimately, the challenge of building sustainable and rights-protective frameworks for tackling terrorism is a formidable one, and one that is exacerbated when the internet ecosystem is implicated. With that in mind,  we’ll continue to highlight how the nuanced interplay between hosting services, terrorist content, and terrorist acts mean this proposal requires far more scrutiny, deliberation, and clarification. At the very least, any legislation in this space must include far greater rights protection, measures to ensure that suppression of online content doesn’t become an end in itself, and a compliance framework that doesn’t make the whole internet march to the beat of a handful of large companies.

Stay tuned.

The post The EU Terrorist Content Regulation – a threat to the ecosystem and our users’ rights appeared first on Open Policy & Advocacy.


Chris AtLee: PyCon Canada 2018

Mozilla planet - Tue, 20/11/2018 - 20:20

I'm very happy to have had the opportunity to attend and speak at PyCon Canada here in Toronto last week.

PyCon has always been a very well organized conference. There are a wide range of talks available, even on topics not directly related to Python. I've attended previous PyCon events in the past, but never the Canadian one!

My talk was titled How Mozilla uses Python to Build and Ship Firefox. The slides are available here if you're interested. I believe the sessions were recorded, but they're not yet available online. I was happy with the attendance at the session, and the questions during and after the talk.

As part of the talk, I mentioned how Release Engineering is a very distributed team. Afterwards, many people had followup questions about how to work effectively with remote teams, which gave me a great opportunity to recommend John O'Duinn's new book, Distributed Teams.

Some other highlights from the conference:

  • CircuitPython: Python on hardware I really enjoyed learning about CircuitPython, and the work that Adafruit is doing to make programming and electronics more accessible.

  • Using Python to find Russian Twitter troll tweets aimed at Canada A really interesting dive into 3 million tweets that FiveThirtyEight made available for analysis.

  • PEP 572: The Walrus Operator My favourite quote from the talk: "Dictators are people too!" If you haven't followed Python governance, Guido stepped down as BDFL (Benevolent Dictator for Life) after the PEP was resolved. Dustin focused much of his talk about how we in the Python community, and more generally in tech, need to treat each other better.

  • Who's There? Building a home security system with Pi & Slack A great example of how you can get started hacking on home automation with really simple tools.

  • Froilán Irzarry's Keynote talk on the second day was really impressive.

  • You Don't Need That! Design patterns in Python My main takeaway from this was that you shouldn't try and write Python code as if it were Java or C++ :) Python has plenty of language features built-in that make many classic design patterns unnecessary or trivial to implement.

  • Numpy to PyTorch Really neat to learn about PyTorch, and leveraging the GPU to accelerate computation.

  • Flying Python - A reverse engineering dive into Python performance Made me want to investigate Balrog performance, and also look at ways we can improve Python startup time. Some neat tips about examining disassembled Python bytecode.

  • Working with Useless Machines Hilarious talk about (ab)using IoT devices.

  • Gathering Related Functionality: Patterns for Clean API Design I really liked his approach for creating clean APIs for things like class constructors. He introduced a module called variants which lets you write variants of a function / class initializer to support varying types of parameters. For example, a common pattern is to have a function that takes either a string path to a file, or a file object. Instead of having one function that supports both types of arguments, variants allows you to make distinct functions for each type, but in a way that makes it easy to share underlying functionality and also not clutter your namespace.
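
For readers unfamiliar with that pattern, here is a rough TypeScript analog of the idea; this is not the Python variants library's API, just a hypothetical illustration of "distinct functions per input type, sharing one core" instead of a single function that inspects its argument:

```typescript
import * as fs from "fs";

// Shared core: the real work happens once, on already-loaded text.
function parseConfig(text: string): Record<string, string> {
  return Object.fromEntries(
    text
      .split("\n")
      .filter((line) => line.includes("="))
      .map((line) => line.split("=", 2) as [string, string])
  );
}

// Primary entry point takes a file path; a named variant takes the contents
// directly, so callers pick the right function instead of the function
// guessing what kind of argument it received.
export const loadConfig = Object.assign(
  (path: string) => parseConfig(fs.readFileSync(path, "utf8")),
  {
    fromString: (text: string) => parseConfig(text),
  }
);

// Usage: loadConfig("app.conf") or loadConfig.fromString("key=value");
```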


Hacks.Mozilla.Org: LPCNet: DSP-Boosted Neural Speech Synthesis

Mozilla planet - Tue, 20/11/2018 - 17:51


LPCNet is a new project out of Mozilla’s Emerging Technologies group — an efficient neural speech synthesiser with reduced complexity over some of its predecessors. Neural speech synthesis models like WaveNet have already demonstrated impressive speech synthesis quality, but their computational complexity has made them hard to use in real-time, especially on phones. In a similar fashion to the RNNoise project, our solution with LPCNet is to use a combination of deep learning and digital signal processing (DSP) techniques.


Figure 1: Screenshot of a demo player that demonstrates the quality of LPCNet-synthesized speech.

LPCNet can help improve the quality of text-to-speech (TTS), low bitrate speech coding, time stretching, and more. You can hear the difference for yourself in our LPCNet demo page, where LPCNet and WaveNet speech are generated with the same complexity. The demo also explains the motivations for LPCNet, shows what it can achieve, and explores its possible applications.

You can find an in-depth explanation of the algorithm used in LPCNet in this paper.

The post LPCNet: DSP-Boosted Neural Speech Synthesis appeared first on Mozilla Hacks - the Web developer blog.


Mozilla Privacy Blog: Mozilla files comments on NTIA’s proposed American privacy framework

Mozilla planet - Tue, 20/11/2018 - 17:32

Countries around the world are considering how to protect their citizens' data – but there continues to be a lack of comprehensive privacy protections for American internet users. That could change, however. The National Telecommunications and Information Administration (NTIA) recently proposed an outcome-based framework for consumer data privacy, reflecting internationally accepted principles for privacy and data protection. Mozilla believes that the NTIA framework represents a good start toward addressing many of these challenges, and we offered our thoughts to help Americans realize the same protections enjoyed by users in other countries around the world (you can see all the comments that were received on the NTIA's website).

Mozilla has always been committed to strong privacy protections, user controls, and security tools in our policies and in the open source code of our products. We are pleased that the NTIA has embraced similar considerations in its framework, including the need for user control over the collection and use of information; minimization in the collection, storage, and the use of data; and security safeguards for personal information. While we generally support these principles, we also encourage the NTIA to pursue a more granular set of outcomes to provide more guidance for covered entities.

To supplement the proposed framework, our submission encourages the NTIA to adopt the following additional recommendations to protect user privacy:

  1. Include the explicit right to object to the processing of personal data as a core component of reasonable user control.
  2. Mandate the use of security and role-based access controls and protections against unlawful disclosures.
  3. Close the current gap in FTC oversight to cover telecommunications carriers and major non-profits that handle significant amounts of personal information.
  4. Expand FTC authority to provide the agency with the ability to make rules and impose civil penalties to deter future violations of consumer privacy.
  5. Provide the FTC with more resources and staff to better address threats in a rapidly evolving field.

In its request for comment, the NTIA stated that it believes that the United States should lead on privacy. The framework outlined by the agency represents a promising start to those efforts, and we are encouraged that the NTIA has sought the input of a broad variety of stakeholders at this pivotal juncture. But if the U.S. plans to lead on privacy, it must invest accordingly and provide the FTC with the legal tools and resources to demonstrate that commitment. Ultimately, this will lead to long-term benefits for users and internet-based businesses, providing greater certainty for data-driven entities and flexibility to address future threats.

The post Mozilla files comments on NTIA’s proposed American privacy framework appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Decentralizing Social Interactions with ActivityPub

Mozilla planet - ti, 20/11/2018 - 16:14

In the Dweb series, we are covering projects that explore what is possible when the web becomes decentralized or distributed. These projects aren’t affiliated with Mozilla, and some of them rewrite the rules of how we think about a web browser. What they have in common: These projects are open source and open for participation, and they share Mozilla’s mission to keep the web open and accessible for all.

Social websites first got us talking and sharing with our friends online, then turned into echo-chamber content silos, and finally emerged in their mature state as surveillance capitalist juggernauts, powered by the effluent of our daily lives online. The tail isn’t just wagging the dog, it’s strangling it. However, there just might be a way forward that puts users back in the driver’s seat: a new set of specifications for decentralizing social activity on the web. Today you’ll get a helping hand into that world from Darius Kazemi, renowned bot-smith and Mozilla Fellow.

– Dietrich Ayala

ActivityPub logo

Introducing ActivityPub

Hi, I’m Darius Kazemi. I’m a Mozilla Fellow and decentralized web enthusiast. In the last year I’ve become really excited about ActivityPub, a W3C standard protocol that describes ways for different social network sites (loosely defined) to talk to and interact with one another. You might remember the heyday of RSS, when a user could subscribe to almost any content feed in the world from any number of independently developed feed readers. ActivityPub aims to do for social network interactions what RSS did for content.

Architecture

ActivityPub enables a decentralized social web, where a network of servers interact with each other on behalf of individual users/clients, very much like email operates at a macro level. On an ActivityPub compliant server, individual user accounts have an inbox and an outbox that accept HTTP GET and POST requests via API endpoints. They usually live somewhere like https://social.example/users/dariusk/inbox and https://social.example/users/dariusk/outbox, but they can really be anywhere as long as they are at a valid URI. Individual users are represented by an Actor object, which is just a JSON-LD file that gives information like username and where the inbox and outbox are located so you can talk to the Actor. Every message sent on behalf of an Actor has the link to the Actor’s JSON-LD file so anyone receiving the message can look up all the relevant information and start interacting with them.
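To make the shape of that Actor document concrete, here is a minimal sketch using the example URLs above. The values are made up, and a real server such as Mastodon also expects extras like a public key for signed requests and followers/following collections:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://social.example/users/dariusk",
  "type": "Person",
  "preferredUsername": "dariusk",
  "inbox": "https://social.example/users/dariusk/inbox",
  "outbox": "https://social.example/users/dariusk/outbox"
}

Anyone who fetches this file knows where to POST activities (the inbox) and where to GET the Actor’s public activities (the outbox).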

A simple server to send ActivityPub messages

One of the most popular social network sites that uses ActivityPub is Mastodon, an open source, community-owned, and ad-free alternative to services like Twitter. But Mastodon is a huge, complex project and not the best introduction to the ActivityPub spec for a developer. So I started with a tutorial written by Eugen Rochko (the principal developer of Mastodon) and created a partial reference implementation in Node.js and Express.js called the Express ActivityPub server.

The purpose of the software is to serve as the simplest possible starting point for developers who want to build their own ActivityPub applications. I picked what seemed to me like the smallest useful subset of ActivityPub features: the ability to publish an ActivityPub-compliant feed of posts that any ActivityPub user can subscribe to. Specifically, this is useful for non-interactive bots that publish feeds of information.

To get started with the Express ActivityPub server in a local development environment, clone the repository and install its dependencies:

git clone https://github.com/dariusk/express-activitypub.git
cd express-activitypub/
npm i

To truly test the server, it needs to be associated with a valid, https-enabled domain or subdomain. For local testing I like to use ngrok, which provides temporary public domains. First, install ngrok using their instructions (you have to sign in, but the free tier is sufficient for local debugging). Next run:

ngrok http 3000

This will show a screen on your console that includes a domain like abcdef.ngrok.io. Make sure to note that down, as it will serve as your temporary test domain as long as ngrok is running. Keep this running in its own terminal session while you do everything else.

Then go to config.json in the express-activitypub directory and update the DOMAIN field to whatever abcdef.ngrok.io domain ngrok gave you (don’t include the http://), and set USER to some username and PASS to some password. These are the administrative credentials required for creating new users on the server. When testing locally via ngrok you don’t need to specify PRIVKEY_PATH or CERT_PATH.
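For reference, the edited config.json might look roughly like this for the ngrok setup described here. These are just the fields mentioned in this walkthrough (the file in the repository may contain more), and the values are placeholders:

{
  "USER": "admin",
  "PASS": "choose-a-strong-password",
  "DOMAIN": "abcdef.ngrok.io",
  "PRIVKEY_PATH": "",
  "CERT_PATH": ""
}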

Next run your server:

node index.js

Go to https://abcdef.ngrok.io/admin (again, replace the subdomain) and you should see an admin page. You can create an account here by giving it a name and then entering the admin user/pass when prompted. Try making an account called “test” — it will give you a long API key that you should save somewhere. Then open an ActivityPub client like Mastodon’s web interface and try following @test@abcdef.ngrok.io. It should find the account and let you follow!
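If the follow doesn’t resolve, a quick sanity check is to query the WebFinger endpoint directly, since that is how Mastodon discovers remote accounts. This assumes the server answers at the standard well-known path from RFC 7033; the response should be a small JSON document pointing at the Actor’s URL:

curl "https://abcdef.ngrok.io/.well-known/webfinger?resource=acct:test@abcdef.ngrok.io"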

Back on the admin page, you’ll notice another section called “Send message to followers”. Fill this in with “test” as the username and the API key you just noted down as the password, then enter a message. It should look like this:

Screenshot of form

Hit “Send Message” and then check your ActivityPub client. In the home timeline you should see your message from your account, like so:

Post in Mastodon mobile web view

And that’s it! It’s not incredibly useful on its own but you can fork the repository and use it as a starting point to build your own services. For example, I used it as the foundation of an RSS-to-ActivityPub conversion service that I wrote (source code here). There are of course other services that could be built using this. For example, imagine a replacement for something like MailChimp where you can subscribe to updates for your favorite band, but instead of getting an email, everyone who follows an ActivityPub account will get a direct message with album release info. Also it’s worth browsing the predefined Activity Streams Vocabulary to see what kind of events the spec supports by default.
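For a sense of what those vocabulary terms look like on the wire, here is a rough sketch of the kind of Create activity a server delivers to each follower’s inbox when a message is posted. The type and property names come from the Activity Vocabulary; the URLs are placeholders rather than the routes this particular server uses, and real deliveries also carry an HTTP signature:

{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://abcdef.ngrok.io/activities/123",
  "type": "Create",
  "actor": "https://abcdef.ngrok.io/users/test",
  "to": ["https://www.w3.org/ns/activitystreams#Public"],
  "object": {
    "id": "https://abcdef.ngrok.io/notes/123",
    "type": "Note",
    "attributedTo": "https://abcdef.ngrok.io/users/test",
    "content": "Hello from my ActivityPub bot!"
  }
}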

Learn More

There is a whole lot more to ActivityPub than what I’ve laid out here, and unfortunately there aren’t a lot of learning resources beyond the specs themselves and conversations on various issue trackers.

If you’d like to know more about ActivityPub, you can of course read the ActivityPub spec. It’s important to know that while the ActivityPub spec lays out how messages are sent and received, the different types of messages are specified in the Activity Streams 2.0 spec, and the actual formatting of the messages that are sent is specified in the Activity Streams Vocabulary spec. It’s important to familiarize yourself with all three.

You can join the Social Web Incubator Community Group, a W3C Community Group, to participate in discussions around ActivityPub and other social web tech standards. They hold monthly meetings, listed on the wiki page, that you can dial into.

And of course if you’re on an ActivityPub social network service like Mastodon or Pleroma, the #ActivityPub hashtag there is always active.

The post Decentralizing Social Interactions with ActivityPub appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Ludovic Hirlimann: Capitole du Libre 2018 report

Mozilla planet - ti, 20/11/2018 - 12:36

This year, as in previous years, I “organized” Mozilla’s presence at Capitole du Libre. I put “organized” in quotes because at the last minute our friend Rzr joined us to help staff the booth and present what Mozilla is doing these days around IoT. That meant we could keep the booth staffed and still go give our talks without leaving it empty.
The many goodies on the booth, plus the IoT plant and lights, drew a crowd that, without having counted, came close to previous years’ numbers, except during Tristan’s talk, because everyone was listening to it. On the booth we presented Common Voice, and to our great surprise many visitors already knew about it. We also showed Focus and Firefox on mobile. Two recurring questions came up: how to find Firefox on F-Droid, and why there was no F-Droid download link on our official pages.

I would like to thank El doctor for the Common Voice stickers, teoli for the MDN t-shirts, and Sylvestre and Pascal for the many Firefox and Firefox Nightly stickers; the organizers and their sponsors for trusting us both with a spot in the village du libre and with the talks; and the volunteers, without whom none of it would have happened. Finally, a big thank you to Linkid, who played with my children at the LAN party booth all weekend long.

On a more personal note, I was able to attend a really interesting talk by Vincent on importing a photographic archive into the Wiki galaxy. I also ran into a university friend I had lost touch with for 20 years; it was a real pleasure to see you again, Christian. Finally, I am very happy to have been able to take part in the nos-oignons fundraising campaign.

Categorieën: Mozilla-nl planet

Andy McKay: BC going electric

Mozilla planet - ti, 20/11/2018 - 09:00

Today the BC Government announced that it is planning to introduce legislation so that:

"All new cars and trucks sold in B.C. in the year 2040 will have to be zero-emission vehicles"

CBC

I'm unreasonably interested and excited by this. The simple fact is that we need to do this and car companies will not do it on their own; they are too invested in the existing carbon economy. My first thought is that 2040 is way too far off, but this really isn't a simple change, and the political and societal desire to change is not yet strong enough to force a sudden one.

Other jurisdictions are making similar noises but haven't introduced legislation yet; two that talk about it the most are the UK and California. There are some interesting questions in BC:

  • BC is big: twice the size of California and over four times the size of the UK. It's full of a lot of nothing, and you can go a long way without encountering anything. This problem is common to most of Canada, but the limited range of current electric vehicles is more obvious when it's 400 km to the next charging station.
  • BC gets really cold up north, colder than California or the UK. This affects electric vehicle batteries and charging. Again, this problem affects all of Canada.
  • Site C isn't perfect, but we'll need the electricity.
  • Remind me why we should have a pipeline to ship oil from Alberta out again?

Yes, I am focusing on electric vehicles here, because they make up the majority of non-oil-burning cars out there. Notably, Vancouver has the first hydrogen fuel station in Canada. We might need hydrogen in other parts of BC, or perhaps there will be another answer by then, but for the moment an electric future looks most likely.

Also Tesla is pretty amazing.

Anyways, I hope this legislation happens and if so it sets us down an interesting path. A slow one, but there are some real problems to solve on the way.

Categorieën: Mozilla-nl planet
