Ian Bicking: The Firefox Experiments I Would Have Liked To Try

Mozilla planet - Mon, 04/03/2019 - 07:00

I have been part of the Firefox Test Pilot team for several years. I had a long list of things I wanted to build. Some I didn’t personally want to build, but I thought they were interesting ideas. I didn’t get very far through this list at all, and now that Test Pilot is being retired I am unlikely to get to them in the future.

Given this I feel I have to move this work out of my head, and publishing a list of ideas seems like an okay way to do that. Many of these ideas were inspired by something I saw in the wild, sometimes a complete product (envy on my part!), or the seed of an idea embedded in some other product.

The experiments are a spread: some are little features that seem potentially useful. Others are features seen elsewhere that show promise from user research, but we could only ship them with confidence if we did our own analysis. Some of these are just ideas for how to explore an area more deeply, without a clear product in mind.

Test Pilot’s purpose was to find things worth shipping in the browser, which means some of these experiments aren’t novel, but there is an underlying question: would people actually use it? We can look at competitors to get ideas, but we have to ship something ourselves if we want to analyze the benefit.

Sticky Reader Mode

mockup of Sticky Reader Mode

Give Reader Mode in Firefox a preference to make it per-domain sticky. E.g., if I use Reader Mode on nytimes.com, then when I visit an article on nytimes.com in the future it’ll automatically open in Reader Mode. (The nytimes.com homepage would not be a candidate for that mode.)

I made an experiment in sticky-reader-mode, and I think it works really nicely. It changes the browsing experience significantly, and most importantly it doesn’t require frequent proactive engagement to change behavior. Lots of these proposed ideas are tools that require high engagement from the user, and if you don’t invoke the tool they do nothing. In practice no one (myself included) remembers to invoke these tools. Once you click the Sticky Reader Mode preference on a site, you are opted in to the new experience with no further action required.

There are a bunch of similar add-ons. Sticky Reader Mode works a bit better than most because of its interface, and it will push you directly into Reader Mode without rendering the normal page. But it does this by using APIs that are not public to normal WebExtensions. As a result it can’t be shipped outside Test Pilot, and can’t go in addons.mozilla.org. So… just trust me, it’s great.
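
For illustration, here is a minimal sketch of how the per-domain stickiness could be approximated with only public WebExtension APIs (storage plus tabs.toggleReaderMode). Unlike the experiment above, it cannot avoid rendering the normal page first, and it only works on pages Firefox already considers reader-able:

```typescript
// Sketch only: per-domain "sticky" Reader Mode with public WebExtension APIs.
// Assumes an MV2 background script; `browser` is provided by the extension runtime.
declare const browser: any;

async function isSticky(hostname: string): Promise<boolean> {
  const { stickyDomains = [] } = await browser.storage.local.get("stickyDomains");
  return stickyDomains.includes(hostname);
}

browser.tabs.onUpdated.addListener(async (tabId: number, changeInfo: any, tab: any) => {
  // `isArticle` is Firefox's own "reader-able" heuristic for the tab.
  if (changeInfo.status !== "complete" || !tab.isArticle) return;
  if (await isSticky(new URL(tab.url).hostname)) {
    try {
      await browser.tabs.toggleReaderMode(tabId);
    } catch (e) {
      // The page turned out not to be reader-able; ignore.
    }
  }
});
```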

Recently I’ve come upon Brave Speed Reader which is similar, but without per-site opt-in, and using machine learning to identify articles.

Cloud Browser

mockup of a Cloud Browser

Run a browser/user-agent in the cloud and use a mobile view as a kind of semantic or parsed view on that user agent (the phone would just control the browser that is hosted on the cloud). At its simplest we just take the page, simplify it in a few ways, and send it on - similar to what Opera Mini does. The approach lends itself to a variety of task-oriented representations of remote content.

When I first wrote this down I had just stared at my phone while it took 30 seconds to show me a 404 page. The browser probably knew after a couple seconds that it was a 404 but it acted as a rendering engine and not a user agent, so the browser insisted on faithfully rendering the useless not found page.

Obviously running a full browser instance in the cloud is resource hungry and finicky but I think we could ignore those issues while testing. Those are hard but solved operational issues.

Prior art: Opera Mini does some of this. Puffin is specifically cloud rendering for mobile. Light Point does the same for security reasons.

I later encountered brow.sh which is another interesting take on this (specifically with html.brow.sh).

This is a very big task, but I still believe there’s tremendous potential in it. Most of my concepts are not mobile-based, in part because I don’t like mobile, I don’t like myself when using a mobile device, and it’s not something I want to put my energy into. But I still like this idea.

Modal Page Actions

mockup of Modal Page Actions

This was tangentially inspired by Vivaldi’s Image Properties, not because of the interface, but thinking about how to fit high-information inspection tools into the browser.

The idea: instead of context menus, page actions, or other interaction points that are part of the “chrome”, implement one overlay interface: the do-something-with-this-page interface. Might also be do-something-with-this-element (e.g. replacing the 7 image-related context menu items: View Image, Copy Image, Copy Image Location, Save Image As, Email Image, Set As Desktop Background, and View Image Info).

The interface would be an overlay onto the page, similar to what happens when you start Screenshots:

Screenshots interface

Everything that is now in the Page Action menu (the ... in the URL bar), or in the context menu, would be available here. Some items might have a richer interface, e.g., Send Tab To Device would show the devices directly instead of using a submenu. Bookmarking would include some inline UI for managing the resulting bookmark, and so on.

There was some pushback because of the line of death – that is, the idea that all trusted content must clearly originate from the browser chrome, and not the content area. I do not believe in the Line of Death: it’s something users could use to form trust, but I don’t believe they actually do use it (further user research required).

The general pattern is inspired by mobile interfaces which are typically much more modal than desktop interfaces. Modal interfaces have gotten a bad rap, I think somewhat undeserved: modal interfaces are also interfaces that guide you through processes, or ask you to explicitly dismiss the interface. It’s not unreasonable to expect someone to finish what they start.

Find+1

mockup of Find + 1

We have find-in-page but what about find-in-anything-linked-from-this-page?

Hit Cmd-Shift-F and you get an interface to do that. All the linked pages will be loaded in the background and as you search we show snippets of matching pages. Clicking on a snippet opens or focuses the tab and goes to where the search term was found.

I started experimenting in find-plus-one and encountered some challenges: hidden tabs aren’t good workers, loading pages in the background takes a lot of grinding in Firefox, and most links on pages are stupid (e.g., I don’t want to search your Careers page). An important building block would be a way to identify the important (non-navigational) parts of a page, and maybe lighter-weight ways to load pages (in other projects I’ve used CSP injection). The Copy Keeper concept did come about while I experimented with this.

A simpler implementation of this might just do a text search of all your open tabs, which would be technically easier and mostly an exercise in making a good representation of the results.
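
A rough sketch of that simpler version, assuming the WebExtension tabs API and permission to inject a small search snippet into each tab (pages that disallow injection are skipped):

```typescript
// Sketch only: text-search every open tab by injecting a tiny matcher into each one.
declare const browser: any;

async function searchOpenTabs(term: string): Promise<{ tabId: number; title: string }[]> {
  const needle = JSON.stringify(term.toLowerCase());
  const tabs = await browser.tabs.query({});
  const hits: { tabId: number; title: string }[] = [];
  for (const tab of tabs) {
    try {
      const [found] = await browser.tabs.executeScript(tab.id, {
        code: `document.body.innerText.toLowerCase().includes(${needle})`,
      });
      if (found) hits.push({ tabId: tab.id, title: tab.title });
    } catch (e) {
      // No permission to inject into this tab (about:, AMO, etc.); skip it.
    }
  }
  return hits;
}
```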

Your Front Page

mockup of Your Front Page

Create a front page of news from the sites you already visit. Like an RSS reader, but prepopulated with your history. This creates an immediate well-populated experience.

My initial thought was to use ad hoc parsers for popular news sites, and to run an experiment with just a long whitelist of news providers.

I got the feedback: why not just use RSS? Good question: I thought RSS was kind of passé, but I hadn’t looked for myself. I went on to do some analysis of RSS, and found it available for almost all news sites. The autodetection (<link rel=alternate>) is not as widely available, and it requires manual searching to find many feeds. Still, RSS is a good way to get an up-to-date list of articles and their titles. Article content isn’t well represented, and other article metadata is inaccurate or malformed (e.g., there are no useful tags). So using RSS would be a very reasonable discovery mechanism, but an “RSS reader” doesn’t seem like a good direction on the current web.
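
For reference, the autodetection itself is only a little DOM work; a minimal sketch (plain DOM, no extension APIs assumed):

```typescript
// Sketch only: find feeds a page advertises via <link rel="alternate">.
function discoverFeeds(doc: Document): { title: string; url: string }[] {
  const selector =
    'link[rel="alternate"][type="application/rss+xml"], ' +
    'link[rel="alternate"][type="application/atom+xml"]';
  return Array.from(doc.querySelectorAll<HTMLLinkElement>(selector)).map(link => ({
    title: link.title || doc.title,
    url: new URL(link.getAttribute("href") || "", doc.baseURI).toString(),
  }));
}
```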

Page Archive

This is bringing back old functionality from Page Shot, a project of mine which morphed into Firefox Screenshots: save full DOM copies of pages. What used to be fairly novel is now well represented by several projects (e.g., WebMemex or World Brain Memex).

Unfortunately I have never been able to really make this kind of tool part of my own day-to-day behavior, and I’ve become skeptical it can work for a general populace. But maybe there’s a way to package up this functionality that is more accessible, or happens more implicitly. I forked a version of Page Shot as pagearchive a while ago, with this in mind, but I haven’t (and likely won’t) come back to it.

Personal Historical Archive

This isn’t really a product idea, but instead an approach to developing products.

One can imagine many tools that directly interact or learn from the content of your browsing. There is both a privacy issue here and a privacy opportunity: looking at this data is creepy, but if the tools live in your user agent (that belongs to you and hosts your information locally) then it’s not so creepy.

But it’s really hard to make experiments on this because you need a bunch of data. If you build a tool that starts watching your browsing then it will only slowly build up interesting information. The actual information that is already saved in browser history is interesting, but in my experience it is too limited and of poor quality. For instance, it is quite hard to build up a navigational path from the history when you use multiple tabs.

A better iterative development approach would be one where you have a static set of all the information you might want, and you can apply tools to that information. If you find something good then later you can add new data collection to the browser, secure in the knowledge that it’s going to find interesting things.

I spent quite a bit of effort on this, and produced personal-history-archive. It’s something I still want to come back to. It’s a bit of a mess, because at various times it was retrofitted to collect historical information, collect it on an ongoing basis, or collect it when driven by a script. I also tried to build tools in parallel for doing analysis on the resulting database.

This is also a byproduct of experimentation with machine learning. I wanted to apply things I was learning to browser data, but the data I wanted wasn’t there. I spent all my time collecting and cleaning data, and ended up spending only a small amount of time analyzing the data. I suspect I’m not the only one who has done this.

Navigational Breadcrumbs

mockup of Navigational Breadcrumbs

When I click on a link I lose the reminder of why I clicked on it. What on the previous page led me to click on this? Was I promised something? Are there sibling links that I might want to continue to directly instead of going back and selecting another link?

This tool would give you additional information about the page you are on, how you got there, and given where you came from, where you might go next. Would this be a sidebar? Overlay content? In a popup? I’m not sure.

Example: using this, if I click on a link from Reddit I will be able to see the title of the Reddit post (which usually doesn’t match the document title), and a link to comments on the page. If I follow a link from Twitter, I’ll be able to see the Tweet I came from.

This could be interesting paired with link preview (like a tentative forward). Maybe the mobile browser Linkbubbles (now integrated into Brave) has some ideas to offer.

Technically this will use some of the techniques from Personal History Archive, which tracks link sources.
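
A sketch of the minimal bookkeeping involved, assuming the WebExtension tabs API; openerTabId is the hook that ties a new tab back to the page that spawned it:

```typescript
// Sketch only: remember which page a new tab came from.
declare const browser: any;

const tabOrigins = new Map<number, { fromUrl: string; fromTitle: string }>();

browser.tabs.onCreated.addListener(async (tab: any) => {
  if (tab.openerTabId === undefined) return;
  const opener = await browser.tabs.get(tab.openerTabId);
  tabOrigins.set(tab.id, { fromUrl: opener.url, fromTitle: opener.title });
});
// A breadcrumbs view would look up tabOrigins.get(currentTabId) to show
// "you came here from ..." (and could chain lookups to go further back).
```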

This is based on the train of thought I wrote down in an HN comment – itself a response to Freeing the Web from the Browser.

I want to try this still, and have started a repo crossnav but haven’t put anything there yet. I think even some naive approaches could work, just trying to detect the category of link and the related links (e.g., on Reddit the category is other submissions, and the related links are things like comments).

Copy Keeper

mockup of Copy Keeper

A notebook/logbook that is filled in every time you copy from a web page. When you copy it records (locally):

  • Text of selection
  • HTML of selection
  • Screenshot of the block element around the selection
  • Text around selection
  • Page URL and nearest anchor/id
  • Page title
  • Datetime

This overloads “copy” to mean “remember”.

Clips would be searchable, and could be moved back to the clipboard in different forms (text, HTML, image, bibliographical reference, source URL). Maybe clips would be browsable in a sidebar (maybe the sidebar has to be open for copies to be collected), or clips could be browsed in a normal tab (Library-style).

I created a prototype in copy-keeper. I thought it was interesting and usable, though whether it would actually get any use in practice I don’t know. It’s one of those tools that seems handy but requires effort, and as a result doesn’t get used.
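
For illustration, a sketch of the kind of content-script listener such a tool needs, recording roughly the fields listed above. This is not the copy-keeper code; the screenshot field is omitted because capturing it requires a round trip to the background page:

```typescript
// Sketch only: record a clip every time the user copies something on a page.
interface Clip {
  text: string;
  html: string;
  surroundingText: string;
  pageUrl: string;
  nearestAnchor: string | null;
  pageTitle: string;
  savedAt: string; // ISO datetime
}

document.addEventListener("copy", () => {
  const selection = document.getSelection();
  if (!selection || selection.isCollapsed) return;
  const range = selection.getRangeAt(0);
  const fragment = document.createElement("div");
  fragment.appendChild(range.cloneContents());
  const block = range.commonAncestorContainer.parentElement;
  const clip: Clip = {
    text: selection.toString(),
    html: fragment.innerHTML,
    surroundingText: block ? block.innerText.slice(0, 500) : "",
    pageUrl: location.href,
    nearestAnchor: block?.closest("[id]")?.id ?? null,
    pageTitle: document.title,
    savedAt: new Date().toISOString(),
  };
  // Hand the clip to the background page for local storage and later search.
  (window as any).browser?.runtime.sendMessage({ type: "clip", clip });
});
```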

Change Scout

mockup of Change Scout

(Wherein I both steal a name from another team, and turn it into a category…)

Change Scout will monitor a page for you, and notify you when it changes. Did someone edit the document? Was there activity on an issue? Did an article get updated? Set Change Scout to track it and it will tell you what changes and when.

It would monitor the page inside the browser, so it would have access to personalized and authenticated content. A key task would be finding ways to present changes in an interesting and compact way. In another experiment I tried some very simple change-detection tools, and mostly ended up frustrated (small changes look very large to naive algorithms).
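
To make the naive end concrete, here is a sketch of scheduled snapshot-and-compare using public WebExtension APIs (alarms, storage, a temporary background tab); watchedUrl is a hypothetical value stored elsewhere, and the fixed wait-for-load is deliberately crude:

```typescript
// Sketch only: reload the watched page in a background tab on a schedule and
// compare its visible text to the last snapshot. Naive comparison over-reports;
// the hard part is summarizing *what* changed.
declare const browser: any;

browser.alarms.create("change-scout", { periodInMinutes: 60 });

browser.alarms.onAlarm.addListener(async (alarm: any) => {
  if (alarm.name !== "change-scout") return;
  const { watchedUrl, lastSnapshot } = await browser.storage.local.get(["watchedUrl", "lastSnapshot"]);
  if (!watchedUrl) return;
  const tab = await browser.tabs.create({ url: watchedUrl, active: false });
  await new Promise(resolve => setTimeout(resolve, 15000)); // crude "wait for load"
  const [text] = await browser.tabs.executeScript(tab.id, { code: "document.body.innerText" });
  await browser.tabs.remove(tab.id);
  if (lastSnapshot !== undefined && text !== lastSnapshot) {
    console.log(`Change Scout: ${watchedUrl} changed since the last check`);
  }
  await browser.storage.local.set({ lastSnapshot: text });
});
```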

Popup Tab Switcher

Tab Switcher mockup

We take the exact UI of the Side View popup, but make it a tab switcher. “Recent Tabs” are the most recently focused tabs (weighted somewhat by how long you were on the tab), and then there’s the complete scrollable list. Clicking on an item simply focuses that tab. You can close tabs without focusing them.

I made a prototype in tab-switchr. In it I also added some controls to close tabs, which was very useful for my periodic tab cleanups. Given that it was a proactive tool, I surprised myself by using it frequently. There’s work in Firefox to improve this, unrelated to anything I’ve done. It reminds me a bit of various Tree-Style Tabs, which I both like because they make it easier to see my tabs, and dislike because I ultimately am settled on normal top-tabs. The popup interface is less radical but still provides many of the benefits.
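
The core of such a switcher is small; a sketch of the recency bookkeeping plus the switch itself, assuming the WebExtension tabs and windows APIs (the idea above also weights by time spent on each tab, which this skips):

```typescript
// Sketch only: most-recently-focused tab list and a "focus this tab" action.
declare const browser: any;

const recentTabIds: number[] = [];

browser.tabs.onActivated.addListener(({ tabId }: { tabId: number }) => {
  const i = recentTabIds.indexOf(tabId);
  if (i !== -1) recentTabIds.splice(i, 1);
  recentTabIds.unshift(tabId); // most recent first
});

async function focusTab(tabId: number): Promise<void> {
  const tab = await browser.tabs.get(tabId);
  await browser.windows.update(tab.windowId, { focused: true });
  await browser.tabs.update(tabId, { active: true });
}
```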

I should probably clean this up a little and publish it.

Personal Podcast

Create your own RSS feed.

  • When you are on a page with some audio source, you can add the audio to your personal feed
  • When on an article, you can generate an audio version that will be added to the feed
  • You get an RSS feed with a random token to make it private (I don’t think podcast apps handle authentication well, but this requires research)
  • Maybe you can just send/text the link to add it to your preferred podcast app
  • If apps don’t accept RSS links very well, maybe something more complicated would be required. An app that just installs an RSS feed? We want to avoid the feed accidentally ending up in podcast directories.

Bookmark Manager

There’s a lot of low-rated bookmark managers in addons.mozilla.org and the Chrome Extension store. Let’s make our own low-rated bookmark manager!

But seriously, this would anticipate updates to the Library and built-in bookmark manager, which are deficient.

Some resources/ideas:

  • Comment with a few gripes
  • Google’s bookmark manager
  • Bookmark section on addons.mozilla.org
  • Bookmark organizers on addons.mozilla.org
  • Relevant WebExtension APIs

Extended Library

mockup of the Extended Library

The “Library” in Firefox is the combination history and bookmark browser you get if you use “Show all bookmarks” or “Show all history”.

In this idea we present the user with a record of their assets, wherever they are.

This is like a history view (and would be built from history), but would use heuristics to pick out certain kinds of things: docs you’ve edited, screenshots you’ve taken, tickets you’ve opened, etc. We’d be trying hard to find long-lived documents in your history, instead of transitional navigation, articles, things you’ve gotten to from public indexes, etc.

Automatically determining what should be tagged as a “library item” would be the hard part. But I think having an organic view of these items, regardless of underlying service, would be quite valuable. The browser has access to all your services, and it’s easy to forget what service hosts the thing you are thinking about.

Text Mobile Screenshot

mockup of Text Mobile Screenshot

This tool will render the tab in a mobile factor (using the devtools responsive design mode), take a full-page screenshot, and text the image and URL to a given number. Probably it would only support texting to yourself.

I’ve looked into this some, and getting the mobile view of a page is not entirely obvious and requires digging around deep in the browser. Devtools does some complicated stuff to display the mobile view. The rest is basic UI flows and operational support.

Email Readable

Emails the Reader Mode version of a site to yourself. In our research, people love to store things in Email, so why not?

Though it lacks the simplicity of this concept, Email Tabs contains this basic functionality. Email This does almost exactly this.

Your History Everywhere

An extension that finds and syncs your history between browsers (particularly between Chrome and Firefox).

This would use the history WebExtension APIs. Maybe we could create a Firefox Sync client in Chrome. Maybe it could be a general way to move things between browsers. Actual synchronization is hard, but creating read-only views into the data in another browser profile is much easier.
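
For illustration, a sketch of reading recent history through the WebExtension history API – roughly the data a read-only cross-browser view would have to move around:

```typescript
// Sketch only: pull the last N days of history via the WebExtension history API.
declare const browser: any;

async function recentHistory(days: number): Promise<{ url: string; title: string; lastVisit: number }[]> {
  const items = await browser.history.search({
    text: "",                                         // empty query matches everything
    startTime: Date.now() - days * 24 * 60 * 60 * 1000,
    maxResults: 1000,
  });
  return items.map((i: any) => ({ url: i.url, title: i.title, lastVisit: i.lastVisitTime }));
}
```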

Obviously there’s lots of work to synchronize this data between Firefox properties, and knowing the work involved this isn’t easy and often involves close work with the underlying platform. Without full access to the platform (like on Chrome) we’ll have to find ways to simplify the problem in order to make it feasible.

My Homepage

Everyone (with an FxA account) gets their own homepage on the web. It’s like Geocities! Or maybe closer to github.io.

But more seriously, it would be programmatically accessible simple static hosting. Not just for you to write your own homepage, but an open way for applications to publish user content, without those applications themselves turning into hosting platforms. We’d absorb all the annoyances of hosting content (abuse, copyright, quotas, ops, financing) and let open source developers focus on enabling interesting content generation experiences for users on the open web.

Here’s a general argument why I think this would be a useful thing for us to do. And another from Les Orchard.

Studying what Electron does for people

This is a proposal for user research:

Electron apps are being shipped for many services, including services that don’t require any special system integration. E.g., Slack doesn’t require anything that a web browser can’t do. Spotify maybe catches some play/pause keys, but is very close to being a web site. Yet there is perceived value in having an app.

The user research would focus on cases where the Electron app doesn’t have any/many special permissions. What gives the app value over the web page?

The goal would be to understand the motivations and constraints of users, so we could consider ways to make the in-browser experience equally pleasant to the Electron app.

App quick switcher

Per my previous item: why do I have an IRCCloud app? Why do people use Slack apps? Maybe it’s just because they want to be able to switch into and out of those apps quickly.

A proposed product solution: add a shortcut to any specific (pinned?) tab. It might be autocreated. Using the shortcut when the app is already selected will switch you back to your previously selected tab. Switching to the tab without the shortcut will display a gentle reminder that the shortcut exists (so you can train yourself to start using it).

To make it a little more fancy, I thought we might also be able to do a second related “preview” shortcut. This would let you peek into the window. I’m not sure what “peeking” means. Maybe we just show a popup with a screenshot of that other window.

Maybe this should all just overload ⌘1/2/3 (maybe shift-⌘1/etc for peeking). Note these shortcuts do not currently have memory – you can switch to the first tab with ⌘1, but you can’t switch back.
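
A sketch of the shortcut-with-memory idea, assuming a hypothetical command named toggle-app-tab declared under "commands" in the manifest and a single pinned "app" tab:

```typescript
// Sketch only: jump to the pinned app tab, or back to wherever you were before.
declare const browser: any;

let lastNonAppTabId: number | null = null;

browser.commands.onCommand.addListener(async (command: string) => {
  if (command !== "toggle-app-tab") return;
  const [current] = await browser.tabs.query({ active: true, currentWindow: true });
  const [appTab] = await browser.tabs.query({ pinned: true, currentWindow: true });
  if (!appTab) return;
  if (current.id === appTab.id && lastNonAppTabId !== null) {
    await browser.tabs.update(lastNonAppTabId, { active: true }); // switch back
  } else if (current.id !== appTab.id) {
    lastNonAppTabId = current.id;
    await browser.tabs.update(appTab.id, { active: true });
  }
});
```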

This is one suggested solution to Whatever Electron does for people.

I started some work in quick-switch-extension, but keyboard shortcuts were a bit wonky, and I couldn’t figure out useful additional functionality that would make it fun. Firefox (Nightly?) now has Ctrl-Tab functionality that takes you to recent tabs, mitigating this problem (though it is not nearly as predictable as what I propose here).

Just Save

Just Save saves a page. It’s like a bookmark. Or a remembering. Or an archive. Or all of those all at once.

Just Save is a one-click operation, though a popup does show up (similar in function to Pocket) that would allow some additional annotation of your saved page.

We save:

  1. Link
  2. Title
  3. Standard metadata
  4. Screenshot
  5. Frozen version of page
  6. Scroll position
  7. The tab history
  8. Remember the other open tabs, so if some of them are saved we offer later relations between them
  9. Time the page was saved
  10. Query terms that led to the page

It’s like bookmarks, but purely focused on saving, while bookmarks do double-duty as a navigational tool. The tool encourages after-the-fact discovery and organization, not at-the-time-of-save choices.

And of course there’s a way to find and manage your saved pages. This idea needs more exploration of why you would return to a page or piece of information, and thus what we’d want to expose and surface from your history. We’ve done research, but it’s really just a start.
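
As a rough illustration, the record such a save might persist, mirroring the numbered list above (field names are hypothetical, not from any shipped code):

```typescript
// Sketch only: the data a one-click "Just Save" could keep for later rediscovery.
interface SavedPage {
  url: string;
  title: string;
  metadata: Record<string, string>;     // standard <meta> tags
  screenshot: Blob;
  frozenHtml: string;                   // frozen/serialized copy of the DOM
  scrollPosition: { x: number; y: number };
  tabHistory: string[];                 // the tab's back/forward entries
  openTabUrls: string[];                // other tabs open at save time
  savedAt: number;                      // epoch milliseconds
  queryTerms?: string;                  // search query that led here, if known
}
```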

Open Search Combined Search

We have several open search providers. How many exist out there? How many could we find in history?

In theory Open Search is an API where a user could do personalized search across many properties, though I’m not sure a sufficient number of sites have enabled it.

Notes Commander

It’s Notes, but with slash commands.

In other words it’s a document, but if you complete a line that begins with a / then it will try to execute that command, appending or overwriting from that point.

So for instance /timestamp just replaces itself with a timestamp.

Maybe /page inserts the currently active tab. /search foo puts search results into the document, but as editable (and followable) links. /page save freezes the page as one big data link, and inserts that link into the note.
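
A toy sketch of the slash-command dispatch this implies; the command set is illustrative, and /page assumes the note runs in an extension page with access to the tabs API:

```typescript
// Sketch only: detect a completed "/command args" line and produce replacement text.
declare const browser: any;

type Command = (args: string) => string | Promise<string>;

const commands: Record<string, Command> = {
  timestamp: () => new Date().toLocaleString(),
  page: async () => {
    const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
    return `${tab.title} - ${tab.url}`;
  },
};

async function runSlashCommand(line: string): Promise<string | null> {
  const match = /^\/(\w+)\s*(.*)$/.exec(line.trim());
  if (!match) return null;                           // not a command line
  const handler = commands[match[1]];
  return handler ? await handler(match[2]) : `unknown command: /${match[1]}`;
}
```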

It’s a little like Slack, but in document form, and with the browser as the context instead of a messaging platform. It’s a little like a notebook programming interface, but less structured and more document-like.

The ability to edit the output of commands is particularly interesting to me, and represents the kind of ad hoc information organizing that we all do regularly.

I experimented some with this in Notes, and got it working a little bit, but working with CKEditor (that Notes is built on) was just awful and I couldn’t get anything to work well. Notes also has a very limited set of supported content (no images or links), which was problematic. Maybe it’s worth doing it from scratch (with ProseMirror or Slate?)

After I tried to mock this up, I realized that the underlying model is much too unclear in my mind. What’s this for? When is it for? What would a list of commands look like?

Another thing I realized while attempting a mockup is that there should be a rich but normalized way to represent pages and URLs and so forth. Often you’ll be referring to URLs of pages that are already open. You may want to open sets of pages, or see immediately which URLs are already open in a tab. A frozen version of a page should be clearly linked to the source of that page, which of course could be an open tab. There’s a lot of pieces to fit together here, both common nouns and verbs, all of which interact with the browser session itself.

Automater

Automation and scripting for your browser: make demonstrations for your browser, give it a name, and you have a repeatable script.

The scripts will happen in the browser itself, not via any backend or scraping tool. In case of failed expectations or changed sites, the script will halt and tell the user.

Scripts could be as simple as “open a new tab pointing to this page every weekday at 9am”, or could involve clipping information, or just doing a navigational pattern before presenting the page to a user.

There’s a huge amount of previous work in this area. I think the challenge here is to create something that doesn’t look like a programming language displayed in a table.

Sidekick

Sidekick is a sidebar interface to anything, or everything, contextually. Some things it might display:

  • Show you the state of your clipboard
  • Show you how you got to the current tab (similar to Navigational Breadcrumbs)
  • Show you other items from the search query that kicked off the current tab
  • Give quick navigation to nearby pages, given the referring page (e.g., the next link, or next set of links)
  • Show you buttons to activate other tabs you are likely to switch to from the current tab
  • Show shopping recommendations or other content-aware widgets
  • Let you save little tidbits (text, links, etc), like an extended clipboard or notepad
  • Show notifications you’ve recently received
  • Peek into other tabs, or load them inline somewhat like Side View
  • Checklists and todos
  • Copy a bunch of links into the sidebar, then treat them like a todo/queue

Possibly it could be treated like an extensible widget holder.

From another perspective: this is like a continuous contextual feature recommender. I.e., it would try to answer the question: what’s the feature you could use right now?

Timed Repetition

Generally, in order to commit something to long-term memory you must revisit the information later, ideally after long enough that recalling it is a struggle.

Is anything we see in a browser worth committing to long-term memory? Sometimes it feels like nothing is worth remembering, but that’s a kind of nihilism based on the shitty aspects of typical web browsing behavior.

The interface would require some positive assertion: I want to know this. Probably you’d want to highlight the thing you’d “know”. Then, later, we’d want to come up with some challenge. We don’t need a “real” test that is verified by the browser, instead we simply need to ask some related question, then the user can say if they got it right or not (or remembered it or not).
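
The scheduling core of such a feature is small; a sketch where a successful recall roughly doubles the review interval and a failed one resets it (the numbers are illustrative, not tuned):

```typescript
// Sketch only: naive spaced-repetition scheduling for a saved highlight.
interface ReviewItem {
  highlight: string;
  intervalDays: number;
  dueAt: number; // epoch milliseconds
}

function scheduleNext(item: ReviewItem, remembered: boolean): ReviewItem {
  const intervalDays = remembered ? Math.max(1, item.intervalDays * 2) : 1;
  return {
    ...item,
    intervalDays,
    dueAt: Date.now() + intervalDays * 24 * 60 * 60 * 1000,
  };
}
```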

Reader Mode improvements

Reader mode is a bit spartan. Maybe it could be a bit nicer:

  • Pick up some styles or backgrounds from the hosting site
  • Display images or other media differently or more prominently
  • Add back some markup or layout that Readability erases
  • Apply to some other kinds of sites that aren’t articles (e.g., a video site)
  • A multicolumn version like McReadability

Digest Mode

Inspired by Full Hacker News (comments): take a bunch of links (typically articles) and concatenate their content into one page.

Implicitly this requires Reader Mode parsing of the pages, though that is relatively cheap for “normal” articles. Acquiring a list of pages is somewhat less clear. Getting a list of pages is a kind of news/RSS question. Taking a page like Hacker News and figuring out what the “real” links are is another approach that may be interesting. Lists of related links are everywhere, yet hard to formally define.

This would work very nicely with complementary text summarization.

Open question: is this actually an interesting or useful way to consume information?

Firefox for X

There’s an underlying concept here worth explaining:

Feature development receives a lot of skepticism. And it’s reasonable: there’s a lot of conceit in a feature, especially one embedded in a large product. Are people going to use a product or not because of some little feature? Or maybe the larger challenge: can some feature actually change behavior? Every person has their own thing going on, people aren’t interested in our theories, and really not that many people are interested in browsers. Familiar functionality – the back button, bookmarks, the URL bar, etc. – is what they expect, what they came for, and what they will gravitate to. Everything I’ve written so far in this list is something people won’t actually use.

A browser is particularly problematic because it’s so universal. It’s for sites and apps and articles. It’s for the young and the elderly, the experienced and not. It’s used for serious things, it’s used for concentration, and it’s used for dumb things and to avoid concentrating. How can you build a feature for everyone, targeting anything they might do? And if you build something, how can a person trust a new feature is really for them, not some other person? People are right to be skeptical of the new!

But we also know that most people regularly use more than one browser. Some people use Chrome for personal stuff, and Firefox for work. Some people do the exact opposite. Some people do their banking and finance in a specific browser. Some use a specific browser just for watching videos.

Which browser a person uses for which task is seemingly random. Maybe they were told to use a specific browser for one task, and then the other browser became the fallback. Maybe they once heard somewhere that one browser was more secure. Maybe Flash seemed broken in one browser when they were watching a video, and now a pattern has been set.

This has long seemed like an opportunity to me. Market a browser that actually claims to be the right browser for some of these purposes! Firefox has Developer Edition and it’s been reasonably successful.

This offers an opportunity for both Mozilla and Firefox users to agree on purpose. What is Firefox for? Everything! Is this feature meant for you? Unlikely! In a purpose-built browser both sides can agree what it’s trying to accomplish.

This idea often gets pooh-poohed for how much work it is, but I think it’s simpler than it seems. Here’s what a “new browser” means:

  • Something you can find and download from its own page or site
  • It’s Firefox, but uses its own profile, keeping history/etc separate from other browser instances (including Firefox)
  • It has its own name and icon, and probably a theme to make it obvious what browser you are in
  • It comes with some browser extensions and prefs changed, making it more appropriate for the proposed use case

The approach is heavy on marketing and build tools, and light on actual browser engineering.

I also have gotten frequent feedback that Multi-Account Containers should solve all these use cases, but that gets everything backwards. People already understand multiple browsers, and having completely new entry points to bring people to Firefox is a feature, not a bug.

Sadly I think the time for this has passed, maybe in the market generally or maybe just for Mozilla. It would have been a very different approach to the browser.

Some of us in the Test Pilot team had some good brainstorming around actual concepts too, which is where I actually get excited about the ideas:

Firefox Study

For students, studying.

  • Integrate note-taking tools
  • Create project and class-based organizational tools, helping to organize tabs, bookmarks, and notes
  • Tools to document and organize deadlines
  • Citation generators

I don’t know what to do with online lectures and video, but it feels like there’s some meaningful improvements to be done in that space. Video-position-aware notetaking tools?

I think the intentionality of opening a browser to study is a good thing. iPads are somewhat popular in education, and I suspect part of that is having a device that isn’t built around multitasking, and using an iPad means stepping away from regular computing.

Firefox Media

To watch videos. This requires very few features, but benefits from just being a separate profile, history, and icon.

There’s a small number of features that might be useful:

  • Cross-service search (like Can I Stream.it or JustWatch)
  • Search defaults to video search
  • Cross-service queue
  • Quick service-based navigation

I realize it’s a lot like Roku in an app.

Firefox for Finance

This is really just about security.

Funny story: people say they value security very highly. But if Mozilla wants to make changes in Firefox that increase security but break some sites – particularly insecure sites – people will then stop using Firefox. They value security highly, but still just below anything at all breaking. This is very frustrating for us.

At the same time, I kind of get it. I’m dorking around on the web and I click through to some dumb site, and I get a big ol’ warning or a blank page or some other weirdness. I didn’t even care about the page or its security, and here my browser is trying to make me care.

That’s true some of the time, but not others. If you are using Firefox for Finance, or Firefox Super Secure, or whatever we might call it, then you really do care.

There’s a second kind of security implied here as well: security from snooping eyes and on shared computers. Firefox Master Password is a useful feature here. Generally there’s an opportunity for secure data at rest.

This is also a vehicle for education in computer security, with an audience that we know is interested.

Firefox Low Bandwidth

Maybe we work with proxy services. Or just do lots of content blocking. In this browser we let content break (and give a control to load the full content), so long as you start out compact.

  • Cache content that isn’t really supposed to be cached
  • Don’t load some kinds of content
  • Block fonts and other seemingly-unimportant content
  • Monitoring tools to see where bandwidth usage is going

Firefox for Kids

Sadly making things for kids is hard, because you are obliged to do all sorts of things if you claim to target children, but you don’t have to do anything if kids just happen to use your tool.

There is an industry of tools in this area that I don’t fully understand, and I’d want to research before thinking about a feature list. But it seems like it comes down to three things:

  • Blocking problematic content
  • Encouraging positive content
  • Monitoring tools for parents

There’s something very uninspiring about that list: it feels long on negativity and short on positive engagement. Coming up with an answer to that is not a simple task.

Firefox Calm

This one was inspired by a bunch of things.

What would a calm Firefox experience look like? Or maybe it would be better to think about a calm presentation of the web. At some point I wrote out some short pitches:

  • Read without distraction: Read articles like they are articles, not interactive (and manipulative) experiences.
  • Stay focused on one thing at a time: Instead of a giant list of tabs and alerts telling you what we aren’t doing, automatically focus on the one thing you are doing right now.
  • Control your notifications: Instead of letting any site poke at you for any reason, notifications are kept to a minimum and batched.
  • Focused writing: When you need to focus on what you are saying, not what people are saying to you, enter focused writing mode.
  • Get updates without falling down a news hole: Avoid clickbait, don’t reload pages, just see updates from the sites you trust (relates to Your Front Page)
  • Pomodoro: let yourself get distracted… but only a little bit. The Pomodoro technique helps you switch between periods of focused work and letting yourself relax
  • Don’t even ask: Do you want notifications from the news site you visited once? Do you want videos to autoplay? Of course not, and we’ll stop even asking.
  • Suggestion-free browsing: Every page you look at isn’t an invitation to tell you what you should look at next. Remove suggested content, and do what YOU want to do next. (YouTube example)

Concluding thoughts

Not just the conclusion of this list, the conclusion of my work in this area…

Some challenges in the design process:

  1. Asking someone to do something new is hard, and unlikely to happen. My previous post (The Over-engaged Knowledge Worker) relates to this tension.
  2. … and yet a “problem” isn’t enough to get someone to do something either.
  3. If someone is consciously and specifically doing some task, then there’s an opportunity.
  4. Creating holistic solutions is unwelcome; counterintuitively, each thing that adds to the size of a solution diminishes the breadth of problems the solution can solve.
  5. … and yet, abstract solutions without any clear suggestion of what they solve aren’t great either!
  6. Figuring out how to package functionality is a big deal.
  7. Approaches that increase the density of information or choices are themselves somewhat burdensome.
  8. … and yet context-sensitive approaches are unpredictable and distracting compared to consistent (if dense) functionality.
  9. I still believe there’s a wealth of material in the content of the pages people encounter. But it’s irregular and hard to understand, and it takes concerted, long-term effort to do something here.
  10. Lots of the easy stuff, the roads well traveled, are still hard for a lot of people. Maybe this can be fixed by optimizing current UI… but I think there’s still room for novel improvements to old ideas.
  11. User research is a really great place to start, but it’s not very prescriptive. It’s mostly problem-finding, not solution-finding.
  12. There’s some kinds of user research I wish I had access to, specifically really low level analysis of behavior. What’s in someone’s mind when they open a new tab, or reuse one? In what order do they scan the UI? What are mental models of a URL, of pages and how they change, in what order to people compose (mentally and physically) things they want to share… it feels like it can go on forever, and there would be a ton of detail in the results, but given all the other constraints these insights feel important.
  13. There are so many variables in an experiment that it’s hard to know what a failure really means. Every experiment that offers a novel experience involves several choices, and any one choice can cause the experiment to fail.

As Test Pilot comes to an end, I do find myself asking: is there room for qualitative improvements in desktop browser UI? Desktop computing is waning. User expectations of a browser are calcified. The only time people make a choice is when something breaks, and the only way to win is to not break anything and hope your competitor does break things.

So, is there room for improvement? Of course there is! The millions of hours spent every day in Firefox alone… this is actually important. Yes, a lot of things are at a local maximum, and we can A/B test little tweaks to get some suboptimal parts to their local maximum. But I do not believe in any way that the browsers we know are the optimal container. The web is bigger than browsers, bigger than desktop or mobile or VR, and a user agent can do unique things beyond any site or app.

And yet…

Daniel Stenberg: alt-svc in curl

Mozilla planet - Sun, 03/03/2019 - 16:45

RFC 7838 was published back in April 2016. It describes the new HTTP header Alt-Svc, or as the title of the document says, HTTP Alternative Services.

HTTP Alternative Services

An alternative service in HTTP lingo is quite simply another server instance that can provide the same service and act as the same origin as the original one. The alternative service can run on another port, on another host name, on another IP address, or over another HTTP version.

An HTTP server can inform a client about the existence of such alternatives by returning this Alt-Svc header. The header, which has an expiry time, tells the client that there’s an optional alternative to this service that is hosted on that host name, that port number using that protocol. If that client is a browser, it can connect to the alternative in the background and if that works out fine, continue to use that host for the rest of the time that alternative is said to work.
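
To make that concrete, here is a rough, simplified illustration of the fields such a header carries. This is a toy parser for the example shown below, not curl's implementation, and it ignores several RFC 7838 details (the special "clear" value, the persist parameter, full quoting rules):

```typescript
// Toy illustration only: pull protocol, alternative host:port and max-age out of
// an Alt-Svc header value. Not curl's parser; many RFC 7838 details are skipped.
interface AltService {
  protocol: string;      // e.g. "h2" or "h3"
  host: string;
  port: number;
  maxAgeSeconds: number; // how long the client may remember this alternative
}

function parseAltSvc(headerValue: string, originHost: string): AltService[] {
  return headerValue.split(",").map(entry => {
    const [svc, ...params] = entry.trim().split(";").map(s => s.trim());
    const [protocol, authority] = svc.split("=");
    const [host, port] = authority.replace(/"/g, "").split(":");
    const ma = params.find(p => p.startsWith("ma="));
    return {
      protocol,
      host: host || originHost,                              // empty host means "same host"
      port: parseInt(port, 10),
      maxAgeSeconds: ma ? parseInt(ma.slice(3), 10) : 86400,  // RFC default: 24 hours
    };
  });
}

// parseAltSvc('h2="backup.example.com:443"; ma=2592000', "example.com")
//   -> [{ protocol: "h2", host: "backup.example.com", port: 443, maxAgeSeconds: 2592000 }]
```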

In reality, this header becomes a little similar to the DNS records SRV or URI: it points out a different route to the server than what the A/AAAA records for it say.

The Alt-Svc header came into life as an attempt to help out with HTTP/2 load balancing, since with the introduction of HTTP/2 clients would suddenly use much more persistent and long-living connections instead of the very short ones used for traditional HTTP/1 web browsing, which changed the nature of how connections are done. This way, a system that is about to go down can hint to clients how to continue using the service elsewhere.

Alt-Svc: h2="backup.example.com:443"; ma=2592000; HTTP upgrades

Once that header was published, the by then already existing and deployed Google QUIC protocol switched to using the Alt-Svc header to hint clients (read “Chrome users”) that “hey, this service is also available over gQUIC“. (Prior to that, they used their own custom alternative header that basically had the same meaning.)

This is important because QUIC is not TCP. Resources on the web that are pointed out using traditional HTTPS:// URLs still imply that you connect to them using TCP on port 443 and you negotiate TLS over that connection. Upgrading from HTTP/1 to HTTP/2 on the same connection was “easy” since they were both still TCP and TLS. All we needed then was to use the ALPN extension and voila: a nice and clean version negotiation.

To upgrade a client and server communication into a post-TCP protocol, the only official way to do it is to first connect using the lowest common denominator that the HTTPS URL implies: TLS over TCP, and only once the server tells the client what more there is to try, the client can go on and try out the new toys.

For HTTP/3, this is the official way for HTTP servers to tell users about the availability of an HTTP/3 upgrade option.

curl

I want curl to support HTTP/3 as soon as possible and, as I’ve mentioned above, understanding Alt-Svc is a key prerequisite to have a working “bootstrap”. curl needs to support Alt-Svc. When we’re implementing support for it, we can just as well support the whole concept and other protocol versions and not just limit it to HTTP/3 purposes.

curl will only consider received Alt-Svc headers when talking HTTPS since only then can it know that it actually speaks with the right host that has the authority enough to point to other places.

Experimental

This is the first feature and code that we merge into curl under a new concept we use for “experimental” code. It is a way for us to mark this code as: we’re not quite sure exactly how everything should work, so we allow users in to test and help us smooth out the quirks, but as a consequence of this we might actually change how it works, both behavior- and API-wise, before we make the support official.

We strongly discourage anyone from shipping code marked experimental in production. You need to explicitly enable this in the build to get the feature. (./configure --enable-alt-svc)

But at the same time we urge and encourage interested users to test it out, try how it works and bring back your feedback, criticism, praise, bug reports and help us make it work the way we’d like it to work so that we can make it land as a “normal” feature as soon as possible.

Ship

The experimental alt-svc code has been merged into curl as of commit 98441f3586 (merged March 3rd 2019) and will be present in the curl code starting in the public release 7.64.1 that is planned to ship on March 27, 2019. I don’t have any time schedule for when to remove the experimental tag but ideally it should happen within just a few release cycles.

alt-svc cache

The curl implementation of alt-svc has an in-memory cache of known alternatives. It can also both save that cache to a text file and load that file back into memory. Saving the alt-svc cache to disk allows it to survive curl invocations and to truly work the way it was intended. The cache file stores the expire timestamp per entry, so it doesn’t matter if you try to use a stale file.

curl --alt-svc

Caveat: I now talk about how a feature works that I’ve just above said might change before it ships. With the curl tool you ask for alt-svc support by pointing out the alt-svc cache file to use. Or pass a “” (empty name) to make it not load or save any file. It makes curl load an existing cache from that file and at the end, also save the cache to that file.

curl has also long featured fancy connection options such as --resolve and --connect-to, which both let a user control where curl connects to and which in many cases work a little like a static poor man’s alt-svc. Learn more about those in my curl another host post.

libcurl options for alt-svc

We start out the alt-svc support for libcurl with two separate options. One sets the file name of the alt-svc cache on disk (CURLOPT_ALTSVC), and the other controls various aspects of how libcurl should behave in regards to alt-svc specifics (CURLOPT_ALTSVC_CTRL).

I’m quite sure that we will have reason to slightly adjust these when the HTTP/3 support comes closer to actually merging.

Cameron Kaiser: Another choice for Intel TenFourFox users

Mozilla planet - Sun, 03/03/2019 - 01:08

Waaaaaaay back when, I parenthetically mentioned in passing an anonymous someone(tm) trying to resurrect the then-stalled Intel port. Since then we now have a periodically updated unofficial and totally unsupported mainline Intel version, but it wasn't actually that someone who was working on it. That someone now has a release, too.

@OlgaTPark's Intel TenFourFox fork is a bit unusual in that it is based on 45.9 (yes, back before the FPR releases began), so it is missing later updates in the FPR series. On the other hand, it does support Tiger (mainline Intel TenFourFox requires at least 10.5), it additionally supports several features not supported by TenFourFox, i.e., by enabling Mozilla features in some of its operating system-specific flavours that are disabled in TenFourFox for reasons of Tiger compatibility, and also includes support for H.264 video with ffmpeg.

H.264 video has been a perennial request which I've repeatedly nixed for reasons of the MPEG LA threatening to remove and purée the genitals of those who would use its patents without a license, and more to the point using ffmpeg in Firefox and TenFourFox probably would have violated the spirit, if not the letter, of the Mozilla Public License. Currently, mainline Firefox implements H.264 using operating system support and the Cisco decoder as an external plugin component. Olga's scheme does much the same thing using a separate component called the FFmpeg Enabler, so it should be possible to implement the glue code in mainline TenFourFox, "allowing" the standalone, separately-distributed enabler to patch in the library and thus sidestepping at least the Mozilla licensing issue. The provided library is a fat dylib with PowerPC and Intel support and the support glue is straightforward enough that I may put experimental support for this mechanism in FPR14.

(Long-time readers will wonder why there is MP3 decoding built into TenFourFox, using minimp3 which itself borrows code from ffmpeg, if I have these objections. There are three simple reasons: MP3 patents have expired, it was easy to do, and I'm a big throbbing hypocrite. One other piece of "OlgaFox" that I'll backport either for FPR13 final or FPR14 is a correctness fix for our MP3 decoder which apparently doesn't trip up PowerPC, but would be good for Intel users.)

Ordinarily I don't like forks using the same name, even if I'm no longer maintaining the code, so that I can avoid receiving spurious support requests or bug reports on code I didn't write. For example, I asked the Oysttyer project to change names from TTYtter after I had ceased maintaining it so that it was clearly recognized they were out on their own, and they graciously did. In this case, though it might be slightly confusing, I haven't requested my usual policy because it is clearly and (better be) widely known that no Intel version of TenFourFox, no matter what version or what features, is supported by me.

On the other hand, if someone used Olga's code as a basis for, say, a 10.5-specific PowerPC fork of TenFourFox enabling features supported in that OS (a la the dearly departed AuroraFox), I would have to insist that the name be changed so we don't get people on Tenderapp with problem reports about it. Fortunately, Olga's release uses the names TenFiveFox and TenSixFox for those operating system-specific versions, and I strongly encourage anyone who wants to do such a Leopard-specific port to follow suit.

Releases can be downloaded from Github, and as always, there is no support and no promises of updates. Do not send support questions about this or any Intel build of TenFourFox to Tenderapp.

Mozilla Addons Blog: March’s featured extensions

Mozilla planet - Fri, 01/03/2019 - 20:23

Pick of the Month: Bitwarden – Free Password Manager

by 8bit Solutions LLC
Store your passwords securely (via encrypted vaults) and sync across devices.

“Works great, looks great, and it works better than it looks.”

Featured: Save Page WE

by DW-dev
Save complete pages or just portions as a single HTML file.

“Good for archiving the web!”

Featured: Terms of Service; Didn’t Read

by Abdullah Diaa, Hugo, Michiel de Jong
A clever tool for cutting through the gibberish of common ToS contracts you encounter around the web.

“Excellent time and privacy saver! Let’s face it, no one reads all the legalese in the ToS of each site used.”

Featured: Feedbro

by Nodetics
An advanced reader for aggregating all of your RSS/Atom/RDF sources.

“The best of its kind. Thank you.”

Featured: Don’t Touch My Tabs!

by Jeroen Swen
Don’t let clicked links take control of your current tab and load content you didn’t ask for.

“Hijacking ads! Deal with it now!”

Featured: DuckDuckGo Privacy Essentials

by DuckDuckGo
Search with enhanced security—tracker blocking, smarter encryption, private search, and other privacy perks.

“Perfect extension for blocking trackers while not breaking webpages.”

If you’d like to nominate an extension for featuring, please send it to amo-featured [at] mozilla [dot] org for the board’s consideration. We welcome you to submit your own add-on!

The post March’s featured extensions appeared first on Mozilla Add-ons Blog.

Will Kahn-Greene: Bleach: stepping down as maintainer

Mozilla planet - Fri, 01/03/2019 - 15:00

What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

I'm stepping down

In October 2015, I had a conversation with James Socol that resulted in me picking up Bleach maintenance from him. That was a little over 3 years ago. In that time, I:

  • did 12 releases
  • improved the tests; switched from nose to pytest, added test coverage for all supported versions of Python and html5lib, added regression tests for xss strings in OWASP Testing Guide 4.0 appendix
  • worked with Greg to add browser testing for cleaned strings
  • improved documentation; added docstrings, added lots of examples, added automated testing of examples, improved copy
  • worked with Jannis to implement a security bug disclosure policy
  • improved performance (Bleach v2.0 released!)
  • switched to semver so the version number was more meaningful
  • did a rewrite to work with the extensive html5lib API changes
  • spent a couple of years dealing with the regressions from the rewrite
  • stepped up as maintainer for html5lib and did a 1.0 release
  • added support for Python 3.6 and 3.7

I accomplished a lot.

A retrospective on OSS project maintenance

I'm really proud of the work I did on Bleach. I took a great project and moved it forward in important and meaningful ways. Bleach is used by a ton of projects in the Python ecosystem. You have likely benefitted from my toil.

While I used Bleach on projects like SUMO and Input years ago, I wasn't really using Bleach on anything while I was a maintainer. I picked up maintenance of the project because I was familiar with it, James really wanted to step down, and Mozilla was using it on a bunch of sites--I picked it up because I felt an obligation to make sure it didn't drop on the floor and I knew I could do it.

I never really liked working on Bleach. The problem domain is a total fucking pain-in-the-ass. Parsing HTML like a browser--oh, but not exactly like a browser because we want the output of parsing to be as much like the input as possible, but as safe. Plus, have you seen XSS attack strings? Holy moly! Ugh!

Anyhow, so I did a bunch of work on a project I don't really use, but felt obligated to make sure it didn't fall on the floor, that has a pain-in-the-ass problem domain. I did that for 3+ years.

Recently, I had a conversation with Osmose that made me rethink that. Why am I spending my time and energy on this?

Does it further my career? I don't think so. Time will tell, I suppose.

Does it get me fame and glory? No.

Am I learning while working on this? I learned a lot about HTML parsing. I have scars. It's so crazy what browsers are doing.

Is it a community through which I'm meeting other people and creating friendships? Sort of. I like working with James, Jannis, and Greg. But I interact and work with them on non-Bleach things, too, so Bleach doesn't help here.

Am I getting paid to work on it? Not really. I did some of the work on work-time, but I should have been using that time to improve my skills and my career. So, yes, I spent some work-time on it, but it's not a project I've been tasked with to work on. For the record, I work on Socorro which is the Mozilla crash-ingestion pipeline. I don't use Bleach on that.

Do I like working on it? No.

Seems like I shouldn't be working on it anymore.

I moved Bleach forward significantly. I did a great job. I don't have any half-finished things to do. It's at a good stopping point. It's a good time to thank everyone and get off the stage.

What happens to Bleach?

I'm stepping down without working on what comes next. I think Greg is going to figure that out.

Thank you!

Jannis was a co-maintainer at the beginning because I didn't want to maintain it alone. Jannis stepped down and Greg joined. Both Jannis and Greg were a tremendous help and fantastic people to work with. Thank you!

Sam Snedders helped me figure out a ton of stuff with how Bleach interacts with html5lib. Sam was kind enough to deputize me as a temporary html5lib maintainer to get 1.0 out the door. I really appreciated Sam putting faith in me. Conversations about the particulars of HTML parsing--I'll miss those. Thank you!

While James wasn't maintaining Bleach anymore, he always took the time to answer questions I had. His historical knowledge, guidance, and thoughtfulness were crucial. James was my manager for a while. I miss him. Thank you!

There were a handful of people who contributed patches, too. Thank you!

Thank your maintainers!

My experience from 20 years of OSS projects is that many people are in similar situations: continuing to maintain something because of internal obligations long after they're getting any value from the project.

Take care of the maintainers of the projects you use! You can't thank them enough for their time, their energy, their diligence, their help! Not just the big successful projects, but also the one-person projects, too.

Shout-out for PyCon 2019 maintainers summit

Sumana mentioned that PyCon 2019 has a maintainers summit. That looks fantastic! If you're in the doldrums of maintaining an OSS project, definitely go if you can.

Changes to this blog post

Update March 2, 2019: I completely forgot to thank Sam Snedders which is a really horrible omission. Sam's the best!

Categorieën: Mozilla-nl planet

Niko Matsakis: Async-await status report

Mozilla planet - vr, 01/03/2019 - 06:00

I wanted to post a quick update on the status of the async-await effort. The short version is that we’re in the home stretch for some kind of stabilization, but there remain some significant questions to overcome.

Announcing the implementation working group

As part of this push, I’m happy to announce we’ve formed an async-await implementation working group. This working group is part of the whole async-await effort, but focused on the implementation, and is part of the compiler team. If you’d like to help get async-await over the finish line, we’ve got a list of issues where we’d definitely like help (read on).

If you are interested in taking part, we have an “office hours” scheduled for Tuesday (see the compiler team calendar) – if you can show up then on Zulip, it’d be ideal! (But if not, just pop in any time.)

Who are we stabilizing for?

I mentioned that there remain significant questions to overcome before stabilization. I think the most fundamental question of all is this one: Who is the audience for this stabilization?

The reason that question is so important is because it determines how to weigh some of the issues that currently exist. If the point of the stabilization is to start promoting async-await as something for widespread use, then there are issues that we probably ought to resolve first – most notably, the await syntax, but also other things.

If, however, the point of stabilization is to let ‘early adopters’ start playing with it more, then we might be more tolerant of problems, so long as there are no backwards compatibility concerns.

My take is that either of these is a perfectly fine answer. But if the answer is that we are trying to unblock early adopters, then we want to be clear in our messaging, so that people don’t get turned off when they encounter some of the bugs below.

OK, with that in place, let’s look in a bit more detail.

Implementation issues

One of the first things that we did in setting up the implementation working group is to do a complete triage of all existing async-await issues. From this, we found that there was one very firm blocker, #54716. This issue has to do the timing of drops in an async fn, specifically the drop order for parameters that are not used in the fn body. We want to be sure this behaves analogously with regular functions. This is a blocker to stabilization because it would change the semantics of stable code for us to fix it later.
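For readers unfamiliar with the issue, here is a plain-function illustration of what “the timing of drops for parameters” refers to. It is a generic sketch with invented names and does not reproduce the async bug itself; in a regular fn, every parameter is dropped when the body finishes, even if the body never touches it, and that is the behavior the async version needs to match.

    struct Logger(&'static str);

    impl Drop for Logger {
        fn drop(&mut self) {
            println!("dropping {}", self.0);
        }
    }

    // The parameter is never used, but it still lives until the body ends.
    fn takes_unused(_unused: Logger) {
        println!("body runs");
    }

    fn main() {
        takes_unused(Logger("parameter"));
        println!("after the call"); // printed after "dropping parameter"
    }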

We also uncovered a number of major ergonomic problems. In a follow-up meeting (available on YouTube), cramertj and I also drew up plans for fixing these bugs, though these plans have not yet been written up as mentoring instructions. These issues all focus around async fns that take borrowed references as arguments – for example, the async fn syntax today doesn’t support more than one lifetime in the arguments, so something like async fn foo(x: &u32, y: &u32) doesn’t work.
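As a concrete sketch of the kind of workaround being alluded to (the function name is invented, and the code is written in today’s syntax for clarity; at the time it required nightly features), one common shape was to replace the async fn with a plain fn returning an async block, with a single explicit lifetime covering both borrows:

    use std::future::Future;

    // Rejected at the time:
    //
    //     async fn add(x: &u32, y: &u32) -> u32 { *x + *y }
    //
    // A workaround of this shape: a regular fn returning an async block,
    // with one named lifetime tying both borrows to the returned future.
    fn add<'a>(x: &'a u32, y: &'a u32) -> impl Future<Output = u32> + 'a {
        async move { *x + *y }
    }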

Whether these ergonomic problems are blockers, however, depends a bit on your perspective: as @cramertj says, a number of folks at Google are using async-await today productively despite these limitations, but you must know the appropriate workarounds and so forth. This is where the question of our audience comes into play. My take is that these issues are blockers for “async fn” being ready for “general use”, but probably not for “early adopters”.

Another big concern for me personally is the maintenance story. Thanks to the hard work of Zoxc and cramertj, we’ve been able to stand up a functional async-await implementation very fast, which is awesome. But we don’t really have a large pool of active contributors working on the async-await implementation who can help to fix issues as we find them, and this seems bad.

The syntax question

Finally, we come to the question of the await syntax. At the All Hands, we had a number of conversations on this topic, and it became clear that we do not presently have consensus for any one syntax. We did a lot of exploration here, however, and enumerated a number of subtle arguments in favor of each option. At this moment, @withoutboats is busily trying to write up that exploration into a document.

Before saying anything else, it’s worth pointing out that we don’t actually have to resolve the await syntax in order to stabilize async-await. We could stabilize the await!(...) macro syntax for the time being, and return to the issue later. This would unblock “early adopters”, but doesn’t seem like a satisfying answer if our target is the “general public”. If we were to do this, we’d be drawing on the precedent of try!, where we first adopted a macro and later moved that support to native syntax.

That said, we do eventually want to pick another syntax, so it’s worth thinking about how we are going to do that. As I wrote, the first step is to complete an overall summary that tries to describe the options on the table and some of the criteria that we can use to choose between them. Once that is available, we will need to settle on next steps.

Resolving hard questions

I am looking at the syntax question as a kind of opportunity – one of the things that we as a community frequently have to do is to find a way to resolve really hard questions without a clear answer. The tools that we have for doing this at the moment are really fairly crude: we use discussion threads and manual summary comments. Sometimes, this works well. Sometimes, amazingly well. But other times, it can be a real drain.

I would like to see us trying to resolve this sort of issue in other ways. I’ll be honest and say that I don’t entirely know what those are, but I know they are not open discussion threads. For example, I’ve found that the #rust2019 blog posts have been an incredibly effective way to have an open conversation about priorities without the usual rancor and back-and-forth. I’ve been very inspired by systems like vTaiwan, which enable a lot of public input, but in a structured and collaborative form, rather than an “antagonistic” one. Similarly, I would like to see us perhaps consider running more experiments to test hypotheses about learnability or other factors (but this is something I would approach with great caution, as I think designing good experiments is very hard).

Anyway, this is really a topic for a post of its own. In this particular case, I hope that we find that enumerating in detail the arguments for each side leads us to a clear conclusion, perhaps some kind of “third way” that we haven’t seen yet. But, thinking ahead, it’d be nice to find ways to have these conversations that take us to that “third way” faster.

Closing notes

As someone who has not been closely following async-await thus far, I’m super excited by all I see. The feature has come a ridiculously long way, and the remaining blockers all seem like things we can overcome. async await is coming: I can’t wait to see what people build with it.

Cross-posted to internals here.

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: Sharing our Common Voices

Mozilla planet - do, 28/02/2019 - 19:26

Mozilla releases the largest to-date public domain transcribed dataset of human voices available for use, including 18 different languages, adding up to almost 1,400 hours of recorded voice data from more than 42,000 contributors.

From the onset, our vision for Common Voice has been to build the world’s most diverse voice dataset, optimized for building voice technologies. We also made a promise of openness: we would make the high quality, transcribed voice data that was collected publicly available to startups, researchers, and anyone interested in voice-enabled technologies.

Today, we’re excited to share our first multi-language dataset with 18 languages represented, including English, French, German and Mandarin Chinese (Traditional), but also for example Welsh and Kabyle. Altogether, the new dataset includes approximately 1,400 hours of voice clips from more than 42,000 people.

With this release, the continuously growing Common Voice dataset is now the largest ever of its kind, with tens of thousands of people contributing their voices and original written sentences to the public domain (CC0). Moving forward, the full dataset will be available for download on the Common Voice site.

Data Qualities

The Common Voice dataset is unique not only in its size and licence model but also in its diversity, representing a global community of voice contributors. Contributors can opt-in to provide metadata like their age, sex, and accent so that their voice clips are tagged with information useful in training speech engines.

This is a different approach than for other publicly available datasets, which are either hand-crafted to be diverse (i.e. equal number of men and women) or the corpus is as diverse as the “found” data (e.g. the TEDLIUM corpus from TED talks is ~3x men to women).

More Common Voices: from 3 to 22 languages in 8 months

Since we enabled multi-language support in June 2018, Common Voice has grown to be more global and more inclusive. This has surpassed our expectations: Over the last eight months, communities have enthusiastically rallied around the project, launching data collection efforts in 22 languages with an incredible 70 more in progress on the Common Voice site.

As a community-driven project, people around the world who care about having a voice dataset in their language have been responsible for each new launch — some are passionate volunteers, some are doing this as part of their day jobs as linguists or technologists. Each of these efforts requires translating the website to allow contributions and adding sentences to be read.

Our latest additions include Dutch, Hakha-Chin, Esperanto, Farsi, Basque, and Spanish. In some cases, a new language launch on Common Voice is the beginning of that language’s internet presence. These community efforts are proof that all languages — not just ones that can generate high revenue for technology companies — are worthy of representation.

We’ll continue working with these communities to ensure their voices are represented and even help make voice technology for themselves. In this spirit, we recently joined forces with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) and co-hosted an ideation hackathon in Kigali to create a speech corpus for Kinyarwanda, laying the foundation for local technologists in Rwanda to develop open source voice technologies in their own language.

Improvements in the contribution experience, including optional profiles

The Common Voice Website is one of our main vehicles for building voice data sets that are useful for voice-interaction technology. The way it looks today is the result of an ongoing process of iteration. We listened to community feedback about the pain points of contributing while also conducting usability research to make contribution easier, more engaging, and fun.

People who contribute not only see progress per language in recording and validation, but also have improved prompts that vary from clip to clip; new functionality to review, re-record, and skip clips as an integrated part of the experience; the ability to move quickly between speak and listen; as well as a function to opt-out of speaking for a session.

We also added the option to create a saved profile, which allows contributors to keep track of their progress and metrics across multiple languages. Providing some optional demographic profile information also improves the audio data used in training speech recognition accuracy.

Common Voice started as a proof of concept prototype and has been collaboratively iterated over the past year

Empower decentralized product innovation: a marathon rather than a sprint

Mozilla aims to contribute to a more diverse and innovative voice technology ecosystem. Our goal is to both release voice-enabled products ourselves, while also supporting researchers and smaller players. Providing data through Common Voice is one part of this, as are the open source Speech-to-Text and Text-to-Speech engines and trained models through project DeepSpeech, driven by our Machine Learning Group.

We know this will take time, and we believe releasing early and working in the open can attract the involvement and feedback of technologists, organisations, and companies that will make these projects more relevant and robust. The current reality for both projects is that they are still in their research phase, with DeepSpeech making strong progress toward productization.

To date, with data from Common Voice and other sources, DeepSpeech is technically capable of converting speech to text with human accuracy and “live”, i.e. in realtime as the audio is being streamed. This allows transcription of lectures, phone conversations, television programs, radio shows, and other live streams as they are happening.

The DeepSpeech engine is already being used by a variety of non-Mozilla projects: For example in Mycroft, an open source voice based assistant; in Leon, an open-source personal assistant; in FusionPBX, a telephone switching system installed at and serving a private organization to transcribe phone messages. In the future Deep Speech will target smaller platform devices, such as smartphones and in-car systems, unlocking product innovation in and outside of Mozilla.

For Common Voice, our focus in 2018 was to build out the concept, make it a tool for any language community to use, optimise the website, and build a robust backend (for example, the accounts system). Over the coming months we will focus efforts on experimenting with different approaches to increase the quantity and quality of data we are able to collect, both through community efforts as well as new partnerships.

Our overall aim remains: Providing more and better data to everyone in the world who seeks to build and use voice technology. Because competition and openness are healthy for innovation. Because smaller languages are an issue of access and equity. Because privacy and control matters, especially over your voice.

Sharing our Common Voices was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Implications of Rewriting a Browser Component in Rust

Mozilla planet - do, 28/02/2019 - 15:10

The previous posts in this Fearless Security series examine memory safety and thread safety in Rust. This closing post uses the Quantum CSS project as a case study to explore the real world impact of rewriting code in Rust.

The style component is the part of a browser that applies CSS rules to a page. This is a top-down process on the DOM tree: given the parent style, the styles of children can be calculated independently—a perfect use-case for parallel computation. By 2017, Mozilla had made two previous attempts to parallelize the style system using C++. Both had failed.

Quantum CSS resulted from a need to improve page performance. Improving security is a happy byproduct.

Rewrites code to make it faster; also makes it more secure

There’s a large overlap between memory safety violations and security-related bugs, so we expected this rewrite to reduce the attack surface in Firefox. In this post, I will summarize the potential security vulnerabilities that have appeared in the styling code since Firefox’s initial release in 2002. Then I’ll look at what could and could not have been prevented by using Rust.

Over the course of its lifetime, there have been 69 security bugs in Firefox’s style component. If we’d had a time machine and could have written this component in Rust from the start, 51 (73.9%) of these bugs would not have been possible. While Rust makes it easier to write better code, it’s not foolproof.

Rust

Rust is a modern systems programming language that is type- and memory-safe. As a side effect of these safety guarantees, Rust programs are also known to be thread-safe at compile time. Thus, Rust can be a particularly good choice when:

✅ processing untrusted input safely.
✅ introducing parallelism to improve performance.
✅ integrating isolated components into an existing codebase.

However, there are classes of bugs that Rust explicitly does not address—particularly correctness bugs. In fact, during the Quantum CSS rewrite, engineers accidentally reintroduced a critical security bug that had previously been patched in the C++ code, regressing the fix for bug 641731. This allowed global history leakage via SVG image documents, resulting in bug 1420001. As a trivial history-stealing bug, this is rated security-high. The original fix was an additional check to see if the SVG document was being used as an image. Unfortunately, this check was overlooked during the rewrite.

While there were automated tests intended to catch :visited rule violations like this, in practice, they didn’t detect this bug. To speed up our automated tests, we temporarily turned off the mechanism that tested this feature—tests aren’t particularly useful if they aren’t run. The risk of re-implementing logic errors can be mitigated by good test coverage (and actually running the tests). There’s still a danger of introducing new logic errors.

As developer familiarity with the Rust language increases, best practices will improve. Code written in Rust will become even more secure. While it may not prevent all possible vulnerabilities, Rust eliminates an entire class of the most severe bugs.

Quantum CSS Security Bugs

Overall, bugs related to memory, bounds, null/uninitialized variables, or integer overflow would be prevented by default in Rust. The miscellaneous bug I referenced above would not have been prevented—it was a crash due to a failed allocation.

Security bugs by category

All of the bugs in this analysis are related to security, but only 43 received official security classifications. (These are assigned by Mozilla’s security engineers based on educated “exploitability” guesses.) Normal bugs might indicate missing features or problems like crashes. While undesirable, crashes don’t result in data leakage or behavior modification. Official security bugs can range from low severity (highly limited in scope) to critical vulnerability (might allow an attacker to run arbitrary code on the user’s platform).

There’s a significant overlap between memory vulnerabilities and severe security problems. Of the 34 critical/high bugs, 32 were memory-related.

Security rated bug breakdown

Comparing Rust and C++ code

Bug 955914 is a heap buffer overflow in the GetCustomPropertyNameAt function. The code used the wrong variable for indexing, which resulted in interpreting memory past the end of the array. This could either crash while accessing a bad pointer or copy memory to a string that is passed to another component.

The ordering of all CSS properties (both longhand and custom) is stored in an array, mOrder. Each element is either represented by its CSS property value or, in the case of custom properties, by a value that starts at eCSSProperty_COUNT (the total number of non-custom CSS properties). To retrieve the name of a custom property, first, you have to retrieve the custom property value from mOrder, then access the name at the corresponding index of the mVariableOrder array, which stores the custom property names in order.

Vulnerable C++ code:

    void GetCustomPropertyNameAt(uint32_t aIndex, nsAString& aResult) const {
      MOZ_ASSERT(mOrder[aIndex] >= eCSSProperty_COUNT);

      aResult.Truncate();
      aResult.AppendLiteral("var-");
      aResult.Append(mVariableOrder[aIndex]);
    }

The problem occurs at line 6 when using aIndex to access an element of the mVariableOrder array. aIndex is intended for use with the mOrder array not the mVariableOrder array. The corresponding element for the custom property represented by aIndex in mOrder is actually mOrder[aIndex] - eCSSProperty_COUNT.

Fixed C++ code:

    void GetCustomPropertyNameAt(uint32_t aIndex, nsAString& aResult) const {
      MOZ_ASSERT(mOrder[aIndex] >= eCSSProperty_COUNT);

      uint32_t variableIndex = mOrder[aIndex] - eCSSProperty_COUNT;
      aResult.Truncate();
      aResult.AppendLiteral("var-");
      aResult.Append(mVariableOrder[variableIndex]);
    }

Equivalent Rust code

While Rust is similar to C++ in some ways, idiomatic Rust uses different abstractions and data structures. Rust code will look very different from C++ (see below for details). First, let’s consider what would happen if we translated the vulnerable code as literally as possible:

    fn GetCustomPropertyNameAt(&self, aIndex: usize) -> String {
        assert!(self.mOrder[aIndex] >= self.eCSSProperty_COUNT);
        let mut result = "var-".to_string();
        result += &self.mVariableOrder[aIndex];
        result
    }

The Rust compiler would accept the code, since there is no way to determine the length of vectors before runtime. Unlike arrays, whose length must be known, the Vec type in Rust is dynamically sized. However, the standard library vector implementation has built-in bounds checking. When an invalid index is used, the program immediately terminates in a controlled fashion, preventing any illegal access.
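As a standalone illustration of that bounds-checking behavior (plain example code, not taken from Quantum CSS), indexing a Vec with a bad index aborts with a controlled panic, while the non-panicking get accessor turns the same failure into a value that must be handled:

    fn main() {
        let order = vec![10, 20, 30];

        // Indexing out of bounds stops the program with a controlled
        // "index out of bounds" panic instead of reading past the allocation:
        // let oops = order[7];

        // The non-panicking alternative returns an Option instead:
        match order.get(7) {
            Some(value) => println!("got {}", value),
            None => println!("index 7 is out of bounds"),
        }
    }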

The actual code in Quantum CSS uses very different data structures, so there’s no exact equivalent. For example, we use Rust’s powerful built-in data structures to unify the ordering and property name data. This allows us to avoid having to maintain two independent arrays. Rust data structures also improve data encapsulation and reduce the likelihood of these kinds of logic errors. Because the code needs to interact with C++ code in other parts of the browser engine, the new GetCustomPropertyNameAt function doesn’t look like idiomatic Rust code. It still offers all of the safety guarantees while providing a more understandable abstraction of the underlying data.
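To make “unify the ordering and property name data” a bit more concrete, here is a hedged sketch with invented types (not the actual Quantum CSS data structures): keeping one ordered list of an enum means there is no second array to fall out of sync with, so the index mix-up from bug 955914 has no equivalent to write.

    // Illustrative types only.
    enum OrderEntry {
        Standard(u32),     // id of a built-in CSS property
        Custom(String),    // name of a custom property
    }

    fn custom_property_name_at(order: &[OrderEntry], index: usize) -> Option<String> {
        match order.get(index) {
            Some(OrderEntry::Custom(name)) => Some(format!("var-{}", name)),
            _ => None,
        }
    }

    fn main() {
        let order = vec![OrderEntry::Standard(3), OrderEntry::Custom("accent".to_string())];
        assert_eq!(custom_property_name_at(&order, 1), Some("var-accent".to_string()));
        assert_eq!(custom_property_name_at(&order, 0), None);
    }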

tl;dr;

Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures). However, even Rust is not foolproof. Developers still need to be aware of correctness bugs and data leakage attacks. Code review, testing, and fuzzing still remain essential for maintaining secure libraries.

Compilers can’t catch every mistake that programmers can make. However, Rust has been designed to remove the burden of memory safety from our shoulders, allowing us to focus on logical correctness and soundness instead.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Sharing our Common Voices – Mozilla releases the largest to-date public domain transcribed voice dataset

Mozilla planet - do, 28/02/2019 - 12:17

Mozilla crowdsources the largest dataset of human voices available for use, including 18 different languages, adding up to almost 1,400 hours of recorded voice data from more than 42,000 contributors.

From the onset, our vision for Common Voice has been to build the world’s most diverse voice dataset, optimized for building voice technologies. We also made a promise of openness: we would make the high quality, transcribed voice data that was collected publicly available to startups, researchers, and anyone interested in voice-enabled technologies.

Today, we’re excited to share our first multi-language dataset with 18 languages represented, including English, French, German and Mandarin Chinese (Traditional), but also for example Welsh and Kabyle. Altogether, the new dataset includes approximately 1,400 hours of voice clips from more than 42,000 people.

With this release, the continuously growing Common Voice dataset is now the largest ever of its kind, with tens of thousands of people contributing their voices and original written sentences to the public domain (CC0). Moving forward, the full dataset will be available for download on the Common Voice site.

 

Data Qualities

The Common Voice dataset is unique not only in its size and licence model but also in its diversity, representing a global community of voice contributors. Contributors can opt-in to provide metadata like their age, sex, and accent so that their voice clips are tagged with information useful in training speech engines.

This is a different approach than for other publicly available datasets, which are either hand-crafted to be diverse (i.e. equal number of men and women) or the corpus is as diverse as the “found” data (e.g. the TEDLIUM corpus from TED talks is ~3x men to women).

More Common Voices: from 3 to 22 languages in 8 months

Since we enabled multi-language support in June 2018, Common Voice has grown to be more global and more inclusive. This has surpassed our expectations: Over the last eight months, communities have enthusiastically rallied around the project, launching data collection efforts in 22 languages with an incredible 70 more in progress on the Common Voice site.

As a community-driven project, people around the world who care about having a voice dataset in their language have been responsible for each new launch — some are passionate volunteers, some are doing this as part of their day jobs as linguists or technologists. Each of these efforts requires translating the website to allow contributions and adding sentences to be read.

Our latest additions include Dutch, Hakha-Chin, Esperanto, Farsi, Basque, and Spanish. In some cases, a new language launch on Common Voice is the beginning of that language’s internet presence. These community efforts are proof that all languages—not just ones that can generate high revenue for technology companies—are worthy of representation.

We’ll continue working with these communities to ensure their voices are represented and even help make voice technology for themselves. In this spirit, we recently joined forces with the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) and co-hosted an ideation hackathon in Kigali to create a speech corpus for Kinyarwanda, laying the foundation for local technologists in Rwanda to develop open source voice technologies in their own language.

Improvements in the contribution experience, including optional profiles

The Common Voice Website is one of our main vehicles for building voice data sets that are useful for voice-interaction technology. The way it looks today is the result of an ongoing process of iteration. We listened to community feedback about the pain points of contributing while also conducting usability research to make contribution easier, more engaging, and fun.

People who contribute not only see progress per language in recording and validation, but also have improved prompts that vary from clip to clip; new functionality to review, re-record, and skip clips as an integrated part of the experience; the ability to move quickly between speak and listen; as well as a function to opt-out of speaking for a session.

We also added the option to create a saved profile, which allows contributors to keep track of their progress and metrics across multiple languages. Providing some optional demographic profile information also improves the audio data used in training speech recognition accuracy.

 

Common Voice started as a proof of concept prototype and has been collaboratively iterated over the past year

Empower decentralized product innovation: a marathon rather than a sprint

Mozilla aims to contribute to a more diverse and innovative voice technology ecosystem. Our goal is to both release voice-enabled products ourselves, while also supporting researchers and smaller players. Providing data through Common Voice is one part of this, as are the open source Speech-to-Text and Text-to-Speech engines and trained models through project DeepSpeech, driven by our Machine Learning Group.

We know this will take time, and we believe releasing early and working in the open can attract the involvement and feedback of technologists, organisations, and companies that will make these projects more relevant and robust. The current reality for both projects is that they are still in their research phase, with DeepSpeech making strong progress toward productization.

To date, with data from Common Voice and other sources, DeepSpeech is technically capable of converting speech to text with human accuracy and “live”, i.e. in realtime as the audio is being streamed. This allows transcription of lectures, phone conversations, television programs, radio shows, and other live streams as they are happening.

The DeepSpeech engine is already being used by a variety of non-Mozilla projects: For example in Mycroft, an open source voice based assistant; in Leon, an open-source personal assistant; in FusionPBX, a telephone switching system installed at and serving a private organization to transcribe phone messages. In the future Deep Speech will target smaller platform devices, such as smartphones and in-car systems, unlocking product innovation in and outside of Mozilla.

For Common Voice, our focus in 2018 was to build out the concept, make it a tool for any language community to use, optimise the website, and build a robust backend (for example, the accounts system). Over the coming months we will focus efforts on experimenting with different approaches to increase the quantity and quality of data we are able to collect, both through community efforts as well as new partnerships.

Our overall aim remains: Providing more and better data to everyone in the world who seeks to build and use voice technology. Because competition and openness are healthy for innovation. Because smaller languages are an issue of access and equity. Because privacy and control matters, especially over your voice.

The post Sharing our Common Voices – Mozilla releases the largest to-date public domain transcribed voice dataset appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Mozilla GFX: WebRender newsletter #41

Mozilla planet - do, 28/02/2019 - 10:48

Welcome to episode 41 of WebRender’s newsletter.

WebRender is a GPU-based 2D rendering engine for the web, written in Rust, currently powering Mozilla’s research web browser Servo and on its way to becoming Firefox’s rendering engine.

Today’s highlights are two big performance improvements by Kvark and Sotaro. I’ll let you follow the links below if you are interested in the technical details.
I think that Sotaro’s fix illustrates well the importance of progressively rolling out this type of project one hardware/OS configuration at a time, giving us the time and opportunity to observe and address each configuration’s strengths and quirks.

Notable WebRender and Gecko changes
  • Kvark rewrote the mixed blend mode rendering code, yielding great performance improvements on some sites.
  • Kats fixed another clipping problem affecting blurs.
  • Kats fixed scaling of blurs.
  • Glenn fixed a clip mask regression.
  • Glenn added some picture cache testing infrastructure.
  • Nical landed a series of small CPU optimizations.
  • Nical reduced the cost of hashing and copying font instances.
  • Nical changed how the tiling origin of blob images is computed.
  • Sotaro greatly improved the performance of picture caching on Windows with Intel GPUs.
  • Sotaro improved the performance of canvas rendering.
  • Sotaro fixed empty windows with GDK_BACKEND=wayland.
  • Sotaro fixed empty popups with GDK_BACKEND=wayland.
  • Jamie improved the performance of texture uploads on Adreno GPUs.
Enabling WebRender in Firefox Nightly

In about:config, enable the pref gfx.webrender.all and restart the browser.

Reporting bugs

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in bugzilla.

Note that it is possible to log in with a github account.

Using WebRender in a Rust project

WebRender is available as a standalone crate on crates.io (documentation)

Categorieën: Mozilla-nl planet

Emily Dunham: When searching an error fails

Mozilla planet - do, 28/02/2019 - 09:00
When searching an error fails

This blog has seen a dearth of posts lately, in part because my standard post formula is “a public thing had a poorly documented problem whose solution seems worth exposing to search engines”. In my present role, the tools I troubleshoot are more often private or so local that the best place to put such docs has been an internal wiki or their own READMEs.

This change of ecosystem has caused me to spend more time addressing a different kind of error: Those which one really can’t just Google.

Sometimes, especially if it’s something that worked fine on another system and is mysteriously not working any more, the problem can be forehead-slappingly obvious in retrospect. Here are some general steps to catch an “oops, that was obvious” fix as soon as possible.

Find the command that yielded the error

First, I identify what tool I’m trying to use. Ops tools are often an amalgam of several disparate tools glued together by a script or automation. This alias invokes that call to SSH, this internal tool wraps that API with an appropriate set of arguments by ascertaining them from its context. If I think that SSH, or the API, is having a problem, the first troubleshooting step is to figure out exactly what my toolchain fed into it. Then I can run that from my own terminal, and either observe a more actionable error or have something that can be compared against some reliable documentation.

Wrappers often elide some or all of the actual error messages that they receive. I ran into this quite recently when a multi-part shell command run by a script was silently failing, but running the ssh portion of that command in isolation yielded a helpful and familiar error that prompted me to add the appropriate key to my ssh-agent, which in turn allowed the entire script to run properly.

Make sure the version “should” work

Identifying the tool also lets me figure out where that tool’s source lives. Finding the source is essential for the next troubleshooting steps that I take:

    $ which toolname
    $ toolname -version #

I look for hints about whether the version of the tool that I’m using is supposed to be able to do the thing I’m asking it to do. Sometimes my version of the tool might be too new. This can be the case when the dates on all the docs that suggest it’s supposed to work the way it’s failing are more than a year or so old. If I suspect I might be on too new a version, I can find a list of releases near the tool’s source and try one from around the date of the docs.

More often, my version of a custom tool has fallen behind. If the date of the docs claiming the tool should work is recent, and the date of my local version is old, updating is an obvious next step.

If the tool was installed in a way other than my system package manager, I also check its README for hints about the versions of any dependencies it might expect, and make sure that it has those available on the system I’m running it from.

Look for interference from settings

Once I have something that seems like the right version of the tool, I check the way its README or other docs looked as of the installed version, and note any config files that might be informing its behavior. Some tooling cares about settings in an individual file; some cares about certain environment variables; some cares about a dotfile nearby on the file system; some cares about configs stored somewhere in the homedir of the user invoking it. Many heed several of the above, usually prioritizing the nearest (env vars and local settings) over the more distant (system-wide settings).

Check permissions

Issues where the user running a script has inappropriate permissions are usually obvious on the local filesystem, but verifying that you’re trying to do a task as a user allowed to do it is more complicated in the cloud. Especially when trying to do something that’s never worked before, it can be helpful to attempt to do the same task as your script manually through the cloud service’s web interface. If it lets you, you narrow down the possible sources of the problem; if it fails, it often does so with a far more human-friendly message than when you get the same failure through an API.

Trace the error through the source

I know where the error came from, I have the right versions of the tool and its dependencies, no settings are interfering with the tool’s operation, and permissions are set such that the tool should be able to succeed. When all this normal, generic troubleshooting has failed, it’s time to trace the error through the tool’s source.

This is straightforward when I’m fortunate enough to have a copy of that source: I pick some string from the error message that looks like it’ll always be the same for that particular error, and search it in the source. If there are dozens of hits, either the tool is aflame with technical debt or I picked a bad search string.

Locating what ran right before things broke leads to the part of the source that encodes the particular assumptions that the program makes about its environment, which can sometimes point out that I failed to meet one. Sometimes, I find that the error looked unfamiliar because it was actually escalated from some other program wrapped by the tool that showed it to me, in which case I restart this troubleshooting process from the beginning on that tool.

Sometimes, when none of the aforementioned problems is to blame, I discover that the problem arose from a mismatch between documentation and the program’s functionality. In these cases, it’s often the docs that were “right”, and the proper solution is to point out the issue to the tool’s developers and possibly offer a patch. When the code’s behavior differs from the docs’ claims, a patch to one or the other is always necessary.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: Announcing Rust 1.33.0

Mozilla planet - do, 28/02/2019 - 01:00

The Rust team is happy to announce a new version of Rust, 1.33.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.33.0 is as easy as:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.33.0 on GitHub.

What's in 1.33.0 stable

The two largest features in this release are significant improvements to const fns, and the stabilization of a new concept: "pinning."

const fn improvements

With const fn, you can now do way more things! Specifically:

  • irrefutable destructuring patterns (e.g. const fn foo((x, y): (u8, u8)) { ... })
  • let bindings (e.g. let x = 1;)
  • mutable let bindings (e.g. let mut x = 1;)
  • assignment (e.g. x = y) and assignment operator (e.g. x += y) expressions, even where the assignment target is a projection (e.g. a struct field or index operation like x[3] = 42)
  • expression statements (e.g. 3;)

You're also able to call const unsafe fns inside a const fn, like this:

    const unsafe fn foo() -> i32 { 5 }

    const fn bar() -> i32 {
        unsafe { foo() }
    }
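Putting several of the newly allowed forms together, here is a small sketch (the function is invented for illustration) that uses an irrefutable destructuring pattern, a mutable let binding, and an assignment operator, all evaluated at compile time:

    const fn scaled_sum((x, y): (i32, i32), factor: i32) -> i32 {
        let mut sum = x + y;   // mutable let binding
        sum *= factor;         // assignment operator expression
        sum
    }

    const TOTAL: i32 = scaled_sum((2, 3), 4); // computed at compile time

    fn main() {
        println!("{}", TOTAL); // prints 20
    }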

With these additions, many more functions in the standard library are able to be marked as const. We'll enumerate those in the library section below.

Pinning

This release introduces a new concept for Rust programs, implemented as two types: the std::pin::Pin<P> type, and the Unpin marker trait. The core idea is elaborated on in the docs for std::pin:

It is sometimes useful to have objects that are guaranteed to not move, in the sense that their placement in memory does not change, and can thus be relied upon. A prime example of such a scenario would be building self-referential structs, since moving an object with pointers to itself will invalidate them, which could cause undefined behavior.

A Pin<P> ensures that the pointee of any pointer type P has a stable location in memory, meaning it cannot be moved elsewhere and its memory cannot be deallocated until it gets dropped. We say that the pointee is "pinned".

This feature will largely be used by library authors, and so we won't talk a lot more about the details here. Consult the docs if you're interested in digging into the details. However, the stabilization of this API is important to Rust users generally because it is a significant step forward towards a highly anticipated Rust feature: async/await. We're not quite there yet, but this stabilization brings us one step closer. You can track all of the necessary features at areweasyncyet.rs.
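As a minimal sketch of the API surface (assuming Box::pin, one of the constructors stabilized alongside Pin), pinning a heap allocation promises that the value will not be moved for the rest of its life, while read access through the pin still works like an ordinary reference:

    use std::pin::Pin;

    fn main() {
        // Heap-allocate a String and pin it: it will not be moved again.
        let pinned: Pin<Box<String>> = Box::pin(String::from("pinned data"));

        // Pin<Box<T>> dereferences to T, so reading through it is unchanged.
        println!("{} ({} bytes)", &*pinned, pinned.len());
    }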

Import as _

You can now import an item as _. This allows you to import a trait's impls, and not have the name in the namespace. e.g.

    use std::io::Read as _;

    // Allowed as there is only one `Read` in the module.
    pub trait Read {}

See the detailed release notes for more details.

Library stabilizations

Here's all of the stuff that's been made const:

Additionally, these APIs have become stable:

See the detailed release notes for more details.

Cargo features

Cargo should now rebuild a crate if a file was modified during the initial build.

See the detailed release notes for more.

Crates.io

As previously announced, coinciding with this release, crates.io will require that you have a verified email address to publish. Starting at 2019-03-01 00:00 UTC, if you don't have a verified email address and run cargo publish, you'll get an error.

This ensures we can comply with DMCA procedures. If you haven't heeded the warnings cargo printed during the last release cycle, head on over to crates.io/me to set and verify your email address. This email address will never be displayed publicly and will only be used for crates.io operations.

Contributors to 1.33.0

Many people came together to create Rust 1.33.0. We couldn't have done it without all of you. Thanks!

Categorieën: Mozilla-nl planet

The Firefox Frontier: When an internet emergency strikes

Mozilla planet - wo, 27/02/2019 - 19:33

Research shows that we spend more time on phones and computers than with friends. This means we’re putting out more and more information for hackers to grab. It’s better to … Read more

The post When an internet emergency strikes appeared first on The Firefox Frontier.

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Design and create themes for Firefox

Mozilla planet - wo, 27/02/2019 - 18:00

Last September, we announced the next major evolution in themes for Firefox. With the adoption of static themes, you can now go beyond customizing the header of the browser and easily modify the appearance of the browser’s tabs and toolbar, and choose to distribute your theme publicly or keep it private for your own personal use. If you would like to learn about how to take advantage of these new features or are looking for an updated tutorial on how to create themes, you have come to the right place!

Designing themes doesn’t have to be complicated. The theme generator on AMO allows users to create a theme within minutes. You may enter hex, rgb, or rgba values or use the color selector to pick your preferred colors for the header, toolbar, and text. You will also need to provide an image which will be aligned to the top-right. It may appear to be simple, and that’s because it is!

If you want to test what your theme will look like before you submit it to AMO, the extension Firefox Color will enable you to preview changes in real-time, add multiple images, make finer adjustments, and more. You will also be able to export the theme you create on Firefox Color.

If you want to create a more detailed theme, you can use the static theme approach to create a theme XPI and make further modifications to the new tab background, sidebar, icons, and more. Visit the theme syntax and properties page for further details.

When your theme is generated, visit the Developer Hub to upload it for signing. The process of uploading a theme is similar to submitting an extension. If you are using the theme generator, you will not be required to upload a packaged file. In any case, you will need to decide whether you would like to share your design with the world on addons.mozilla.org, self-distribute it, or keep it for yourself. To keep a theme for yourself or to self-distribute, be sure to select “On your own” when uploading your theme.

Whether you are creating and distributing themes for the public or simply creating themes for private enjoyment, we all benefit by having an enhanced browsing experience. With the theme generator on AMO and Firefox Color, you can easily create multiple themes and switch between them.

The post Design and create themes for Firefox appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet

Frédéric Wang: Review of Igalia's Web Platform activities (H2 2018)

Mozilla planet - wo, 27/02/2019 - 00:00

This blog post reviews Igalia’s activity around the Web Platform, focusing on the second semester of 2018.

Projects

MathML

During 2018 we have continued discussions to implement MathML in Chromium with Google and people interested in math layout. The project was finally launched early this year and we have encouraging progress. Stay tuned for more details!

Javascript

As mentioned in the previous report, Igalia has proposed and developed the specification for BigInt, enabling math on arbitrary-sized integers in JavaScript. We’ve continued to land patches for BigInt support in SpiderMonkey and JSC. For the latter, you can watch this video demonstrating the current support. Currently, both implementations are behind a preference flag, but we hope to have them enabled by default once we are done polishing them. We also added support for BigInt to several Node.js APIs (e.g. fs.Stat or process.hrtime.bigint).

Regarding “object-oriented” features, we submitted patches for private and public instance fields support to JSC, and they are pending review. At the same time, we are working on private methods for V8.

We contributed other nice features to V8 such as a spec change for template strings and iterator protocol, support for Object.fromEntries, Symbol.prototype.description, miscellaneous optimizations.

At TC39, we maintained or developed many proposals (BigInt, class fields, private methods, decorators, …) and led the ECMAScript Internationalization effort. Additionally, at the WebAssembly Working Group we edited the WebAssembly JS and Web API specifications and an early version of the WebAssembly/ES Module integration specification.

Last but not least, we contributed various conformance tests to test262 and Web Platform Tests to ensure interoperability between the various features mentioned above (BigInt, Class fields, Private methods…). In Node.js, we worked on the new Web Platform Tests driver with update automation and continued porting and fixing more Web Platform Tests in Node.js core.

Outside of core, we implemented the initial JavaScript API for llnode, a Node.js/V8 plugin for the LLDB debugger.

Accessibility

Igalia has continued its involvement at the W3C. We have achieved the following:

We are also collaborating with Google to implement ATK support in Chromium. This work will make it possible for users of the Orca screen reader to use Chrome/Chromium as their browser. During H2 we began implementing the foundational accessibility support. During H1 2019 we will continue this work. It is our hope that sufficient progress will be made during H2 2019 for users to begin using Chrome with Orca.

Web Platform Predictability

On Web Platform Predictability, we’ve continued our collaboration with AMP to do bug fixes and implement new features in WebKit. You can read a review of the work done in 2018 on the AMP blog post.

We have worked on a lot of interoperability issues related to editing and selection thanks to financial support from Bloomberg. For example when deleting the last cell of a table some browsers keep an empty table while others delete the whole table. The latter can be problematic, for example if users press backspace continuously to delete a long line, they can accidentally end up deleting the whole table. This was fixed in Chromium and WebKit.

Another issue is that style is lost when transforming some text into list items. When running execCommand() with insertOrderedList/insertUnorderedList on some styled paragraph, the new list item loses the original text’s style. This behavior is not interoperable and we have proposed a fix so that Firefox, Edge, Safari and Chrome behave the same for this operation. We landed a patch for Chromium. After discussion with Apple, it was decided not to implement this change in Safari as it would break some iOS rich text editor apps, mismatching the required platform behavior.

We have also been working on CSS Grid interoperability. We imported Web Platform Tests into WebKit (cf. bugs 191515 and 191369) and at the same time completed the missing features and bug fixes so that browsers using WebKit are interoperable, passing 100% of the Grid test suite. For details, see 191358, 189582, 189698, 191881, 191938, 170175, 191473 and 191963. Last but not least, we are exporting more than 100 internal browser tests to the Web Platform test suite.

CSS

Bloomberg is supporting our work to develop new CSS features. One of the new exciting features we’ve been working on is CSS Containment. The goal is to improve the rendering performance of web pages by isolating a subtree from the rest of the document. You can read details on Manuel Rego’s blog post.

Regarding CSS Grid Layout we’ve continued our maintenance duties, bug triage of the Chromium and WebKit bug trackers, and fixed the most severe bugs. One change with impact on end users was related to how percentage row tracks and gaps work in grid containers with indefinite size; the latest spec resolution was implemented in both Chromium and WebKit. We are finishing level 1 of the specification with some missing/incomplete features. First we’ve been working on the new Baseline Alignment algorithm (cf. CSS WG issues 1039, 1365 and 1409). We fixed related issues in Chromium and WebKit. Similarly, we’ve worked on Content Alignment logic (see CSS WG issue 2557) and resolved a bug in Chromium. The new algorithm for baseline alignment caused an important performance regression for certain resizing use cases, so we’ve fixed them with some performance optimization and that landed in Chromium.

We have also worked on various topics related to CSS Text 3. We’ve fixed several bugs to increase the pass rate for the Web Platform test suite in Chromium such as bugs 854624, 900727 and 768363. We are also working on a new CSS value ‘break-spaces’ for the ‘white-space’ property. For details, see the CSS WG discussions: issue 2465 and pull request. We implemented this new property in Chromium under a CSSText3BreakSpaces flag. Additionally, we are currently porting this implementation to Chromium’s new layout engine ‘LayoutNG’. We have plans to implement this feature in WebKit during the second semester.

Multimedia
  • WebRTC: The libwebrtc branch is now upstreamed in WebKit and has been tested with popular servers.
  • Media Source Extensions: WebM MSE support is upstreamed in WebKit.
  • We implemented basic support for <video> and <audio> elements in Servo.
Other activities

Web Engines Hackfest 2018

Last October, we organized the Web Engines Hackfest at our A Coruña office. It was a great event with about 70 attendees from all the web engines, thank you to all the participants! As usual, you can find more information on the event wiki including link to slides and videos of speakers.

TPAC 2018

Again in October, but this time in Lyon (France), 12 people from Igalia attended TPAC and participated in several discussions on the different meetings. Igalia had a booth there showcasing several demos of our last developments running on top of WPE (a WebKit port for embedded devices). Last, Manuel Rego gave a talk on the W3C Developers Meetup about how to contribute to CSS.

This.Javascript: State of Browsers

In December, we also participated with other browser developers to the online This.Javascript: State of Browsers event organized by ThisDot. We talked more specifically about the current work in WebKit.

New Igalians

We are excited to announce that new Igalians are joining us to continue our Web platform effort:

  • Cathie Chen, a Chinese engineer with about 10 years of experience working on browsers. Among other contributions to Chromium, she worked on the new LayoutNG code and added support for list markers.

  • Caio Lima, a Brazilian developer who recently graduated from the Federal University of Bahia. He participated in our coding experience program and notably worked on BigInt support in JSC.

  • Oriol Brufau, a recent graduate in math from Barcelona who is also involved in the CSSWG and the development of various browser engines. He participated in our coding experience program and implemented CSS Logical Properties and Values in WebKit and Chromium.

Coding Experience Programs

Last fall, Sven Sauleau joined our coding experience program and started to work on various BigInt/WebAssembly improvements in V8.

Conclusion

We are thrilled with the web platform achievements we made last semester and we look forward to more work on the web platform in 2019!

Categorieën: Mozilla-nl planet

Mozilla VR Blog: Jingle Smash: Performance Work

Mozilla planet - di, 26/02/2019 - 21:25
 Performance Work

This is part 5 of my series on how I built Jingle Smash, a block smashing WebVR game

Performance was the final step to making Jingle Smash, my block tumbling VR game, ready to ship. WebVR on low-end mobile devices like the Oculus Go can be slow, but with a little work we can at least get over a consistent 30fps, and usually 45 or above. Here are the steps I used to get Jingle Smash working well.

Merge Geometry

I learned from previous demos like my Halloween game that the limiting factor on a device like the Oculus Go isn't texture memory or number of polygons. No, the limiting factor is draw calls. In general we should try to keep draw calls under 100, preferably a lot under.

One of the easiest ways to reduce draw calls is to combine multiple objects into one. If two objects have the same material (even if they use different UVs for the textures), then you can combine their geometry into one object. However, this is generally only effective for geometry that won't change.

In Jingle Smash the background is composed of multiple cones and spheres that make up the trees and hills. They don't move so they are a good candidate for geometry merging. Each color of cone trees uses the same texture and material so I was able to combine them all into a single object per color. Now 9 draw calls become two.

    const tex = game.texture_loader.load('./textures/candycane.png')
    tex.wrapS = THREE.RepeatWrapping
    tex.wrapT = THREE.RepeatWrapping
    tex.repeat.set(8,8)

    const background = new THREE.Group()

    const candyCones = new THREE.Geometry()
    candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(-22,5,0))
    candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(22,5,0))
    candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(7,5,-30))
    candyCones.merge(new THREE.ConeGeometry(1,10,16,8).translate(-13,5,-20))
    background.add(new THREE.Mesh(candyCones, new THREE.MeshLambertMaterial({
        color:'white',
        map:tex,
    })))

    const greenCones = new THREE.Geometry()
    greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-15,2,-5))
    greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-8,2,-28))
    greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(-8.5,0,-25))
    greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(15,2,-5))
    greenCones.merge(new THREE.ConeGeometry(1,5,16,8).translate(14,0,-3))
    background.add(new THREE.Mesh(greenCones, new THREE.MeshLambertMaterial({
        color:'green',
        map:tex,
    })))

The hills also use only a single material (white with lambert reflectance) so I combined them into a single object as well.

    const dome_geo = new THREE.Geometry()
    //left
    dome_geo.merge(new THREE.SphereGeometry(6).translate(-20,-4,0))
    dome_geo.merge(new THREE.SphereGeometry(10).translate(-25,-5,-10))
    //right
    dome_geo.merge(new THREE.SphereGeometry(10).translate(30,-5,-10))
    dome_geo.merge(new THREE.SphereGeometry(6).translate(27,-3,2))
    //front
    dome_geo.merge(new THREE.SphereGeometry(15).translate(0,-6,-40))
    dome_geo.merge(new THREE.SphereGeometry(7).translate(-15,-3,-30))
    dome_geo.merge(new THREE.SphereGeometry(4).translate(7,-1,-25))
    //back
    dome_geo.merge(new THREE.SphereGeometry(15).translate(0,-6,40))
    dome_geo.merge(new THREE.SphereGeometry(7).translate(-15,-3,30))
    dome_geo.merge(new THREE.SphereGeometry(4).translate(7,-1,25))
    background.add(new THREE.Mesh(dome_geo,new THREE.MeshLambertMaterial({color:'white'})))

Texture Compression

The next big thing I tried was texture compression. Before I started this project I thought texture compression simply let textures be uploaded to the GPU faster and take up less RAM, so init time would be reduced but drawing speed would be unaffected. How wrong I was!

Texture compression is a special form of compression designed so that texture data is fast to decompress. Textures are stored compressed in GPU memory and decompressed when accessed. This means less memory has to be read, so memory fetches become faster at the cost of doing the decompression. However, GPUs have dedicated hardware for decompression, so that part is essentially free.

The texture compression formats are also specifically designed to fit well into GPU caches and to allow decompressing just a portion of a texture at a time. In some cases this can reduce drawing time by an order of magnitude.

Texture compression is clearly a win, but it does have a downside. The formats are designed to be fast to decompress at the cost of being very slow to do the initial compression. And I don't mean two or three times slower. It can take many minutes to compress a texture in some of the newer formats. This means texture compression must be done offline, and can't be used for textures generated on the fly like I did for most of the game.

So, sadly, texture compression wouldn't help me much here. The big sky image with clouds could benefit, but almost nothing else would. Additionally, every GPU supports different formats, so I'd have to compress the image multiple times. WebGL2 introduces some new common formats that are supported on most GPUs, but currently ThreeJS doesn't use WebGL2.
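
To give a sense of that fragmentation: in WebGL1 each compression family is exposed as a separate extension, so before shipping a pre-compressed file you have to ask the GPU which families it actually supports. A rough sketch, again assuming a renderer variable holding the WebGLRenderer:

// Sketch only: probe which compressed-texture families this GPU exposes.
// These are standard WebGL1 extension names.
const gl = renderer.getContext()
const families = {
    s3tc: 'WEBGL_compressed_texture_s3tc', // common on desktop GPUs
    etc1: 'WEBGL_compressed_texture_etc1', // common on older mobile GPUs
    astc: 'WEBGL_compressed_texture_astc', // newer mobile GPUs
}
const supported = Object.keys(families).filter(name => gl.getExtension(families[name]) !== null)
console.log('compressed texture support:', supported)
// In practice you would ship one pre-compressed copy of the texture per family
// and pick the file to load based on this list.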

In any case, when I tried compressing the sky it essentially made no difference, and I didn't know why. I started measuring different parts of my game loop and discovered that rendering was only a fraction of my frame time. My game is slow because of CPU work, not the GPU, so I stopped worrying about texture compression for this project.
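
The measurement itself doesn't need anything fancy. Here is a minimal sketch of that kind of frame-loop timing; updatePhysics and updatePointer are hypothetical stand-ins for the real per-frame work, and timing renderer.render() this way only captures the CPU-side cost of submitting the frame, not the GPU work itself:

// Sketch only: time each stage of a frame with performance.now().
// updatePhysics/updatePointer are placeholders, not functions from the game.
function frame() {
    const t0 = performance.now()
    updatePhysics()
    const t1 = performance.now()
    updatePointer()
    const t2 = performance.now()
    renderer.render(scene, camera)   // CPU-side submission cost only
    const t3 = performance.now()
    console.log(
        'physics ' + (t1 - t0).toFixed(1) + 'ms,',
        'pointer ' + (t2 - t1).toFixed(1) + 'ms,',
        'render ' + (t3 - t2).toFixed(1) + 'ms'
    )
    // In a WebVR app this would hook into the existing animation loop instead.
    requestAnimationFrame(frame)
}
requestAnimationFrame(frame)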

Raycasting

I noticed while playing my game that performance dropped whenever I pointed the ornament slingshot towards the floor. I thought that was very odd, so I did some more measurements. It turns out I was wasting many milliseconds on raycasting. I knew my raycasting wasn't as fast as it could be, but why would it be slower when pointed at the floor, where it shouldn't intersect anything but the snow?

The default ThreeJS Raycaster is recursive. It will loop through every object in the scene from the root you provide to the intersectObject() function. Alternatively you can turn off recursion and it will check just the object passed in.
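
To make that concrete, here is a small illustration (not code from the game) of the recursion flag on the stock raycaster. pointerNdc is an assumed {x, y} pair in normalized device coordinates and targetBlocks an assumed array of candidate meshes:

// Illustration only: the second argument to intersectObjects() controls recursion.
const raycaster = new THREE.Raycaster()
raycaster.setFromCamera(pointerNdc, camera)

// recursive: walks every descendant of every object in scene.children
const hitsEverything = raycaster.intersectObjects(scene.children, true)

// non-recursive: tests only the exact objects in the array
const hitsTargetsOnly = raycaster.intersectObjects(targetBlocks, false)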

I use the Raycaster in my Pointer abstraction which is designed to be useful for all sorts of applications, so it recurses through the whole tree. More importantly, it starts at the scene, so it is recursing through the entire scene graph. I did provide a way to filter objects from being selected, but that doesn't affect the recursion, just the returned list.

Think of it like this: the scene graph is like a tree. By default the raycaster has to look at every branch and every leaf on the entire tree, even if I (the programmer) know that the possible targets are only in one part of the tree. What I needed was a way to tell the raycaster which entire branches could be safely skipped: like the entire background.

Raycaster doesn't provide a way to customize its recursive path, but since ThreeJS is open source I just made a copy and added a property called recurseFilter. This is a function the raycaster calls on every branch with the current Object3D; it should return false if the raycaster can skip that branch.

For Jingle Smash I used the filter like this:

const pointer_opts = {
    //Pointer searches everything in the scene by default
    //override this to match just certain things
    intersectionFilter: ((o) => o.userData.clickable),
    //eliminate from raycasting recursion to make it faster
    recurseFilter: (o) => {
        if(o.userData && o.userData.skipRaycast === true) return false
        return true
    },
    ... // rest of the options
}

Now I can set userData.skipRaycast to true on anything I want. For Jingle Smash I skip raycasting on the camera, the sky sphere, the slingshot itself, the particle system used for explosions, the lights, and everything in the background (hills and cones). These changes dropped the cost of raycasting from sometimes over 50 milliseconds to consistently under 1ms. The end result was an improvement of at least 10fps.
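
Opting a branch out is then a one-liner per object. A sketch, assuming the scene objects live in variables like sky and slingshot alongside the background group from earlier:

// Sketch: flag whole branches so the modified raycaster never descends into them.
sky.userData.skipRaycast = true        // the sky sphere
slingshot.userData.skipRaycast = true  // the slingshot model
background.userData.skipRaycast = true // hills and cones; skipping the group prunes all of its children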

Future work

I'm pretty happy with how the game turned out. In the future the main change I'd like to make is to find a faster physics engine. WebAssembly is enabling more of the C/C++/Rust physics engines to be compiled for the web, so I will be able to switch to one of those at some point in the future.

The standard WebXR boilerplate I've been using for the past six months is starting to show its age. I plan to rewrite it from scratch to better handle common use cases and to integrate the raycaster hack. It will also switch to being fully ES6-module compliant now that browsers support modules everywhere and ThreeJS itself and some of its utilities are being ported to modules (check out the jsm directory of the ThreeJS examples).

Categorieën: Mozilla-nl planet

Mozilla Open Innovation Team: Sustainable tech development needs local solutions: Voice tech ideation in Kigali

Mozilla planet - di, 26/02/2019 - 18:13

Mozilla and GIZ co-host ideation hackathon in Kigali to create a speech corpus for Kinyarwanda and to lay the foundation for local voice-recognition applications.

Developers, researchers and startups around the globe working on voice-recognition technology face one problem alike: A lack of freely available voice data in their respective language to train AI-powered Speech-to-Text engines.

Although machine-learning algorithms like Mozilla's Deep Speech are open source, training data is limited. Most of the voice data used by large corporations is not available to the majority of people, is expensive to obtain, or simply does not exist for languages that are not globally spread. The innovative potential of this technology is largely untapped. In providing open datasets, we aim to take away the onerous task of collecting and annotating data, which ultimately reduces one of the main barriers to voice-based technologies and makes front-runner innovations accessible to more entrepreneurs. This is one of the major drivers behind our project Common Voice.

Common Voice is our crowdsourcing initiative and platform to collect and verify voice data and to make it publicly available. But to get more people involved from around the world and to speed up the process of getting to data sets large enough for training purposes, we rely on partners — like-minded commercial and non-commercial organizations with an interest to make technology available and useful to all.

Complementary expertise and shared innovation goals

In GIZ (Deutsche Gesellschaft für Internationale Zusammenarbeit) we are fortunate to have found an ally who, like us, believes that having access to voice data opens up a space for an infinite number of new applications. Voice recognition is well suited to reach people living in oral cultures and those who do not master a widespread language such as English or French. With voice interaction available in their own language we may provide millions of people access to information and ultimately make technology more inclusive.

When we learned about GIZ's "Team V", which brings together digital enthusiasts from GIZ and Mainlevel Consulting to explore voice interaction and mechanisms for collecting voice data in local languages (an effort supported by GIZ's internal innovation fund), the opportunity to leverage complementary strengths became too obvious to pass up.

Hackathon goal: Developing incentive mechanisms for Rwandans to contribute to the collection of open voice data in Kinyarwanda (credit: Daniel Brumund, Mainlevel Consulting/GIZ)

Eventually we started working on a concrete collaboration that would combine Mozilla's expertise in voice-enabled technology and data collection with GIZ's immense regional experience and reach working with local organizations, public authorities and private businesses across various sectors. This resulted in an initial hackathon in Kigali, Rwanda, with the goal of unleashing the participants' creativity to unlock novel means of collecting speech corpora for Kinyarwanda, a language spoken by at least 12 million people in Rwanda and surrounding regions.

Sustainable technology development needs local solutions

The hackathon took place on 12–13 February at kLab, a local innovation hub supported by the Rwandan government. 40 teams had applied with novel incentive mechanisms for voice data collection, proving that AI and machine learning are of great interest to the Rwandan tech community. We invited the 5 teams with the most promising approaches, ones that took into account local opportunities not foreseen by the Common Voice team.

Antoine Sebera, Chief Government Innovation Officer of the Rwanda Information Society Association, opening the hackathon (credit: Daniel Brumund, Mainlevel Consulting/GIZ)

The event began with a rousing call to action for the participants by Antoine Sebera, Chief Government Innovation Officer of the Rwanda Information Society Association, a governmental agency responsible for putting Rwanda’s ambitious digital strategy into practice. GIZ then outlined the goals and evaluation criteria* of the hackathon, which was critical in setting the direction of the entire process. (*The developed solutions were evaluated against the following criteria: user centricity, incentive mechanism, feasibility, ease-of-use, potential to scale and sustainability.)

Kelly Davis, Head of Mozilla's Machine Learning Group, explaining the design and technology behind Deep Speech and Common Voice (credit: Daniel Brumund, Mainlevel Consulting/GIZ)

Kelly Davis, Head of Mozilla's Machine Learning Group, followed with an overview of the design and motivations behind Deep Speech and Common Voice, and how they could quickly be adapted to Kinyarwanda.

During the two-day event, the selected teams refined their initial ideas and took them to the street, fine-tuning them through interviews with potential contributors and partners. By visiting universities, language institutions, and even the city's public transit providers (really!) they put their solutions to the test.

The winner of the hackathon was an idea uniquely Rwandan: with Umuganda 2.0 the team updated the concept of "Umuganda", a regular national community work day taking place every last Saturday of the month, for the digital age. Building on the Common Voice website, the participants would collect voice data during monthly Umuganda sessions at universities, tech hubs or community spaces. The idea also taps into the language pride of Rwandans. User research led by GIZ with students, aid workers and young Rwandans working on language or technology has shown that speaking and preserving Kinyarwanda in a digital context is seen as very important and a great motivation to contribute to the collection of voice data.

Fine-tuning concepts (credit: Daniel Brumund, Mainlevel Consulting/GIZ)

For jury members Olaf Seidel, Head of the GIZ project "Digital Solutions for Sustainable Development" in Rwanda, George Roter, Director of Mozilla Open Innovation Programs, Kelly Davis, and Gilles Richard Mulihano, Chief Innovation Officer at the local software developer ComzAfrica, the idea also resonated because of how easily it could scale throughout Rwanda. Moreover, it could be adapted to other projects and regions relying on collective efforts to build common infrastructures of the digital world, something GIZ is keenly interested in. Umuganda 2.0 shows that we need culturally appropriate solutions to lower barriers and make front-runner innovations accessible to more entrepreneurs.

Next steps

GIZ and the winning team are now working towards a first real-life test at a local university during next month's Umuganda on March 30. The aim of this session is to test whether the spirit of Umuganda and the collection of voice data really go well together, what motivates people to take part, and how we can make voice data collection during the community event fun and interesting. And last but not least, how many hours of voice data can be collected during such an event, to determine whether the outcome justifies the effort.

Hackathon participants with mentors from GIZ and Mozilla (credit: Daniel Brumund, Mainlevel Consulting/GIZ)

GIZ, with its deep connections to local communities in numerous countries, was a perfect partner for Mozilla in this endeavor, and we hope to repeat this success elsewhere; in fact, we look forward to it. In the long term, Mozilla and GIZ aim to continue this promising cooperation, building on our shared visions and objectives for a positive digital future. Allowing access to a wide range of services no matter which language you speak is no doubt a powerful first step.

Alex Klepel, Kelly Davis (Mozilla) and Lea Gimpel (GIZ)

Sustainable tech development needs local solutions: Voice tech ideation in Kigali was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Announcing a New Management Structure for Ecma TC39

Mozilla planet - di, 26/02/2019 - 16:52

Author’s note: You might have noticed that the name of the author appears in this article in the third person. Hi, I’m an engineer at Mozilla working on the Firefox DevTools server. I’m also a TC39 representative. I don’t usually write about myself in the 3rd person.

 

In 2019, Ecma’s TC39 (the standardizing body behind JavaScript/ECMAScript) will be trying something new. The committee has grown in the last few years. As a result, the requirements of running the meeting have grown. To give you an idea of the scale — between 40 and 60 delegates (and sometimes more) meet 6 times a year to discuss proposed changes to the ECMAScript specification. Since we have grown so much, we will be changing our management structure. We will move away from single-chair and vice-chair roles to a flat hierarchy with three chairs sharing the responsibility.

In keeping with this new approach, we’re excited to announce new co-chairs Aki Braun (PayPal), Brian Terlson (Microsoft) and Yulia Startsev (Mozilla). Myles Borins (Google) and Till Schneidereit (Mozilla) will join to help facilitate the meetings. We’ll experiment with this structure this year, and then reflect on what we learn. This new structure allows us to iterate on how we run meetings so that we can be more efficient as a group.

Thanks to our previous chair and vice-chairs Rex Jaeschke, Leo Balter (Bocoup), and Dan Ehrenberg (Igalia) for their fantastic work to date.

If you are interested in the specification process, we invite you to take a look at our contribution documentation and current proposals. If you want to talk JS or just hang out, feel free to join us in #tc39 on http://freenode.irc.org. New to IRC? See https://freenode.net/kb/answer/chat.

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: Mozilla Asks Supreme Court to Protect App Development

Mozilla planet - di, 26/02/2019 - 16:26

Mozilla, joined by Medium, Etsy, Mapbox, Patreon, and Wikimedia, filed a friend of the court brief in Oracle v. Google asking the Supreme Court to review the Federal Circuit court’s holding that Google committed copyright infringement by reusing Oracle’s APIs. The court’s order found that the APIs for the Java platform are protected by copyright and can’t be used by others except with Oracle’s paid permission.

We disagree. Let’s say a manufacturer produces a toaster and publishes the dimensions of the slots so bakers know exactly what size loaf will fit. Bakers can sell bread knowing it will fit in everyone’s toasters; manufacturers can make new toasters knowing they will fit people’s bread; and everyone can make toast regardless of which bakery they frequent.

Should other toaster manufacturers be prohibited from using those square dimensions for their own toasters? Of course not. No one has ever bought a toaster and a loaf of bread and needed to ask themselves if they’d fit together. Yet this is what the Federal Circuit’s ruling could do to software programming, and the ability of different pieces of code or software or hardware to talk to each other. The result is ownership not only of the toaster (the Java platform) but also of the dimensions of the toast (the Java APIs).

This outcome is bad for competition and innovation, and makes no sense for copyright, which exists to promote creativity, not to stand in the way of common sense business practices.

Incorporating the Java APIs in other platforms allows Java developers to quickly and easily make their apps available to more consumers on more platforms without having to learn entirely new systems and code new versions, and makes it easier for users to migrate from one platform to another. This is standard practice in the software world, and indeed, much of the internet’s compatibility and interoperability is based on the ease with which platforms, browsers, and other common technologies can reimplement the functionality of core technologies.

We hope the Supreme Court will agree with us on the importance of this issue for software competition and innovation and agree to hear the case.

Mozilla – Google v Oracle Amicus Brief

The post Mozilla Asks Supreme Court to Protect App Development appeared first on Open Policy & Advocacy.

Categorieën: Mozilla-nl planet

QMO: Firefox 66 Beta 10 Testday Results

Mozilla planet - di, 26/02/2019 - 10:27

Hello Mozillians!

As you may already know, last Friday, February 22nd, we held a new Testday event for Firefox 66 Beta 10.

Thank you all for helping us make Mozilla a better place: Kamila kamciatek.

Results:

– several test cases executed for “Scroll Anchoring”.

Thanks for another successful testday!

Categorieën: Mozilla-nl planet
