Mozilla Nederland: the Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 1 month, 5 days ago

Andrew Halberstadt: The Cost of Fragmented Communication

Tue, 21/05/2019 - 21:04

Mozilla recently announced that we are planning to decommission irc.mozilla.org in favour of a yet-to-be-determined solution. As a long-time user and supporter of IRC, this decision causes me some melancholy, but I 100% believe that it is the right call. Moreover, having had an inside glimpse at the process to replace it, I’m supremely confident whatever is chosen will be the best option for Mozilla’s needs.

I’m not here to explain why deprecating IRC is a good idea. Other people have already done so much more eloquently than I ever could have. I’m also not here to push for a specific replacement. Arguing over chat applications is like arguing over editors or version control. Yes, there are real and important differences from one application to the next, but if there’s one thing we’re spoiled for in 2019 it’s chat applications. Besides, so much time has been spent thinking about the requirements, there’s little anyone could say on the matter that hasn’t already been considered for hours.

This post is about an unrelated, but adjacent issue. An issue that began when mozilla.slack.com first came online, an issue that will likely persist long after irc.mozilla.org rides off into the sunset. An issue I don’t think is brought up enough, and which I’m hoping to start some discussion on now that communication is on everyone’s mind. I’m talking about using two communication platforms at once. For now Slack and IRC, soon to be Slack and something else.

Different platform, same problem.


Hacks.Mozilla.Org: Firefox 67: Dark Mode CSS, WebRender, and more

Tue, 21/05/2019 - 16:32

Firefox 67 is available today, bringing a faster and better JavaScript debugger, support for CSS prefers-color-scheme media queries, and the initial debut of WebRender in stable Firefox.

These are just the highlights. For complete information, see the full release notes and developer documentation for Firefox 67.

CSS Color Scheme Queries

New in Firefox 67, the prefers-color-scheme media feature allows sites to adapt their styles to match a user’s preference for dark or light color schemes, a choice that’s begun to appear in operating systems like Windows, macOS and Android. As an example of what this looks like in the real world, Bugzilla uses prefers-color-scheme to trigger a brand new dark theme if the user has set that preference.

A screenshot of Bugzilla showing both light and dark themes

The prefers-color-scheme media feature is currently supported in Firefox and Safari, with support in Chrome expected later this year.
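The same preference can also be read from JavaScript with matchMedia, which is handy when scripts need to react to the choice as well as stylesheets. A minimal sketch; the dark-theme class name is just an illustrative placeholder:

const darkQuery = window.matchMedia('(prefers-color-scheme: dark)');

function applyColorScheme(queryOrEvent) {
  // Both a MediaQueryList and its "change" event expose a .matches boolean.
  document.body.classList.toggle('dark-theme', queryOrEvent.matches);
}

applyColorScheme(darkQuery);                             // apply on page load
darkQuery.addEventListener('change', applyColorScheme);  // follow OS-level changes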

Additionally, the revert keyword is now supported, making it possible to revert one or more CSS property values back to their original values specified by the user agent’s default styles (or by a custom user stylesheet if one is set). Defined in Cascading and Inheritance Level 4, revert is also supported by Safari.

WebRender’s Stable Debut

Nearly four years ago we started work on a new rendering architecture for Firefox with the goal of better utilizing modern graphics hardware. Today, we’re beginning to gradually enable WebRender for users on Windows 10 with qualified hardware. This marks the first time that WebRender has been enabled outside of Nightly and Beta builds of Firefox, and we hope to expand the supported platforms in future releases.

A drawing of a computer chip with 4 CPU cores and a GPU

You can read more about WebRender in The whole web at maximum FPS: How WebRender gets rid of jank.

More Capable DevTools

Firefox 67 and 68 Developer Edition bring enormous improvements to Firefox’s JavaScript Debugger. Discover faster load times, amazing support for source maps, more predictable breakpoints, brand new logpoints, and much more.

The DevTools Debugger inspecting an application that has spawned several WebWorker threads

We’ve collected the Debugger improvements in their own article: Faster, Smarter JavaScript Debugging in Firefox DevTools.

In addition to the Debugger, the Web Console saw numerous updates, including greater keyboard accessibility and support for importing modules into the current page.

We’ve also removed or deprecated a few seldom-used and experimental tools, including the Canvas Debugger, Shader Editor, Web Audio Inspector, and WebIDE.

Browser Features

Side-by-Side Profiles

Firefox now defaults to using different profiles for each installed version, making it easier than ever to run multiple copies of Firefox side-by-side.

The macOS dock showing Firefox, Firefox Developer Edition, and Firefox Nightly all running simultaneously

In addition, the browser will warn you if you try to open a newer profile with an older version of Firefox, as such mismatches can occasionally lead to data loss. This protection can be bypassed with the new -allow-downgrade command line argument.

Enhanced Privacy Controls

Firefox 67 better protects your privacy online with new Content Blocking options to avoid known cryptominers and fingerprinters.

Cryptominer and Fingerprinter blocking

You also have more control over your extensions, which can be prevented from running in private browsing windows.

Screenshot of uBlock Origin's settings with a banner reading "Allowed in Private Windows"

This is the default for all newly installed extensions in Firefox 67, though your previously installed extensions will receive permission by default. You can adjust these permissions on a per-extension basis by visiting about:addons.

Easier Access to Firefox Accounts and Saved Passwords

We’re working hard to make Firefox Accounts more useful and discoverable this year, starting with a new default icon in the browser toolbar.

Screenshot of the new Firefox Accounts toolbar button and its associated menu

The new icon indicates whether or not you’re signed into a Firefox Account, and makes it easy to perform actions like sending tabs to other devices or manually triggering a sync. Like other toolbar buttons, you can freely move or hide the Firefox Account button according to your preferences.

Check out the many improvements to Firefox’s built-in password manager, including quicker access to your list of saved credentials. You can either click on the new “Logins and Passwords” item in the main menu, or the new “View Saved Logins” button in the login autocomplete popup.

Screenshots of the View Saved Logins popup during autocomplete, and the Logins and Passwords item in the main menu

This can be especially useful if you need to look up or edit a login outside of the normal autofill workflow. And, if you use Firefox Sync, you can access your saved passwords with the Firefox Lockbox app for Android or iOS.

Web Platform Features

Support for legacy FIDO U2F APIs

We’ve enabled legacy FIDO U2F support to improve backwards compatibility with sites that have not yet upgraded to its standards-based successor, WebAuthn.

These APIs make it possible for websites to authenticate users with strong, hardware-backed authentication mechanisms like USB security keys or Windows Hello.
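For illustration, a stripped-down registration call with the standards-based WebAuthn API looks roughly like the sketch below; the relying-party name, user details, and locally generated challenge are placeholders (a real deployment gets its challenge from the server and verifies the result there):

async function registerSecurityKey() {
  const credential = await navigator.credentials.create({
    publicKey: {
      // Placeholder challenge; in practice this must come from your server.
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      rp: { name: 'Example Site' },
      user: {
        id: new TextEncoder().encode('example-user-id'),  // opaque, stable user handle
        name: 'user@example.com',
        displayName: 'Example User',
      },
      pubKeyCredParams: [{ type: 'public-key', alg: -7 }], // -7 = ES256
    },
  });
  // Send `credential` to the server for verification and storage.
  return credential;
}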

AV1 on Windows, Linux, and macOS

Firefox now supports AV1, a next-generation video codec, on all major desktop platforms. Also, playback on those platforms is now powered by dav1d, a faster and more efficient AV1 decoder developed by the VideoLAN and FFmpeg communities.
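A page can ask the browser whether AV1 playback is available before picking a source. A small sketch; the av01 codec string and file names below are examples only and depend on how your video was actually encoded:

const video = document.createElement('video');

// "av01.0.05M.08" is a sample AV1 codec string (Main profile, level 3.1, 8-bit).
const support = video.canPlayType('video/mp4; codecs="av01.0.05M.08"');

if (support === 'probably' || support === 'maybe') {
  video.src = 'clip.av1.mp4';   // hypothetical AV1-encoded file
} else {
  video.src = 'clip.h264.mp4';  // fallback encoding
}
document.body.appendChild(video);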

JavaScript: String.prototype.matchAll() and Dynamic Imports

Firefox joins Chrome in supporting the matchAll() String prototype method, which takes a regular expression and returns an iterator of all matching text, including capturing groups.
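A quick sketch of what that looks like in practice (note that the regular expression needs the g flag):

const text = 'Firefox 67, Firefox 68';

for (const match of text.matchAll(/Firefox (\d+)/g)) {
  // match[0] is the full match, match[1] the first capture group
  console.log(match[0], 'is version', match[1], 'at index', match.index);
}
// "Firefox 67 is version 67 at index 0"
// "Firefox 68 is version 68 at index 12"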

The import() function can now be used to dynamically load JavaScript modules, similarly to how the static import statement works. Now it’s possible to load modules conditionally or in response to user actions, though such imports are harder to reason about for build tools that use static analysis for optimizations like tree shaking.
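A small sketch of conditional loading in response to a user action; the module path and exported function here are hypothetical:

document.getElementById('show-chart').addEventListener('click', async () => {
  try {
    // The charting code is only fetched and parsed when the user asks for it.
    const charts = await import('./charts.js');
    charts.renderChart(document.getElementById('chart-container'));
  } catch (err) {
    console.error('Failed to load the charts module:', err);
  }
});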

And more awaits!

This release includes plenty of other fixes and enhancements not covered here, and lots more to come. So what are you waiting for? Download Firefox 67 today and let us know what you think!

The post Firefox 67: Dark Mode CSS, WebRender, and more appeared first on Mozilla Hacks - the Web developer blog.


The Mozilla Blog: Latest Firefox Release is Faster than Ever

Tue, 21/05/2019 - 15:01

With the introduction of the new Firefox Quantum browser in 2017 we changed the look, feel, and performance of our core product. Since then we have launched new products to complement your experience when you’re using Firefox and serve you beyond the browser. This includes Facebook Container, Firefox Monitor and Firefox Send. Collectively, they work to protect your privacy and keep you safe so you can do the things you love online with ease and peace of mind. We’ve been delivering on that promise to you for more than twenty years by putting your security and privacy first in the building of products that are open and accessible to all.

Today’s new Firefox release continues to bring fast and private together right at the crossroads of performance and security. It includes improvements that continue to keep Firefox fast while giving you more control and assurance through new features that your personal information is safe while you’re online with us.

Download Firefox

To see how much faster Firefox is today, take a look at the comparison video accompanying this post.



How did we make Firefox faster?

To make Firefox faster, we simply prioritized our performance management “to-do” list. We applied many of the same principles of time management that you might use to prioritize your own urgent needs. For example, before you go on a road trip, you check for a full tank of gas, make sure you have enough oil, or have the right air pressure in your tires.

For this latest Firefox release, we adopted the well-known time management strategy of “procrastinate on purpose.” The result is that Firefox is better at performing tasks at the optimal time. Here’s how we reorganized our to-do list to make Firefox faster:

    • Deprioritize least commonly used features: We reviewed areas that we felt could be delayed and delivered on “painting” the page faster so you can browse quicker. This includes delaying setTimeout in order to prioritize scripts for things you need first while delaying others, which helps the main scripts for Instagram, Amazon and Google searches execute 40-80% faster; scanning for alternative style sheets after page load; and not loading the auto-fill module unless there is an actual form to complete.
    • Suspend Idle Tabs: You shouldn’t feel guilty about opening a zillion tabs, but keeping all those tabs open uses your computer’s memory and slows down its performance. Firefox will now detect if your computer’s memory is running low, which we define as lower than 400MB, and suspend tabs that you haven’t used or looked at in a while. Rest assured, if you decide you want to review that webpage, simply click on the tab and it will reload where you left off.
    • Faster startup after customization: For users who have customized their browser with an add-on like a favorite theme, for example changing it to the seasons of the year, or utilizing one of the popular ad-blockers, we’ve made it so that the browser skips a bunch of unnecessary work during subsequent start-ups.
New Privacy Protections

Privacy has always been core to Mozilla’s mission, and recent news and events have given people more reason to care about their privacy while online. In 2018, we launched privacy-focused features like opt-in Tracking Protection on desktop, Tracking Protection by default on iOS, and our popular Facebook Container extension.

For today’s release we continue to bring you privacy features and protections to help you feel safe online when you’re using Firefox. Today’s privacy features include:

  • Blocking fingerprinting and cryptomining: In August 2018, we shared our adapted approach to anti-tracking to address growing consumer demand for features and services that respect online privacy. One of the three key areas we said we’d tackle was mitigating harmful practices like fingerprinting, which builds a digital fingerprint that tracks you across the web, and cryptomining, which uses the power of your computer’s CPU to generate cryptocurrency for someone else’s benefit. Based on recent testing of this feature in our pre-release channels last month, today’s Firefox release gives you the option to “flip a switch” in the browser and protect yourself from these nefarious practices.
          • To turn this feature on, click on the small “i” icon in the address bar and, under Content Blocking, click on the Custom gear on the right side. The other option is to go to your Preferences, then click on Privacy & Security on the left-hand side. From there, you will see Content Blocking listed at the top. Select Custom and check “Cryptominers” and “Fingerprinters” so that they are both blocked.

Block cryptominers and fingerprinters

  • Personalize your Private Browsing Experience: Of the many types of privacy protections that Firefox offers, Private Browsing continues to be one of our most popular features. Private Browsing deletes cookies when you close the browser window and doesn’t save your browsing history. Plus, Private Browsing also blocks tracking cookies by default. Based on user feedback, we’re giving you more controls to get the most out of your Private Browsing experience.
          • Saving Passwords – Although you may enjoy what Private Browsing has to offer, you may still want some of the convenience of a typical Firefox experience, like not having to type in passwords each time you visit a site. In today’s release, you can visit a site in Private Browsing without the hassle of typing in your password each time. Registering and saving passwords for a website in Private Browsing will work just as it does in normal mode.
          • Enable or Disable add-ons/web extensions – Starting with today’s release, you can now decide which extensions you want to enable or disable in Private Browsing. As part of installing an extension, Firefox will ask if it should be allowed to run in Private Browsing, with a default of Don’t Allow. For extensions you installed before today’s release, you can go to your Add-ons menu and enable or disable them for Private Browsing by simply clicking on the extension you’d like to manage.

Manage existing add-ons

Additional features in today’s release:
        • Online accessibility for all – Mozilla has always strived to make the web easier to access for everyone. We’re excited to roll out a fully keyboard-accessible browser toolbar in today’s release. To use this feature, simply press the “tab” or arrow keys to reach the buttons on the right end of the toolbar, including any extension buttons, the toolbar button overflow panel and the main Firefox menu. This is just one more step forward in making access to the web easier for everyone, no matter what your abilities are. To learn about our work on accessibility, you can read more on our Internet Citizen blog.
        • WebRender Update – We will be shipping WebRender to a small group of users, specifically Windows 10 desktop users with NVIDIA graphics cards. Last year we talked about integrating WebRender, our next-generation GPU-based 2D rendering engine. WebRender will help make browsing the web feel faster, more efficient, and smoother by moving core graphics rendering processes to the Graphics Processing Unit. We are starting with this group of users and plan to roll out this feature throughout the year. To learn more, see the Graphics team’s WebRender announcement.
        • Smoother video playback with today’s AV1 Update – AV1 is the new royalty-free video format jointly developed by Mozilla, Google, Microsoft, Amazon and others as part of the Alliance for Open Media (AOMedia). We first provided AV1 support by shipping the reference decoder in January’s Firefox release. Today’s Firefox release is updated to use the newer, higher-performance AV1 decoder known as dav1d. We have seen great growth in the use of AV1 even in just a few months, with our latest figures showing that 11.8% of video playback in Firefox Beta used AV1, up from 0.85% in February and 3% in March.

To see what else is new or what we’ve changed in today’s release, you can check out our release notes.

Check out and download the latest version of Firefox Quantum, available here.

 

The post Latest Firefox Release is Faster than Ever appeared first on The Mozilla Blog.


The Firefox Frontier: No-Judgment Digital Definitions: What is Cryptocurrency?

Tue, 21/05/2019 - 15:00

Cryptocurrency, cryptomining. We hear these terms thrown around a lot these days. It’s a new way to invest. It’s a new way to pay. It’s a new way to be … Read more

The post No-Judgment Digital Definitions: What is Cryptocurrency? appeared first on The Firefox Frontier.


The Firefox Frontier: Let Firefox help you block cryptominers from your computer

Tue, 21/05/2019 - 15:00

Is your computer fan spinning up for no apparent reason? Your electricity bill inexplicably high? Your laptop battery draining much faster than usual? It may not be all the Netflix … Read more

The post Let Firefox help you block cryptominers from your computer appeared first on The Firefox Frontier.


The Firefox Frontier: How to block fingerprinting with Firefox

Tue, 21/05/2019 - 15:00

If you wonder why you keep seeing the same ad, over and over, the answer could be fingerprinting. Fingerprinting is a type of online tracking that’s different from cookies or … Read more

The post How to block fingerprinting with Firefox appeared first on The Firefox Frontier.


The Firefox Frontier: Save and update passwords in Private Browsing with Firefox

Tue, 21/05/2019 - 15:00

Private browsing was invented 14 years ago, making it possible for users to close a browser window and erase traces of their online activity from their computers. Since then, we’ve … Read more

The post Save and update passwords in Private Browsing with Firefox appeared first on The Firefox Frontier.


Mozilla GFX: Graphics Team ships WebRender MVP!

Tue, 21/05/2019 - 12:52

After many months of hard work and preparation, I’m pleased to announce the general availability of WebRender for selected Windows 10 devices. WebRender is a major rewrite of the Firefox rendering architecture using the same kind of GPU-based acceleration techniques used by games.

Until now, our browser rendering pipeline varied depending on the platform and OS. This has a number of drawbacks:

  • In some of those variations, rendering happens on the main CPU, which consumes resources that could otherwise be used to run the rest of the browser.
  • It is costly in terms of time and overhead to maintain and support multiple backends.

WebRender replaces this with a modern, unified architecture which consists of two major elements:

  • The representation of the page in the compositor is no longer a set of rasterized layers, but now an unrasterized display list.
  • The compositing and rasterization steps have been joined into a single GPU-powered rendering step.

For more details, refer to Lin Clark’s excellent Hacks article.

This design provides very fast throughput and eliminates the need for complicated heuristics to guess which parts of the website might change in future frames. Moreover, a single backend that we control means bringing hardware acceleration to more of our users: we run the same code across Windows, Mac, Linux, and Android, and we’re much better equipped to work around driver bugs and avoid blacklisting. It also moves GPU work out of the content process, which will let us have stricter sandboxing in the content process.

We’ve seen significant performance improvements on many websites already, but we’ve only scratched the surface of what’s possible with this architecture. Expect to see even more performance improvements as we begin to take full advantage of our architectural investment in WebRender.

Release Schedule

Since WebRender was quite a large undertaking, we decided to split up the release across a number of smaller targets. The aim of today’s release is to ship a WebRender MVP (minimum viable product) to one target; we plan to learn from that, and then gradually ship WebRender to additional platforms. This release of Firefox 67 will see us roll out WebRender to users running Windows 10 on desktop machines with NVIDIA graphics cards. This currently represents approximately 4% of Firefox’s desktop population.

The go-live date for Firefox 67 is Tuesday, May 21st at 6am PST. WebRender will ship disabled by default. On May 27th, 25% of the qualified population will have WebRender enabled. We will then increase that rollout to 50% by Thursday, May 30th – assuming that everything is going smoothly. WebRender will then be enabled for 100% of the qualified population by the following week.

We will be keeping a close eye on various health metrics to ensure everything is working as expected. We also plan to conduct an A/B test to compare performance against non-WebRender-enabled Firefox instances in release. We have spent the past several months putting WebRender through its paces in Nightly and Beta and even conducted an experiment in the Firefox 66 release to help uncover any potential bugs and issues we might face.

This milestone is a significant one, and we are excited to get here! A big congrats goes out to everyone on the Graphics team and all of those who have pitched in to help us get to this point. We are all looking forward to shipping WebRender to more users throughout the rest of this year.

FAQs

So, is WebRender faster/better than non-WebRender?

When it came to performance, our main goal for the MVP was to avoid regressing performance or correctness, rather than to make things wildly better right away. That said, during our experiments in Nightly and Beta, we’ve observed that users with WebRender experience less jank, and that a number of website performance problems disappear. We will keep a close eye on our metrics as we ship throughout Firefox 67 and will report back on how things are looking in Release.

We are also continuing to prioritize performance work in WebRender. Picture caching 2.0, various display list improvements, and document splitting are all enhancements that leverage our flexible new architecture to speed things up further.

How did you decide which hardware you were going to target for the MVP?

There are a couple of reasons why we chose to ship WebRender first on Windows 10 machines with NVIDIA graphics cards. This is the platform where we currently have the best performance with WebRender, and we wanted to compete with our flagship graphics experience to prove that this was a better architecture. Also, the percentage of Firefox users with this combination in release (approximately 4%) is a safe number, such that we don’t introduce too much risk for this first milestone.

Feels like y’all have been talking about WebRender for a while now. Why has it taken so long?

In short: computers are hard.

WebRender started as an experiment in the Servo research project and as such, only needed to support a small number of use cases. Integrating WebRender into Gecko meant that we had to build on that and account for the multitude of variables possible to encounter in sites on the web today. As I am sure you can imagine, that has been no small feat.

Replacing the browser’s existing rendering engine is a task that required careful consideration, planning and thoughtfulness. We have basically been performing open heart surgery on a living, moving beast (that wasn’t put under anaesthesia).

The good news is that we have learned quite a bit throughout this process and are confident in our plans going forward when it comes to continuing to ship WebRender this year.

I am not on qualified hardware, but I really want to try WebRender! How can I help/test?

We currently have WebRender enabled in Nightly for:

  • Recent Intel and AMD GPUs on Windows 10 desktops
  • Linux users on Intel integrated GPUs with Mesa 18.2 or newer with screens smaller than 4K

To turn on WebRender, go to about:config, enable the gfx.webrender.all pref, and restart the browser.  Note: doing so may cause pages to render incorrectly, your browser to crash, or other problems.  Proceed with caution!

The best place to report bugs related to WebRender in Firefox is the Graphics :: WebRender component in Bugzilla.

Congrats on shipping the MVP! When are you shipping WebRender to more places?

Why, thank you! Our goal is to ship WebRender out to more users throughout the year while continuing to make it more performant. Our top priorities for the next few months will be the following:

  • Monitor the health of Firefox 67 and continue to work on some correctness bugs for Firefox 68
  • Ship WebRender to users running Windows 10 on desktop with AMD graphics cards in Firefox 68
  • We are also currently working on the foundational tasks for getting WebRender performing well on Android/GeckoView and deciding when we can turn that on in GeckoView Nightly
  • We also want to determine potential release targets for hardware with Intel graphics cards

We got together in Toronto in early April and created a high-level roadmap.

We will keep everyone posted as our plans develop.


This Week In Rust: This Week in Rust 287

Tue, 21/05/2019 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week we have two crates: memory-profiler, which does what it says on the box, and momo, a procedural macro that outlines generic conversions to reduce monomorphized code. Thanks to ehsanmok and llogiq for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available; visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

240 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events

Africa, Asia, Europe, North America, South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Just the presence of well integrated Algebraic Data Types (ADTs) makes an incredible amount of difference. They are used to represent errors in a meaningful and easy to understand way (Result<T>), are used to show that a function may or may not return a meaningful value without needing a garbage value (Option<T>), and the optional case can even be used to wrap a null pointer scenario in a safe way (Option<Ref<T>> being the closest to a literal translation I think).

That’s just one small feature that permeates the language. Whatever the opposite of a death-of-a-thousand-cuts is, Rust has it.

tomcatfish on the orange website

Thanks to PrototypeNM1 for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nasa42, llogiq, and Flavsditz.

Discuss on r/rust.


The Rust Programming Language Blog: The 2019 Rust Event Lineup

Mon, 20/05/2019 - 02:00

We're excited for the 2019 conference season, which we're actually late in writing up. Some incredible events have already happened! Read on to learn more about all the events occurring around the world, past and future.

December 15-16, 2018: RustRush

Yes, RustRush was actually in 2018, but we didn't cover it in the 2018 event lineup so we're counting it in 2019! This was the first Rust event in Russia. You can watch the talk videos and follow the conference on Twitter.

March 29-30, 2019: Rust Latam

The Rust Latam Conference is Latin America's leading event about Rust. Their first event happened in Montevideo this year, and the videos are available to watch! Rust Latam plans to be a yearly event, so watch Twitter for information about next year's event.

April 20-23, 2019: RustCon Asia

RustCon Asia was the first Rust conference in Asia! The talk videos are already available on YouTube! Follow @RustConAsia on Twitter for future updates.

April 26-29, 2019: Oxidize

Oxidize was a conference specifically about using Rust on embedded devices that took place in Berlin. The videos are now available on YouTube, and follow @oxidizeconf on Twitter for future updates.

June 28-29, 2019: RustLab

RustLab is a new conference for this year that will be taking place in Florence, Italy. Their session and workshop lineup has been announced, and tickets are now available! Follow the conference on Twitter for the most up-to-date information.

August 22-23: RustConf

The official RustConf will again be taking place in Portland, OR, USA. Thursday is a day of trainings and Friday is the main day of talks. See Twitter for the latest announcements!

September 20-21: Colorado Gold Rust

Colorado Gold Rust is a new conference for this year, and is taking place in Denver, CO, USA. Their CFP and ticket sales are open now, and you can also follow them on twitter!

October 18-19: Rust Belt Rust

This year's Rust Belt Rust will be taking place in Dayton, OH, USA, the birthplace of flight! The CFP and ticket sales will open soon. Check Twitter for announcements.

November: RustFest Barcelona

The exact dates for RustFest Barcelona haven't been announced yet, but it's slated to happen sometime in November. Keep an eye on the RustFest Twitter for announcements!

We are so lucky and excited to have so many wonderful conferences around the world in 2019! Have fun at the events, and we hope there are even more in 2020!


Cameron Kaiser: TenFourFox FPR14 available

Sat, 18/05/2019 - 03:21
TenFourFox Feature Parity Release 14 final is now available for testing (downloads, hashes, release notes). Besides outstanding security updates, this release fixes the current tab property in TenFourFox's AppleScript support so that this exceptional script now functions properly as expected:

tell application "TenFourFoxG5"
  tell front browser window
    set URL of current tab to "https://www.google.com/"
    repeat while (current tab is busy)
      delay 1
    end repeat
    tell current tab
      run JavaScript "let f = document.getElementById('tsf');f.q.value='tenfourfox';f.submit();"
    end tell
    repeat while (current tab is busy)
      delay 1
    end repeat
    tell current tab
      run JavaScript "return document.getElementsByTagName('h3')[0].innerText + ' ' + document.getElementsByTagName('cite')[0].innerText"
    end tell
  end tell
end tell

The font blacklist has also been updated and I have also hard-set the frame rate to 30 in the pref even though the frame rate is capped at 30 internally and such a change is simply a placebo. However, there are people claiming this makes a difference, so now you have your placebo pill and I hope you like the taste of it. :P The H.264 wiki page is also available, if you haven't tried MPEG-4/H.264 playback. The browser will finalize Monday evening Pacific as usual.

For FPR15, the JavaScript update that slipped from this milestone is back on. It's hacky and I don't know if it will work; we may be approaching the limits of feature parity, but it should compile, at least. I'm trying to reduce the changes to JavaScript in this release so that regressions are also similarly limited. However, I'm also looking at adding some later optimizations to garbage collection and using Mozilla's own list of malware scripts to additionally seed basic adblock, which I think can be safely done simultaneously.


Mozilla VR Blog: Bringing WebXR to iOS

Fri, 17/05/2019 - 00:11
Bringing WebXR to iOS

The first version of the WebXR Device API is close to being finalized, and browsers will start implementing the standard soon (if they haven't already). Over the past few months we've been working on updating the WebXR Viewer (source on github, new version available now on the iOS App Store) to be ready when the specification is finalized, giving developers and users at least one WebXR solution on iOS. The current release is a step along this path.

Most of the work we've been doing is hidden from the user; we've re-written parts of the app to be more modern, more robust and efficient. And we've removed little-used parts of the app, like video and image capture, that have been made obsolete by recent iOS capabilities.

There are two major parts to the recent update of the Viewer that are visible to users and developers.

A New WebXR API

We've updated the app to support a new implementation of the WebXR API based on the official WebXR Polyfill. This polyfill currently implements a version of the standard from last year, but when it is updated to the final standard, the WebXR API used by the WebXR Viewer will follow quickly behind. Keep an eye on the standard and polyfill to get a sense of when that will happen; keep an eye on your favorite tools, as well, as they will be adding or improving their WebXR support over the next few months. (The WebXR Viewer continues to support our early proposal for a WebXR API, but that API will eventually be deprecated in favor of the official one.)

We've embedded the polyfill into the app, so the API will be automatically available to any web page loaded in the app, without the need to load a polyfill or custom API. Our goal is to have any WebXR web applications designed to use AR on a mobile phone or tablet run in the WebXR Viewer. You can try this now, by enabling the "Expose WebXR API" preference in the viewer. Any loaded page will see the WebXR API exposed on navigator.xr, even though most "webxr" content on the web won't work right now because the standard is in a state of constant change.

You can find the current code for our API in the webxr-ios-js repository, along with a set of examples we're creating to explore the current API and future additions to it. These examples are available online. A glance at the code, or the examples, will show that we are not only implementing the initial API, but also building implementations of a number of proposed additions to the standard, including anchors, hit testing, and access to real-world geometry. Implementing support for requesting geospatial coordinate system alignment allows integration with the existing web Geolocation API, enabling AR experiences that rely on geospatial data (illustrated simply in the banner image above). We will soon be exploring an API for camera access to enable computer vision.
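As a rough illustration of what a page sees, requesting an AR session through navigator.xr looks something like the sketch below. The WebXR specification was still changing when this was written, so method names and options may differ from what the Viewer's embedded polyfill or the final standard expose; treat this purely as a sketch:

async function startAr(canvas) {
  if (!navigator.xr) {
    console.log('WebXR is not exposed on this page');
    return;
  }
  const session = await navigator.xr.requestSession('immersive-ar');
  const gl = canvas.getContext('webgl', { xrCompatible: true });
  await session.updateRenderState({ baseLayer: new XRWebGLLayer(session, gl) });

  session.requestAnimationFrame(function onFrame(time, frame) {
    // Per-frame pose queries, hit tests, and drawing go here.
    session.requestAnimationFrame(onFrame);
  });
}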

Most of these APIs were available in our previous WebXR API implementation, but the implementation here more closely aligns with the work of the Immersive Web Community Group. (We have also kept a few of our previous ARKit-specific APIs, but marked them explicitly as not being on the standards track yet.)

A New Approach to WebXR Permissions

The most visible change to the application is a permissions API that is popped up when a web page requests a WebXR Session. Previously, the app was an early experiment and devoted to WebXR applications built with our custom API, so we did not explicitly ask for permission, inferring that anyone running an app in our experimental web browser intends to use WebXR capabilities.

When WebXR is released, browsers will need to obtain a user's permission before a web page can access the potentially sensitive data available via WebXR. We are particularly interested in what levels of permissions WebXR should have, so that users retain control of what data apps may require. One approach that seems reasonable is to differentiate between basic access (e.g., device motion, perhaps basic hit testing against the world), access to more detailed knowledge of the world (e.g., illumination, world geometry) and finally access to cameras and other similar sensors. The corresponding permission dialogs in the Viewer are shown here.


If the user gives permission, an icon in the URL bar shows the permission level, similar to the camera and microphone access icons in Firefox today.


Tapping on the icon brings up the permission dialog again, allowing the user to temporarily restrict the flow of data to the web app. This is particularly relevant for modifying access to cameras, where a mobile user (especially when HMDs are more common) may want to turn off sensors depending on the location, or who is nearby.

A final aspect of permissions we are exploring is the idea of a "Lite" mode. In each of the dialogs above, a user can select Lite mode, which brings up a UI allowing them to select a single ARKit plane.
The APIs that expose world knowledge to the web page (including hit testing at the most basic level, and geometry at the middle level) will only use that single plane as the source of their actions. Only that plane would produce hits, and only that plane would have its geometry sent into the page. This would allow the user to limit the information passed to a page, while still being able to access AR apps on the web.

Moving Forward

We are excited about the future of XR applications on the web, and will be using the WebXR Viewer to provide access to WebXR on iOS, as well as a testbed for new APIs and interaction ideas. We hope you will join us on the next step in the evolution of WebXR!


Chris H-C: Virtual Private Social Network: Tales of a BBM Exodus

Thu, 16/05/2019 - 22:19


On Thursday April 18, my primary mechanism for talking to friends notified me that it was going away. I’d been using BlackBerry Messenger (BBM) since I started work at Research in Motion in 2008 and had found it to be tolerably built. It messaged people instantly over any data connection I had access to, what more could I ask for?

The most important BBM feature in my circle of contacts was its Groups feature. A bunch of people with BBM could form a Group and then messages, video, pictures, lists were all shared amongst the people in the group.

Essentially it acted as a virtual private social network. I could talk to a broad group of friends about the next time we were getting together or about some cute thing my daughter did. I could talk to the subset who lived in Waterloo about Waterloo activities, and whose turn it was for Sunday Dinner. The Beers group kept track of whose turn it was to pay, and it combined nicely with the chat for random nerdy tidbits and coordinating when each of us arrived at the pub. Even my in-laws had a group to coordinate visits, brag about child developmental milestones, and manage Christmas.

And then BBM announced it was going away, giving users six weeks to find a replacement… or, as seemed more likely to me, replacements.

First thing I did, since the notice came during working hours, was mutter angrily that Mozilla didn’t have an Instant Messaging product that I could, by default, trust. (We do have a messaging product, but it’s only for Desktop and has an email focus.)

The second thing I did was survey the available IM apps, cross-correlating them with whether or not various of my BBM contacts already had it installed… the existing landscape seemed to be a mess. I found that WhatsApp was by far the most popular but was bought by Facebook in 2014 and required a real phone number for your account. Signal’s the only one with a privacy/security story that I and others could trust (Telegram has some weight here, but not much) but it, too, required a phone number in order to sign up. Slack’s something only my tech friends used, and their privacy policy was a shambles. Discord’s something only my gaming friends used, and was basically Slack with push-to-talk.

So we fragmented. My extended friend network went to Google Hangouts, since just about everyone already had a Google Account anyway (even if they didn’t use it for anything). The Beers group went to Discord because a plurality of the group already had it installed.

And my in-laws’ family group… well, we still have two weeks left to figure that one out. Last I heard someone was stumping for Facebook Messenger, to which I replied “Could we not?”

The lack of reasonable options and the (sad, understandable) willingness of my relatives to trade privacy for convenience is bothering me so much that I’ve started thinking about writing my own IM/virtual private social network.

You’d think I’d know better than to even think about architecting anything even close to either of those topics… but the more I think about it the more webtech seems like an ideal fit for this. Notifications, Push, ServiceWorkers, WebRTC peer connections, TLS, WebSockets, OAuth: stir lightly et voila.

But even ignoring the massive mistake diving into either of those ponds full of crazy would be, the time was too short for that four weeks ago, and is trebly so now. I might as well make my peace that Facebook will learn my mobile phone number and connect it indelibly with its picture of what advertisements it thinks I would be most receptive to.

Yay.

:chutten


Hacks.Mozilla.Org: Faster smarter JavaScript debugging in Firefox DevTools

Thu, 16/05/2019 - 17:28

Script debugging is one of the most powerful and complex productivity features in the web developer toolbox. Done right, it empowers developers to fix bugs quickly and efficiently. So the question for us, the Firefox DevTools team, has been, are the Firefox DevTools doing it right?

We’ve been listening to feedback from our community. Above everything, we heard the need for greater reliability and performance, especially with modern web apps. Moreover, script debugging is a hard-to-learn skill that should work in a similar fashion across browsers, but isn’t consistent because of feature and UI gaps.

With these pain points in mind, the DevTools Debugger team – with help from our tireless developer community – landed countless updates to design a more productive debugging experience. The work is ongoing, but Firefox 67 marks an important milestone, and we wanted to highlight some of the fantastic improvements and features. We invite you to open up Firefox Quantum: Developer Edition, try out the debugger on the examples below and on your own projects, and let us know if you notice the difference.

A rock-solid debugging experience

Fast and reliable debugging is the result of many smaller interactions. From initial loading and source mapping to breakpoints, console logging, and variable previews, everything needs to feel solid and responsive. The debugger should be consistent, predictable, and capable of understanding common tools like webpack, Babel, and TypeScript.

We can proudly say that all of those areas have improved in the past months:

  1. Faster load time. We’ve eliminated the worst performance cliffs that made the debugger slow to open. This has resulted in a 30% speedup in our performance test suite. We’ll share more of our performance adventures in a future post.
  2. Excellent source map support. A revamped and faster source-map backend perfects the illusion that you’re debugging your code, not the compiled output from Babel, Webpack, TypeScript, vue.js, etc.
    Generating correct source maps can be challenging, so we also contributed patches to build tools (e.g. Babel, vue.js, regenerator) – benefiting the whole ecosystem.
  3. Reduced overhead when the debugger isn’t focused. No need to worry any longer about keeping the DevTools open! We found and removed many expensive calculations that were running in the debugger while it’s in the background.
  4. Predictable breakpoints, pausing, and stepping. We fixed many long-standing bugs deep in the debugger architecture, solving some of the most common and frustrating issues related to lost breakpoints, pausing in the wrong script, or stepping through pretty-printed code.
  5. Faster variable preview. Thanks to our faster source-map support (and lots of additional work), previews are now displayed much more quickly when you hover your mouse over a variable while execution is paused.

These are just a handful of highlights. We’ve also resolved countless bugs and polish issues.

Looking ahead

Foremost, we must maintain a high standard of quality, which we’ll accomplish by explicitly setting aside time for polish in our planning. Guided by user feedback, we intend to use this time to improve new and existing features alike.

Second, continued investment in our performance and correctness tests ensures that the ever-changing JavaScript ecosystem, including a wide variety of frameworks and compiled languages, is well supported by our tools.

Debug all the things with new features

Finding and pausing in just the right location can be key to understanding a bug. This should feel effortless, so we’ve scrutinized our own tools—and studied others—to give you the best possible experience.

Inline breakpoints for fine-grained pausing and stepping

Inline handlers are the perfect match for the extra granularity.

Why should breakpoints operate on lines, when lines can have multiple statements? Thanks to inline breakpoints, it’s now easier than ever to debug minified scripts, arrow functions, and chained method calls. Learn more about breakpoints on MDN or try out the demo.

Logpoints combine the power of Console and Debugger

Adding Logpoints to monitor application state.

Console logging, also called printf() debugging, is a quick and easy way to observe your program’s flow, but it rapidly becomes tedious. Logpoints break that tiresome edit-build-refresh cycle by dynamically injecting console.log() statements into your running application. You can stay in the browser and monitor variables without pausing or editing any code. Learn more about log points on MDN.

Seamless debugging for JavaScript Workers

The new Threads panel in the Debugger for Worker debugging

Web Workers power the modern web and need to be first-class concepts in DevTools. Using the new Threads panel, you can switch between and independently pause different execution contexts. This allows workers and their scripts to be debugged within the same Debugger panel, similarly to other modern browsers. Learn more about Worker debugging on MDN.
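For reference, a minimal worker setup whose script shows up as a separate thread in the panel; the file name and message values are hypothetical:

// main.js: spawn a worker and talk to it
const worker = new Worker('worker.js');
worker.onmessage = (event) => console.log('from worker:', event.data);
worker.postMessage(40);

// worker.js: this code runs on its own thread and can now be paused
// independently from the Threads panel
self.onmessage = (event) => {
  const result = event.data + 2;  // set a breakpoint here
  self.postMessage(result);
};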

Human-friendly variable names for source maps

Debugging bundled and compressed code isn’t easy. The Source Maps project, started and maintained by Firefox, bridges the gap between minified code running in the browser and its original, human-friendly version, but the translation isn’t perfect. Often, bits of the minified build output shine through and break the illusion. We can do better!

From build output back to original human-readable variables

By combining source maps with the Babel parser, Firefox’s Debugger can now preview the original variables you care about, and hide the extraneous cruft from compilers and bundlers. This can even work in the console, automatically resolving human-friendly identifiers to their actual, minified names behind the scenes. Due to its performance overhead, you have to enable this feature separately by clicking the “Map” checkbox in the Debugger’s Scopes panel. Read the MDN documentation on using the map scopes feature.

What’s next

Developers frequently need to switch between browsers to ensure that the web works for everyone, and we want our DevTools to be an intuitive, seamless experience. Though browsers have converged on the same broad organization for tools, we know there are still gaps in both features and UI. To help us address those gaps, please let us know where you experience friction when switching browsers in your daily work.

Your input makes a big difference

As always, we would love to hear your feedback on how we can improve DevTools and the browser.

While all these updates will be ready to try out in Firefox 67, when it’s released next week, we’ve polished them to perfection in Firefox 68 and added a few more goodies. Download Firefox Developer Edition (68) to try the latest updates for devtools and platform now.

The post Faster smarter JavaScript debugging in Firefox DevTools appeared first on Mozilla Hacks - the Web developer blog.


Mike Conley: A few words on main thread disk access for general audiences

Thu, 16/05/2019 - 16:49

I’m writing this in lieu of a traditional Firefox Front-end Performance Update, as I think this will be more useful in the long run than just a snapshot of what my team is doing.

I want to talk about main thread disk access (sometimes referred to more generally as “main thread IO”). Specifically, I’m going to argue that main thread disk access is lethal to program responsiveness. For some folks reading this, that might be an obvious argument not worth making, or one already made ad nauseam — if that’s you, this blog post is probably not for you. You can go ahead and skip most or all of it, if you’d like. Or just skim it. You never know — there might be something in here you didn’t know or hadn’t thought about!

For everybody else, scoot your chairs forward, grab a snack, and read on.

Disclaimer: I wouldn’t call myself a disk specialist. I don’t work for Western Digital or Seagate. I don’t design file systems. I have, however, been using and writing software for computers for a significant chunk of my life, and I seem to have accumulated a bunch of information about disks. Some of that information might be incorrect or imprecise. Please send me mail at mike dot d dot conley at gmail dot com if any of this strikes you as wildly inaccurate (though forgive me if I politely disregard pedantry), and then I can update the post.

The mechanical parts of a computer

If you grab a screwdriver and (carefully) open up a laptop or desktop computer, what do you see? Circuit boards, chips, wires and plugs. Lots of electrons flowing around in there, moving quickly and invisibly.

Notably, there aren’t many mechanical moving parts of a modern computer. Nothing to grease up, nowhere to pour lubricant. Opening up my desktop at home, the only moving parts I can really see are the cooling fans near the CPU and power supply (and if you’re like me, you’ll also notice that your cooling fans are caked with dust and in need of a cleaning).

There’s another moving part that’s harder to see — the hard drive. This might not be obvious, because most mechanical drives (I’ve heard them sometimes referred to as magnetic drives, spinny drives, physical drives, platter drives and HDDs. There are probably more terms.) hide their moving parts inside of the disk enclosure.1

If you ever get the opportunity to open one of these enclosures (perhaps the disk has failed or is otherwise being replaced, and you’re just about to get rid of it) I encourage you to.

As you disassemble the drive, what you’ll probably notice are circular parts, layered on top of one another on a motor that spins them. In between those circles are little arms that can move back and forth. This next image shows one of those circles, and one of those little arms.

There are several of those circles stacked on top of one another, and several of those arms in between them. We’re only seeing the top one in this photo.

Does this remind you of anything? The circular parts remind me of CDs and DVDs, but the arms reaching across them remind me of vinyl players.

Vinyl’s back, baby!

The comparison isn’t that outlandish. If you ignore some of the lower-level details, CDs, DVDs, vinyl players and hard drives all operate under the same basic principles:

  1. The circular part has information encoded on it.
  2. An arm of some kind is able to reach across the radius of the circular part.
  3. Because the circular part is spinning, the arm is able to reach all parts of it.
  4. The end of the arm is used to read the information encoded on it.

There’s some extra complexity for hard drives. Normally there’s more than one spinning platter and one arm, all stacked up, so it’s more like several vinyl players piled on top of one another.

Hard drives are also typically written to as well as read from, whereas CDs, DVDs and vinyls tend to be written to once, and then used as “read-only memory.” (Though, yes, there are exceptions there.)

Lastly, for hard drives, there’s a bit I’m skipping over involving caches, where parts of the information encoded on the spinning platters are temporarily held elsewhere for easier and faster access, but we’ll ignore that for now for simplicity, and because it wrecks my simile.2

So, in general, when you’re asking a computer to read a file off of your hard drive, it’s a bit like asking it to play a tune on a vinyl. It needs to find the right starting place to put the needle, then it needs to put the needle there and only then will the song play.

For hard drives, the act of moving the “arm” to find the right spot is called seeking.

Contiguous blocks of information and fragmentation

Have you ever had to defragment your hard drive? What does that even mean? I’m going to spend a few moments trying to explain that at a high-level. Again, if this is something you already understand, go ahead and skip this part.

Most functional hard drives allow you to do the following useful operations:

  1. Write data to the drive
  2. Read data from the drive
  3. Remove data from the drive

That last one is interesting, because usually when you delete a file from your computer, the information isn’t actually erased from the disk. This is true even after emptying your Trash / Recycling Bin — perhaps surprisingly, the files that you asked to be removed are still there encoded on the circular platters as 1’s and 0’s. This is why it’s sometimes possible to recover deleted files even when it seems that all is lost.

Allow me to explain.

Just like there are different ways of organizing a sock drawer (at random, by colour, by type, by age, by amount of damage), there are ways of organizing a hard drive. These “ways” are called file systems. There are lots of different file systems. If you’re using a modern version of Windows, you’re probably using a file system called NTFS. One of the things that a file system is responsible for is knowing where your files are on the spinning platters. This file system is also responsible for knowing where there’s free space on the spinning platters to write new data to.

When you delete a file, what tends to happen is that your file system marks those sectors of the platter as places where new information can be written to, but doesn’t immediately overwrite those sectors. That’s one reason why sometimes deleted files can be recovered.

Depending on your file system, there’s a natural consequence as you delete and write files of different sizes to the hard drive: fragmentation. This kinda sounds like the actual physical disk is falling apart, but that’s not what it means. Data fragmentation is probably a more precise way of thinking about it.

Imagine you have a sheet of white paper broken up into a grid of 5 boxes by 5 boxes (25 boxes in total), and a box of paints and paintbrushes.

Each square on the paper is white to start. Now, starting from the top-left, and going from left-to-right, top-to-bottom, use your paint to fill in 10 of those boxes with the colour red. Now use your paint to fill in the next 5 boxes with blue. Now do 3 more boxes with yellow.

So we’ve got our colour-filled boxes in neat, organized rows (red, then blue, then yellow), and we’ve got 18 of them filled, and 7 of them still white.

Now let’s say we don’t care about the colour blue. We’re okay to paint over those now with a new colour. We also want to fill in 10 boxes with the colour purple. Hm… there aren’t enough free white boxes to put in that many purple ones, but we have these 5 blue ones we can paint over. Let’s paint over them with purple, and then put the next 5 at the end in the white boxes.

So now 23 of the boxes are filled, we’ve got 2 left at the end that are white, but also, notice that the purple boxes aren’t all together — they’ve been broken apart into two sections. They’ve been fragmented.

This is an incredibly simplified model, but (I hope) it demonstrates what happens when you delete and write files to a hard drive. Gaps open up that can be written to, and bits and pieces of files end up being distributed across the platters as fragments.

This also occurs as files grow. If, for example, we decided to paint two more white boxes red, we’d need to paint the ones at the very end, breaking up the red boxes so that they’re fragmented.

So going back to our vinyl player example for a second —  the ideal scenario is that you start a song at the beginning and it plays straight through until the end, right? The more common case with disk drives, however, is you read bits and pieces of a song from different parts of the vinyl: you have to lift and move the arm each time until eventually you have heard the song from start to finish. That seeking of the arm adds overhead to the time it takes to listen to the song from beginning to end.

When your hard drive undergoes defragmentation, what your computer does is try to re-organize your disk so that files are in contiguous sectors on the platters. That’s a fancy way of saying that they’re all in a row on the platter, so they can be read in without the overhead of seeking around to reassemble them from fragments.

Skipping that overhead can have huge benefits to your computer’s performance, because the disk is usually the slowest part of your computer.

I’ve skipped over and simplified a bunch of stuff here in the interests of brevity, but this is a great video that gives a crash course on file systems and storage. I encourage you to watch it.

On the relative input / output speeds of modern computing components

I mentioned in the disclaimer at the start of this post that I’m not a disk specialist or expert. Scott Davis is probably a better bet as one of those. His bio lists an impressive wealth of experience, and mentions that he’s “a recognized expert in virtualization, clustering, operating systems, cloud computing, file systems, storage, end user computing and cloud native applications.”

I don’t know Scott at all (if you’re reading this, Hi, Scott!), but let’s just agree for now that he probably knows more about disks than I do.

I’m picking Scott as an expert because of a particularly illustrative analogy that was posted to a blog for a company he used to work for. The analogy compares the speeds of different media that can be used to store information on a computer. Specifically, it compares the following:

  1. RAM
  2. The network with a decent connection
  3. Flash drives
  4. Magnetic hard drives — what we’ve been discussing up until now.

For these media, the post claims that input / output speed can be measured using the following units:

  • RAM is in nanoseconds
  • 10GbE Network speed is in microseconds (~50 microseconds)
  • Flash speed is in microseconds (between 20-500+ microseconds)
  • Disk speed is in milliseconds

That all seems pretty fast. What’s the big deal? Well, it helps if we zoom in a little bit. The post does this by supposing we measured RAM speed in minutes.

If that’s the case, then we’d have to measure network speed in weeks.

And if that’s the case, then we’d want to measure the speed of a Flash drive in months.

And if that’s the case, then we’d have to measure the speed of a magnetic spinny disk in decades.
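To make that scaling concrete, here’s a small sketch in Python. The latency figures are rough assumptions pulled from the ranges above (about 1 ns for RAM, ~50 µs for the network, ~200 µs for flash, ~10 ms for a spinning disk), rescaled so that one RAM access takes a minute of “human time”:

```python
# Rough, assumed latencies in nanoseconds; real hardware varies widely.
latencies_ns = {
    "RAM access": 1,
    "10GbE network round trip": 50_000,        # ~50 microseconds
    "Flash read": 200_000,                     # within the 20-500 microsecond range
    "Magnetic disk seek + read": 10_000_000,   # ~10 milliseconds
}

def humanize(seconds):
    """Pick a readable unit (favouring the next unit up once there are at least two of them)."""
    units = [("years", 365 * 86400), ("months", 30 * 86400), ("weeks", 7 * 86400),
             ("days", 86400), ("hours", 3600), ("minutes", 60)]
    for name, length in units:
        if seconds >= 2 * length or name == "minutes":
            return f"{seconds / length:.1f} {name}"

# Rescale so that a 1 ns RAM access takes one minute of "human time".
for name, ns in latencies_ns.items():
    print(f"{name}: {humanize(ns * 60)}")
```

Running it lands on roughly the same picture described above: minutes for RAM, weeks for the network, months for flash, and getting on for two decades for the spinning disk.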

Update (May 23, 2019): My Uncle Mark, who also works in computing, sent me links that show similar visualizations of computing latency: this one has a really excellent infographic, and this one has more discussion. These articles highlight network latency as the worst offender, which is true especially when the quality of service is low, but I’m mostly writing this post for folks who hack on Firefox where the vast majority of networking occurs off of the main thread.

I wish I had some ACM paper, or something written by a computer science professor, that I could point you to in order to bolster the following claim. I don’t, not because one doesn’t exist, but because I’m too lazy to look for one. I hope you’ll forgive me for that, but I don’t think I’m saying anything super controversial when I say:

In the common case, for a personal computer, it’s best to assume that reading and writing to the disk is the slowest operation you can perform.

Sure, there are edge cases where other things in the system might be slower. And there is that disk cache that I breezed over earlier that might make reading or writing cheaper. And sometimes the operating system tries to do smart things to help you. For now, just let it go. I’m making a broad generalization that I think covers the common cases, and I’m talking about what’s best to assume.

Single and multi-threaded restaurants

When I try to describe threading and concurrency to someone, I inevitably fall back to the metaphor of cooks in a kitchen in a restaurant. This is a special restaurant where there’s only one seat, for a single customer — you, the user.

Single-threaded programs

Let’s imagine a restaurant that’s very, very small and simple. In this restaurant, the cook is also acting as the waiter / waitress / server. That means when you place your order, the server / cook goes into the kitchen and makes it for you. While they’re gone, you can’t really ask for anything else — the server / cook is busy making the thing you asked for last.

This is how most simple, single-threaded programs work—the user feeds in a request, maybe by clicking a button, or typing something in, maybe something else entirely—and then the program goes off, does it, and returns some kind of result. Maybe at that point, the program just exits (“The restaurant is closed! Come back tomorrow!”), or maybe you can ask for something else. It’s really the design of the restaurant / program that dictates this.

Suppose you’re very, very hungry, and you’ve just ordered a complex five-course meal for yourself at this restaurant. Blanching, your server / cook goes off to the kitchen. While they’re gone, nobody is refilling your water glass or giving you breadsticks. You’re pretty sure there’s activity going on in the kitchen and that the server / cook hasn’t had a heart attack back there, but you’re going to be waiting a looooong time since there’s only one person working in this place.

Maybe in some restaurants, the server / cook will dash out periodically to refill your water glass, give you some breadsticks, and update you on how things are going, but it sure would be nice if we gave this person some help back there, wouldn’t it?
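In code, the one-cook restaurant looks something like this minimal, hypothetical sketch (the names are made up for illustration; this is not Firefox code). Everything, including the slow “cooking”, happens on the single thread, so nothing else can be handled until the current order is done:

```python
import time

def cook(order):
    """Pretend to prepare an order; stands in for any slow operation."""
    time.sleep(5)              # the customer waits, unattended, the whole time
    return f"{order} is ready"

def run_restaurant(orders):
    for order in orders:       # the single thread takes an order...
        result = cook(order)   # ...then blocks here until it's cooked
        print(result)          # only now can the next request be handled

run_restaurant(["five-course meal", "glass of water"])
```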

Multi-threaded programs

Let’s imagine a slightly different restaurant. There are more cooks in the kitchen. The server is available to take your order (but is also able to cook in the kitchen if need be), and you make your request from the menu.

Now suppose again that you order a five-course meal. The server goes to the kitchen and tells the cooks what you just ordered. In this restaurant, suppose the kitchen staff are a really great team and don’t get in each other’s way3, so they divide up the order in a way that makes sense and get to work.

The server can come back and refill your water glass, feed you breadsticks, perhaps they can tell you an entertaining joke, perhaps they can take additional orders that won’t take as long. At any rate, in this restaurant, the interaction between the user and the server is frequent and rarely interrupted.

The waiter / waitress / server is the main thread

In these two examples, the waiter / waitress / server is what is usually called the main thread of execution, which is the part of the program that the user interacts with most directly. By moving expensive operations off of the main thread, the responsiveness of the program increases.

Have you ever seen the mouse pointer turn into an hourglass, or seen the "This program is not responding" message on Windows? Or the spinny colourful pinwheel on macOS? In those cases, the main thread is off doing something and never came back to give you your order or refill your water or breadsticks — that’s how it generally manifests in common operating systems. The program seems "unresponsive", "sluggish", "frozen". It’s "hanging", or "stuck". When I hear those words, my immediate assumption is that the main thread is busy doing something — either it’s taking a long time (it’s making you your massive five course meal, maybe not as efficiently as it could), or it’s stuck (maybe they fell down a well!).

In either case, the general rule of thumb to improving program responsiveness is to keep the server filling the user’s water and breadsticks by offloading complex things on the menu to other cooks in the kitchen.
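Here’s a minimal sketch of that offloading pattern, using Python’s standard threading and queue modules as a stand-in for whatever threading primitives your program actually uses (again, names are illustrative, not Firefox code):

```python
import threading, queue, time

orders = queue.Queue()

def cook_worker():
    """A 'cook' thread: takes orders off the queue and prepares them."""
    while True:
        order = orders.get()
        if order is None:          # sentinel: the kitchen is closing
            break
        time.sleep(5)              # slow work happens off the main thread
        print(f"{order} is ready")
        orders.task_done()

kitchen = threading.Thread(target=cook_worker, daemon=True)
kitchen.start()

orders.put("five-course meal")     # main thread hands off the slow work...
print("Refilling your water while the kitchen cooks")  # ...and stays responsive

orders.join()                      # at shutdown, wait for outstanding orders
orders.put(None)
```

The main thread gets back to the customer immediately; the slow work finishes in the background and reports in when it’s done.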

Accessing the disk on the main thread

Recall that in the common case, for a personal computer, it’s best to assume that reading and writing to the disk is the slowest operation you can perform. In our restaurant example, reading or writing to the disk on the main thread is a bit like having your server hop onto their bike and ride out to the next town over to grab some groceries to help make what you ordered.

And sometimes, because of data fragmentation (not everything is all in one place), the server has to search amongst many many shelves all widely spaced apart to get everything.

And sometimes the grocery store is very busy because there are other restaurants out there that are grabbing supplies.

And sometimes there are police checks (anti-virus / anti-malware software) occurring for passengers along the road, where they all have to show their IDs before being allowed through.

It’s an incredibly slow operation. Hopefully by the time the server comes back, they have everything they need, but they might have to head back out if they realize they’re still missing some ingredients.4

Slow slow slow. And unresponsive. And a great way to lose a hungry customer.

For super small programs, where the kitchen is well stocked, or the ride to the grocery store doesn’t need to happen often, having a single-thread and having it read or write is usually okay. I’ve certainly written my fair share of utility programs or scripts that do main thread disk access.

Firefox, the program I spend most of my time working on as my job, is not a small program. It’s a very, very, very large program. Using our restaurant model, it’s many large restaurants with many many cooks on staff. The restaurants communicate with each other and ship food and supplies back and forth using messenger bikes, to provide to you, the customer, the best meals possible.

But even with this large set of restaurants, there’s still only a single waiter / waitress / server / main thread of execution as the point of contact with the user.

Part of my job is to help organize the workflows of this restaurant so that they provide those meals as quickly as possible. Sending the server to the grocery store (main thread disk access) is part of the workflow that we absolutely need to strike from the list.

Start-up main-thread disk access

Going back to our analogy, imagine starting the program like opening the restaurant. The lights go on, the chairs come off of the tables, the kitchen gets warmed up, and prep begins.

While this is occurring, it’s all hands on deck — the server might be off in the kitchen helping to do prep, off getting cutlery organized, whatever it takes to get the restaurant open and ready to serve. Before the restaurant is open, there’s no point in having the server be idle, because the customer hasn’t been able to come in yet.

So if critical groceries and supplies have to be fetched before the restaurant can open, it’s fine to send the server to the store. Somebody has to do it.

For Firefox, there are various things that need to take place before we can display any UI. At that point, it’s usually fine to do main-thread disk access, so long as all of the things being read or written are kept to an absolute minimum. Find how much you need to do, and reduce it as much as possible.

But as soon as UI is presented to the user, the restaurant is open. At that point, the server should stay off their bike and keep chatting with the customer, even if the kitchen hasn’t finished setting up and getting all of their supplies. So to stay responsive, don’t do disk access on the main thread of execution after you’ve started to show the user some kind of UI.

Disk contention

There’s one last complication I want to capture here with our restaurant example before I wrap up. I’ve been saying that it’s important to send anyone except the server to the grocery store for supplies. That’s true — but be careful of sending too many other people at the same time.

Moving disk access off of the main thread is good for responsiveness, full stop. However, it might do nothing to actually improve the overall time that it takes to complete some amount of work. To put it another way: just because the server is refilling your glass and giving you breadsticks doesn’t mean that your five-course meal is going to show up any faster.

Also, disk operations on magnetic drives do not have a constant speed. Having the disk do many things at once within a single program or across multiple programs can slow the whole set of operations down due to the overhead of seeking and context switching, since the operating system will try to serve all disk requests at once, more or less.5
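One common way to tame that contention is to funnel background disk work through a single dedicated I/O thread, so requests are served one at a time instead of all fighting over the disk head at once. Here’s a rough sketch of the idea, with a made-up settings.json file so it runs standalone; it’s an illustration of the pattern, not how Firefox itself schedules its I/O:

```python
import threading, queue

# Hypothetical file so the sketch runs standalone.
with open("settings.json", "w") as f:
    f.write("{}")

io_requests = queue.Queue()

def io_worker():
    """Serialize disk access: one request at a time, in the order received."""
    while True:
        path, callback = io_requests.get()
        if path is None:               # sentinel: no more work
            break
        with open(path, "rb") as f:
            data = f.read()
        callback(data)                 # hand the result back, e.g. to the main thread
        io_requests.task_done()

threading.Thread(target=io_worker, daemon=True).start()

# Any part of the program can queue reads without all of them hitting the disk at once.
io_requests.put(("settings.json", lambda data: print(len(data), "bytes read")))
io_requests.join()
io_requests.put((None, None))
```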

Disk contention and main thread disk access is something I think a lot about these days while my team and I work on improving Firefox start-up performance.

Some questions to ask yourself when touching disk

So it’s important to be thoughtful about disk access. Are you working on code that touches disk? Here are some things to think about:

Is UI visible, and responsiveness a goal?

If so, best to move the disk access off of the main-thread. That was the main thing I wanted to capture, and I hope I’ve convinced you of that point by now.

Does the access need to occur?

As programs age and grow and contributors come and go, sometimes it’s important to take a step back and ask, “Are the assumptions of this disk access still valid? Does this access need to happen at all?” The fastest code is the code that doesn’t run at all.

What else is happening during this disk access? Can disk access be prioritized more efficiently?

This is often trickier to answer as a program continues to run. Thankfully, tools like profilers can capture recordings of disk activity, giving you evidence of simultaneous disk access.

Start-up is a special case though, since there’s usually a somewhat deterministic / reliably stable set of operations that occur in the same way in roughly the same order during start-up. For start-up, using a tool like a profiler, you can gain a picture of the sorts of things that tend to happen during that special window of time. If you notice a lot of disk activity occurring simultaneously across multiple threads, perhaps ponder if there’s a better way of ordering those operations so that the most important ones complete first.

Can we reduce how much we need to read or write?

There are lots of wonderful compression algorithms out there with a variety of performance characteristics that might be worth pondering. It might be worth considering compressing the data that you’re storing before writing it so that the disk has to write less and read less.

Of course, there’s compression and decompression overhead to consider here. Is it worth the CPU time to save the disk time? Is there some other, more critical CPU-intensive task occurring at the same time?
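As a rough illustration of the trade-off, here’s a sketch that gzip-compresses a hypothetical payload before writing it, and reports how many bytes the disk was spared and how much CPU time that cost. Whether it’s worth it depends entirely on your data and your workload:

```python
import gzip, time

# Hypothetical payload: repetitive data compresses well; random data may not.
payload = ("some log line that repeats a lot\n" * 10_000).encode("utf-8")

start = time.perf_counter()
compressed = gzip.compress(payload)
cpu_cost = time.perf_counter() - start

with open("payload.gz", "wb") as f:
    f.write(compressed)

print(f"raw: {len(payload)} bytes, compressed: {len(compressed)} bytes, "
      f"compression took {cpu_cost * 1000:.1f} ms")
```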

Can we organize the things that we want to read ahead of time so that they’re more likely to be read contiguously (without seeking the disk)?

If you know ahead of time the sorts of things that you’re going to be reading off of the disk, it’s generally a good strategy to store them in that read order. That way, in the best case scenario (the disk is defragmented), the read head can fly along the sectors and read everything in, in exactly the right order you want them. If the user has defragmented their disk, but the things you’re asking for are all out of order on the disk, you’re adding overhead to seek around to get what you want.

Supposing that the data on the disk is fragmented, I suspect having the files in order anyways is probably better than not, but I don’t think I know enough to prove it.
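As a toy example of reading in storage order, suppose (purely hypothetically) you need several records out of one large file. Issuing the reads sorted by offset at least avoids bouncing the read head back and forth within that file, even though you can’t control where the file’s fragments physically sit:

```python
# Create a hypothetical 1 MB scratch file so the sketch runs standalone.
with open("big.dat", "wb") as f:
    f.write(b"\0" * 1_000_000)

# (offset, length) pairs we need -- made-up values for illustration.
requests = [(900_000, 100), (10_000, 50), (500_000, 200)]

with open("big.dat", "rb") as f:
    for offset, length in sorted(requests):   # issue reads in ascending file order
        f.seek(offset)
        chunk = f.read(length)
        print(f"read {len(chunk)} bytes at offset {offset}")
```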

Flawed but useful

One of my mentors, Greg Wilson, likes to say that “all models are flawed, but some are useful”. I don’t think he coined it, but he uses it in the right places at the right times, and to me, that’s what counts.

The information in this post is not exhaustive — I glossed over and left out a lot. It’s flawed. Still, I hope it can be useful to you.

Thanks

Thanks to the following folks who read drafts of this and gave feedback:

  • Mandy Cheang
  • Emily Derr
  • Gijs Kruitbosch
  • Doug Thayer
  • Florian Quèze
  1. There are also newer forms of disks called Flash disks and SSDs. I’m not really going to cover those in this post. 

  2. The other thing to keep in mind is that the disk cache can have its contents evicted at any time for reasons that are out of your control. If you time it right, you can maybe increase the probability of a file you want to read being in the cache, but don’t bet the farm on it. 

  3. When writing multi-threaded programs, this is much harder than it sounds! Mozilla actually developed a whole new programming language to make that easier to do correctly. 

  4. Keen readers might notice I’m leaving out a discussion on Paging. That’s because this blog post is getting quite long, and because it kinda breaks the analogy a bit — who sends groceries back to a grocery store? 

  5. I’ve never worked on an operating system, but I believe most modern operating systems try to do a bunch of smart things here to schedule disk requests in efficient ways. 

Categorieën: Mozilla-nl planet

Mozilla VR Blog: Making ethical decisions for the immersive web

wo, 15/05/2019 - 20:01

One of the promises of immersive technologies is real time communication unrestrained by geography. This is as transformative as the internet, radio, television, and telephones—each represents a pivot in mass communications that provides new opportunities for information dissemination and creating connections between people. This raises the question, “what’s the immersive future we want?”

We want to be able to connect without traveling. Indulge our curiosity and creativity beyond our physical limitations. Revolutionize the way we visualize and share our ideas and dreams. Enrich everyday situations. Improve access to limited resources like healthcare and education.

The internet is an integral part of modern life—a key component in education, communication, collaboration, business, entertainment and society as a whole.
— Mozilla Manifesto, Principle 1

My first instinct is to say that I want an immersive future that brings joy. Do AR apps that help me maintain my car bring me joy? Not really.

What I really want is an immersive future that respects individual creators and users. Platforms and applications that thoughtfully approach issues of autonomy, privacy, bias, and accessibility in a complex environment. How do we get there? First, we need to understand the broader context of augmented and virtual reality in ethics, identifying overlap with both other technologies (e.g. artificial intelligence) and other fields (e.g. medicine and education). Then, we can identify the unique challenges presented by spatial and immersive technologies. Given the current climate of ethics and privacy, we can anticipate potential problems, identify the core issues, and evaluate different approaches.

From there, we have an origin for discussion and a path for taking practical steps that enable legitimate uses of MR while discouraging abuse and empowering individuals to make choices that are right for them.

For details and an extended discussion on these topics, see this paper.

The immersive web

Whether you have a $30 or $3000 headset, you should be able to participate in the same immersive universe. No person should be excluded due to their skin color, hairstyle, disability, class, location, or any other reason.

The internet is a global public resource that must remain open and accessible.
— Mozilla Manifesto, Principle 2

The immersive web represents an evolution of the internet. Immersive technologies are already deployed in education and healthcare. It's unethical to limit their benefits to a privileged few, particularly when MR devices can improve access to limited resources. For example, Americans living in rural areas are underserved by healthcare, particularly specialist care. In an immersive world, location is no longer an obstacle. Specialists can be virtually present anywhere, just as if they were in the room with the patient. Trained nurses and assistants would be required for physical manipulations and interactions, but this could dramatically improve health coverage and reduce burdens on both patients and providers.

While we can build accessibility into browsers and websites, the devices themselves need to be created with appropriate accommodations, like settings that indicate a user is in a wheelchair. When we design devices and experiences, we need to consider how they'll work for people with disabilities. It's imperative to build inclusive MR devices and experiences, both because it's unethical to exclude users due to disability, and because there are so many opportunities to use MR as an assistive technology, including:

  • Real time subtitles
  • Gaze-based navigation
  • Navigation with vehicle and obstacle detection and warning

The immersive web is for everyone.

Representation and safety

Mixed reality offers new ways to connect with each other, enabling us to be virtually present anywhere in the world instantaneously. Like most technologies, this is both a good and a bad thing. While it transforms how we can communicate, it also offers new vectors for abuse and harassment.

All social VR platforms need to have simple and obvious ways to report abusive behavior and block the perpetrators. All social platforms, whether 2D or 3D, should have this, but VR-enabled embodiment intensifies the impact of harassment. Behavior that once required physical presence is no longer geographically limited, and identities can be more easily obfuscated. Safety is not a 'nice to have' feature — it's a requirement. Safety is a key component of inclusion and freedom of expression, as well as being a human right.

Freedom of expression in this paradigm includes both choosing how to present yourself and having the ability to maintain multiple virtual identities. Immersive social experiences allow participants to literally build their identities via avatars. Human identity is infinitely complex (and not always very human — personally, I would choose a cat avatar). Thoughtfully approaching diversity and representation in avatars isn't easy, but it is worthwhile.

Individuals must have the ability to shape the internet and their own experiences on it.
— Mozilla Manifesto, Principle 5

Suppose Banksy, a graffiti artist known both for their art and their anonymity, is an accountant by day who uses an HMD to conduct virtual meetings. Outside of work, Banksy is a virtual graffiti artist. However, biometric data could tie the two identities together, stripping Banksy of their anonymity. Anonymity enables free speech; it removes the threats of economic retaliation and social ostracism and allows consumers to process ideas free of prejudices about the creators. There's a long history of women who wrote under assumed names to avoid being dismissed for their gender, including JK Rowling and George Sand.

Unique considerations in mixed reality

Immersive technologies differ from others in their ability to affect our physical bodies. To achieve embodiment and properly interact with virtual elements, devices use a wide range of data derived from user biometrics, the surrounding physical world, and device orientation. As the technology advances, the data sources will expand.

The sheer amount of data required for MR experiences to function requires that we rethink privacy. Earlier, I mentioned that gaze-based navigation can be used to allow mobility impaired users to participate more fully on the immersive web. Unfortunately, gaze tracking data also exposes large amounts of nonverbal data that can be used to infer characteristics and mental states, including ADHD and sexual arousal.

Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.
— Mozilla Manifesto, Principle 4

While there may be a technological solution to this problem, it highlights a wider social and legal issue: we've become too used to companies monetizing our personal data. It would be easy to conclude that, because users 'consent' to disclosing personal information for small rewards, they don't really care about the privacy concerns they report. The reality is that privacy is hard. It's hard to define and it's harder to defend. Processing privacy policies feels like it requires a law degree, and quantifying risks and tradeoffs is nondeterministic. In the US, we've focused on privacy as an individual's responsibility, when Europe (with the General Data Protection Regulation) shows that it's society's problem and should be tackled comprehensively.

Concrete steps for ethical decision making

Ethical principles aren't enough. We also need to take action — while some solutions will be technical, there are also legal, regulatory, and societal challenges that need to be addressed.

  1. Educate and assist lawmakers
  2. Establish a regulatory authority for flexible and responsive oversight
  3. Engage engineers and designers to incorporate privacy by design
  4. Empower users to understand the risks and benefits of immersive technology
  5. Incorporate experts from other fields who have addressed similar problems

Tech needs to take responsibility. We've built technology that has incredible positive potential, but also serious risks for abuse and unethical behavior. Mixed reality technologies are still emerging, so there's time to shape a more respectful and empowering immersive world for everyone.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: Empowering User Privacy and Decentralizing IoT with Mozilla WebThings

wo, 15/05/2019 - 19:47

Smart home devices can make busy lives a little easier, but they also require you to give up control of your usage data to companies for the devices to function. In a recent article from the New York Times’ Privacy Project about protecting privacy online, the author recommended that people not buy Internet of Things (IoT) devices unless they’re “willing to give up a little privacy for whatever convenience they provide.”

This is sound advice since smart home companies can not only know if you’re at home when you say you are, they’ll soon be able to listen for your sniffles through their always-listening microphones and recommend sponsored cold medicine from affiliated vendors. Moreover, by both requiring that users’ data go through their servers and by limiting interoperability between platforms, leading smart home companies are chipping away at people’s ability to make real, nuanced technology choices as consumers.

At Mozilla, we believe that you should have control over your devices and the data that smart home devices create about you. You should own your data, you should have control over how it’s shared with others, and you should be able to contest when a data profile about you is inaccurate.

Mozilla WebThings follows the privacy by design framework, a set of principles developed by Dr. Ann Cavoukian, that takes users’ data privacy into account throughout the whole design and engineering lifecycle of a product’s data processing. Prioritizing people over profits, we offer an alternative approach to the Internet of Things, one that’s private by design and gives control back to you, the user.

User Research Findings on Privacy and IoT Devices

Before we look at the design of Mozilla WebThings, let’s talk briefly about how people think about their privacy when they use smart home devices and why we think it’s essential that we empower people to take charge.

Today, when you buy a smart home device, you are buying the convenience of being able to control and monitor your home via the Internet. You can turn a light off from the office. You can see if you’ve left your garage door open. Prior research has shown that users are passively, and sometimes actively, willing to trade their privacy for the convenience of a smart home device. When it seems like there’s no alternative between having a potentially useful device or losing their privacy, people often uncomfortably choose the former.

Still, although people are buying and using smart home devices, it does not mean they’re comfortable with this status quo. In one of our recent user research surveys, we found that almost half (45%) of the 188 smart home owners we surveyed were concerned about the privacy or security of their smart home devices.

Bar graph showing about 45% of the 188 current smart home owners we surveyed were concerned about their privacy or security at least a few times per month.

User Research Survey Results

Last Fall 2018, our user research team conducted a diary study with eleven participants across the United States and the United Kingdom. We wanted to know how usable and useful people found our WebThings software. So we gave each of our research participants some Raspberry Pis (loaded with the Things 0.5 image) and a few smart home devices.

User research participants were given a Raspberry Pi, a smart light, a motion sensor, a smart plug, and a door sensor.

Smart Home Devices Given to Participants for User Research Study

We watched, either in-person or through video chat, as each individual walked through the set up of their new smart home system. We then asked participants to write a ‘diary entry’ every day to document how they were using the devices and what issues they came across. After two weeks, we sat down with them to ask about their experience. While a couple of participants who were new to smart home technology were ecstatic about how IoT could help them in their lives, a few others were disappointed with the lack of reliability of some of the devices. The rest fell somewhere in between, wanting improvements such as more sophisticated rules functionality or a phone app to receive notifications on their iPhones.

We also learned more about people’s attitudes and perceptions around the data they thought we were collecting about them. Surprisingly, all eleven of our participants expected data to be collected about them. They had learned to expect data collection, as this has become the prevailing model for other platforms and online products. A few thought we would be collecting data to help improve the product or for research purposes. However, upon learning that no data had been collected about their use, a couple of participants were relieved that they would have one less thing, data, to worry about being misused or abused in the future.

By contrast, others said they weren’t concerned about data collection; they did not think companies could make use of what they believed was menial data, such as when they were turning a light on or off. They did not see the implications of how collected data could be used against them. This showed us that we can improve on how we demonstrate to users what others can learn from their smart home data. For example, one can find out when you’re not home based on when your door has opened and closed.

graph of door sensor logs over a week reveals when someone is not home

Door Sensor Logs can Reveal When Someone is Not Home
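As a toy illustration of the kind of inference the chart above depicts, here’s a sketch using made-up timestamps (not real participant data): long stretches with no door events are a strong hint that nobody was home.

```python
from datetime import datetime, timedelta

# Hypothetical door open/close events logged by a smart home gateway.
events = [datetime(2019, 5, 15, 8, 5), datetime(2019, 5, 15, 8, 7),
          datetime(2019, 5, 15, 17, 42), datetime(2019, 5, 15, 17, 44)]

# Any gap longer than an hour between events suggests the house was empty.
for earlier, later in zip(events, events[1:]):
    gap = later - earlier
    if gap > timedelta(hours=1):
        print(f"Probably nobody home between {earlier:%H:%M} and {later:%H:%M} ({gap})")
```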

From our user research, we’ve learned that people are concerned about the privacy of their smart home data. And yet, when there’s no alternative, they feel the need to trade away their privacy for convenience. Others aren’t as concerned because they don’t see the long-term implications of collected smart home data. We believe privacy should be a right for everyone regardless of their socioeconomic or technical background. Let’s talk about how we’re doing that.

Decentralizing Data Management Gives Users Privacy

Vendors of smart home devices have architected their products to be more of service to them than to their customers. Using the typical IoT stack, in which devices don’t easily interoperate, they can build a robust picture of user behavior, preferences, and activities from data they have collected on their servers.

Take the simple example of a smart light bulb. You buy the bulb, and you download a smartphone app. You might have to set up a second box to bridge data from the bulb to the Internet and perhaps a “user cloud subscription account” with the vendor so that you can control the bulb whether you’re home or away. Now imagine five years into the future when you have installed tens to hundreds of smart devices including appliances, energy/resource management devices, and security monitoring devices. How many apps and how many user accounts will you have by then?

The current operating model requires you to give your data to vendor companies for your devices to work properly. This, in turn, requires you to work with or around companies and their walled gardens.

Mozilla’s solution puts the data back in the hands of users. In Mozilla WebThings, there are no company cloud servers storing data from millions of users. User data is stored in the user’s home. Backups can be stored anywhere. Remote access to devices occurs from within one user interface. Users don’t need to download and manage multiple apps on their phones, and data is tunneled through a private, HTTPS-encrypted subdomain that the user creates.

The only data Mozilla receives are the instances when a subdomain pings our server for updates to the WebThings software. And if a user only wants to control their devices locally and not have anything go through the Internet, they can choose that option too.

Decentralized distribution of WebThings Gateways in each home means that each user has their own private “data center”. The gateway acts as the central nervous system of their smart home. By having smart home data distributed in individual homes, it becomes more of a challenge for unauthorized hackers to attack millions of users. This decentralized data storage and management approach offers a double advantage: it provides complete privacy for user data, and it securely stores that data behind a firewall that uses best-of-breed https encryption.

The figure below compares Mozilla’s approach to that of today’s typical smart home vendor.

Comparison of Mozilla’s Approach to Typical Smart Home Vendor

Mozilla’s approach gives users an alternative to current offerings, providing them with data privacy and the convenience that IoT devices can provide.

Ongoing Efforts to Decentralize

In designing Mozilla WebThings, we have consciously insulated users from servers that could harvest their data, including our own Mozilla servers, by offering an interoperable, decentralized IoT solution. Our decision to not collect data is integral to our mission and additionally feeds into our Emerging Technology organization’s long-term interest in decentralization as a means of increasing user agency.

WebThings embodies our mission to treat personal security and privacy on the Internet as a fundamental right, giving power back to users. From Mozilla’s perspective, decentralized technology has the ability to disrupt centralized authorities and provide more user agency at the edges, to the people.

Decentralization can be an outcome of social, political, and technological efforts to redistribute the power of the few and hand it back to the many. We can achieve this by rethinking and redesigning network architecture. By enabling IoT devices to work on a local network without the need to hand data to connecting servers, we decentralize the current IoT power structure.

With Mozilla WebThings, we offer one example of how a decentralized, distributed system over web protocols can impact the IoT ecosystem. Concurrently, our team has an unofficial draft Web Thing API specification to support standardized use of the web for other IoT device and gateway creators.

While this is one way we are making strides to decentralize, there are complementary projects, ranging from conceptual to developmental stages, with similar aims to put power back into the hands of users. Signals from other players, such as FreedomBox Foundation, Daplie, and Douglass, indicate that individuals, households, and communities are seeking the means to govern their own data.

By focusing on people first, Mozilla WebThings gives people back their choice: whether it’s about how private they want their data to be or which devices they want to use with their system.

This project is an ongoing effort. If you want to learn more or get involved, check out the Mozilla WebThings Documentation. You can contribute to our documentation or get started on your own web things or Gateway.

If you live in the Bay Area, you can find us this weekend at Maker Faire Bay Area (May 17-19). Stop by our table. Or follow @mozillaiot to learn about upcoming workshops and demos.

The post Empowering User Privacy and Decentralizing IoT with Mozilla WebThings appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: TLS 1.0 and 1.1 Removal Update

wo, 15/05/2019 - 16:01
tl;dr Enable support for Transport Layer Security (TLS) 1.2 today!

 

As you may have read last year in the original announcement posts, Safari, Firefox, Edge and Chrome are removing support for TLS 1.0 and 1.1 in March of 2020. If you manage websites, this means there’s less than a year to enable TLS 1.2 (and, ideally, 1.3) on your servers, otherwise all major browsers will display error pages, rather than the content your users were expecting to find.

Screenshot of a Secure Connection Failed error page

In this article we provide some resources to check your sites’ readiness, and start planning for a TLS 1.2+ world in 2020.

Check the TLS “Carnage” list

Once a week, the Mozilla Security team runs a scan on the Tranco list (a research-focused top sites list) and generates a list of sites still speaking TLS 1.0 or 1.1, without supporting TLS ≥ 1.2.

Tranco list top sites with TLS <= 1.1

As of this week, there are just over 8,000 affected sites from the one million listed by Tranco.

There are a few potential gotchas to be aware of, if you do find your site on this list:

  • 4% of the sites are using TLS ≤ 1.1 to redirect from a bare domain (https://example.com) to www (https://www.example.com) on TLS ≥ 1.2 (or vice versa). If you were to only check your site post-redirect, you might miss a potential footgun.
  • 2% of the sites don’t redirect from bare to www (or vice versa), but do support TLS ≥ 1.2 on one of them.

The vast majority (94%), however, are just bad—it’s TLS ≤ 1.1 everywhere.

If you find that a site you work on is in the TLS “Carnage” list, you need to come up with a plan for enabling TLS 1.2 (and 1.3, if possible). However, this list only covers 1 million sites. Depending on how popular your site is, you might have some work to do even if you’re not listed by Tranco.

Run an online test

Even if you’re not on the “Carnage” list, it’s a good idea to test your servers all the same. There are a number of online services that will do some form of TLS version testing for you, but only a few will flag not supporting modern TLS versions in an obvious way. We recommend using one or more of the following:

Check developer tools

Another way to do this is to open up Firefox (versions 68+) or Chrome (versions 72+) DevTools, and look for the following warnings in the console as you navigate around your site.

Firefox DevTools console warning

Chrome DevTools console warning
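If you’d rather check from a script than a browser, here’s a small sketch using Python’s standard ssl module (3.7+) that reports whether a server will complete a handshake with TLS 1.2 or newer. The host name is just a placeholder; treat this as a quick probe, not a full audit:

```python
import socket, ssl

def supports_modern_tls(host, port=443):
    """Return the negotiated protocol if the server speaks TLS 1.2+, else None."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse TLS 1.0 and 1.1
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()               # e.g. "TLSv1.3"
    except (ssl.SSLError, OSError):
        return None

print(supports_modern_tls("example.com") or "TLS 1.2+ handshake failed")
```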

What’s Next?

This October, we plan on disabling old TLS in Firefox Nightly, and you can expect the same for Chrome and Edge Canaries. We hope this will give sites enough time to upgrade before the change reaches their release populations.

The post TLS 1.0 and 1.1 Removal Update appeared first on Mozilla Hacks - the Web developer blog.

Categorieën: Mozilla-nl planet

Cameron Kaiser: ZombieLoad doesn't affect Power Macs

wo, 15/05/2019 - 03:18
The latest in the continued death march of speculative execution attacks is ZombieLoad (see our previous analysis of Spectre and Meltdown on Power Macs). ZombieLoad uses the same types of observable speculation flaws to exfiltrate data but bases it on a new class of Intel-specific side-channel attacks utilizing a technique the investigators termed MDS, or microarchitectural data sampling. While Spectre and Meltdown attack at the cache level, ZombieLoad targets Intel HyperThreading (HT), the company's implementation of symmetric multithreading, by trying to snoop on the processor's line fill buffers (LFBs) used to load the L1 cache itself. In this case, side-channel leakages of data are possible if the malicious process triggers certain specific and ultimately invalid loads from memory -- hence the nickname -- that require microcode assistance from the CPU; these have side-effects on the LFBs which can be observed by methods similar to Spectre by other processes sharing the same CPU core. (Related attacks against other microarchitectural structures are analogously implemented.)

The attackers don't have control over the observed address, so they can't easily read arbitrary memory, but careful scanning for the type of data you're targeting can still make the attack effective even against the OS kernel. For example, since URLs can be picked out of memory, this apparent proof of concept shows a separate process running on the same CPU victimizing Firefox to extract the URL as the user types it in. This works because as the user types, the values of the individual keystrokes go through the LFB to the L1 cache, allowing the malicious process to observe the changes and extract characters. There is much less data available to the attacking process but that also means there is less to scan, making real-time attacks like this more feasible.

That said, because the attack is specific to architectural details of HT (and the authors of the attack say they even tried on other SMT CPUs without success), this particular exploit wouldn't work even against modern high-SMT count Power CPUs like POWER9. It certainly won't work against a Power Mac because no Power Mac CPU ever implemented SMT, not even the G5. While Mozilla is deploying a macOS-specific fix, we don't need it in TenFourFox, nor do we need other mitigations. It's especially bad news for Intel because nearly every Intel chip since 2011 is apparently vulnerable and the performance impact of fixing ZombieLoad varies anywhere from Intel's Pollyanna estimate of 3-9% to up to 40% if HT must be disabled completely.

Is this a major concern for users? Not as such: although the attacks appear to be practical and feasible, they require you to run dodgy software and that's a bad idea on any platform because dodgy software has any number of better ways of pwning your computer. So don't run dodgy programs!

Meanwhile, TenFourFox FPR14 final should be available for testing this weekend.

Categorieën: Mozilla-nl planet

The Rust Programming Language Blog: 4 years of Rust

wo, 15/05/2019 - 02:00

On May 15th, 2015, Rust was released to the world! After 5 years of open development (and a couple of years of sketching before that), we finally hit the button on making the attempt to create a new systems programming language a serious effort!

It’s easy to look back on the pre-1.0 times and cherish them for being the wild times of language development and fun research. Features were added and cut, syntax and keywords were tried, and before 1.0, there was a big clean-up that removed a lot of the standard library. For fun, you can check Niko’s blog post on how Rust's object system works, Marijn Haverbeke’s talk on features that never made it close to 1.0 or even the introductory slides about Servo, which present a language looking very different from today.

Releasing Rust with stability guarantees also meant putting a stop to large visible changes. The face of Rust is still very similar to Rust 1.0. Even with the changes from last year’s 2018 Edition, Rust is still very recognizable as what it was in 2015. That steadiness hides that the time of Rust’s fastest development and growth is now. With the stability of the language and easy upgrades as a base, a ton of new features have been built. We’ve seen a bunch of achievements in the last year:

This list could go on and on. While the time before and after release was a time when language changes had a huge impact on how Rust is perceived, it's becoming more and more important what people start building in and around it. This includes projects like whole game engines, but also many small, helpful libraries, meetup formats, tutorials, and other educational material. Birthdays are a great time to take a look back over the last year and see the happy parts!

Rust would be nothing, and especially not winning prizes, without its community. Community happens everywhere! We would like to thank everyone for being along on this ride, from team members to small scale contributors to people just checking the language out and finding interest in it. Your interest and curiosity is what makes the Rust community an enjoyable place to be. Some meetups are running birthday parties today to which everyone is invited. If you are not attending one, you can take the chance to celebrate in any other fashion: maybe show us a picture of what you are currently working on or talk about what excites you. If you want to take it to social media, consider tagging our Twitter account or using the hashtag #rustbirthday.

Categorieën: Mozilla-nl planet
