Planet Mozilla
Updated: 1 month, 3 weeks ago

Air Mozilla: Rust Libs Meeting 2017-06-13

Tue, 13/06/2017 - 22:00

Rust Libs Meeting 2017-06-13: walkdir crate evaluation

Categories: Mozilla-nl planet

The Mozilla Blog: The Best Firefox Ever

Tue, 13/06/2017 - 21:00

With E10s, our new version of Firefox nails the “just right” balance between memory and speed

On the Firefox team, one thing we always hear from our users is that they rely on the web for complex tasks like trip planning and shopping comparisons. That often means having many tabs open. And the sites and web apps running in those tabs often have lots of things going on: animations, videos, big pictures and more. Complex sites are more and more common. The average website today is nearly 2.5 megabytes – the same size as the original version of the game Doom, according to Wired. Up until now, a complex site in one Firefox tab could slow down all the others. That often meant a less-than-perfect browsing experience.

To make Firefox run even complex sites faster, we’ve been changing it to run using multiple operating system processes. Translation? The old Firefox used a single process to run all the tabs in a browser. Modern browsers split the load into several independent processes. We named our project to split Firefox into multiple processes ‘Electrolysis’ (or E10s) after the chemical process that divides water into its core elements. E10s is the largest change to Firefox code in our history. And today we’re launching our next big phase of the E10s initiative.

A Faster Firefox With Four Content Processes

With today’s release, Firefox uses up to four processes to run web page content across all open tabs. This means that a heavy, complex web page in one tab has a much lower impact on the responsiveness and speed in other tabs. By separating the tabs into separate processes, we make better use of the hardware on your computer, so Firefox can deliver you more of the web you love, with less waiting.

I’ve been living with this turned on by default in the pre-release version of Firefox (Nightly). The performance improvements are remarkable. Besides running faster and crashing less, E10s makes websites feel smoother. Even busy pages, like Facebook newsfeeds, spool out smoothly and cleanly. After making the switch to Firefox with E10s, now I can’t live without it.

Firefox 54 with E10s makes sites run much better on all computers, especially on computers with less memory. Firefox aims to strike the “just right” balance between speed and memory usage. To learn more about Firefox’s multi-process architecture, and how it’s different from Chrome’s, check out Ryan Pollock’s post about the search for the Goldilocks browser.

Multi-Process Without Memory Bloat

Firefox Wins Memory Usage Comparison

In our tests comparing memory usage for various browsers, we found that Firefox used significantly less RAM than other browsers on Windows 10, macOS, and Linux. (RAM stands for Random Access Memory, the type of memory that stores the apps you’re actively running.) This means that with Firefox you can browse freely, but still have enough memory left to run the other apps you want to use on your computer.

The Best Firefox Ever

This is the best release of Firefox ever, with improvements that will be very noticeable to even casual users of our beloved browser. Several other enhancements are shipping in Firefox today, and you can visit our release notes to see the full list. If you’re a web developer, or if you’ve built a browser extension, check out the Hacks Blog to read about all the new Web Platform and WebExtension APIs shipping today.

As we continue to make progress on Project Quantum, we are pushing forward in building a completely revamped browser made for modern computing. It’s our goal to make Firefox the fastest and smoothest browser for PCs and mobile devices. Through the end of 2017, you’ll see some big jumps in capability and performance from Team Firefox. If you stopped using Firefox, try it again. We think you’ll be impressed. Thank you and let us know what you think.

The post The Best Firefox Ever appeared first on The Mozilla Blog.

Categories: Mozilla-nl planet

Air Mozilla: Selling Your Attention: The Web and Advertising with Tim Wu

Tue, 13/06/2017 - 21:00

Selling Your Attention: The Web and Advertising with Tim Wu. You don't need cash to search Google or to use Facebook, but they're not free. We pay for these services with our attention and with...

Categories: Mozilla-nl planet

Hacks.Mozilla.Org: Firefox 54: E10S-Multi, WebExtension APIs, CSS clip-path

Tue, 13/06/2017 - 20:57
“E10S-Multi:” A new multi-process model for Firefox

Today’s release completes Firefox’s transformation into a fully multi-process browser, running many simultaneous content processes in addition to a UI process and, on Windows, a special GPU process. This design makes it easier to utilize all of the cores available on modern processors and, in the future, to securely sandbox web content. It also improves stability, ensuring that a single content process crashing won’t take out all of your other tabs, nor the rest of the browser.

Illustration of Firefox's new multi-process architecture, showing one Firefox UI process talking to four Content Processes. Each content process has several tabs within it.

An initial version of multi-process Firefox (codenamed “Electrolysis”, or “e10s” for short) debuted with Firefox 48 last August. This first version moved Firefox’s UI into its own process so that the browser interface remains snappy even under load. Firefox 54 takes this further by running many content processes in parallel: each one with its own RAM and CPU resources managed by the host operating system.

Additional processes do come with a small degree of memory overhead, no matter how well optimized, but we’ve worked wonders to reduce this to the bare minimum. Even with those optimizations, we wanted to do more to ensure that Firefox is respectful of your RAM. That’s why, instead of spawning a new process with every tab, Firefox sets an upper limit: four by default, but configurable by users (dom.ipc.processCount in about:config). This keeps you in control, while still letting Firefox take full advantage of multi-core CPUs.

To learn more about Firefox’s multi-process architecture, check out this Medium post about the search for the “Goldilocks” browser.

New WebExtension APIs

Firefox continues its rapid implementation of new WebExtension APIs. These APIs are designed to work cross-browser, and will be the only APIs available to add-ons when Firefox 57 launches this November.

Most notably, it’s now possible to create custom DevTools panels using WebExtensions. For example, the screenshot below shows the Chrome version of the Vue.js DevTools running in Firefox without any modifications. This dramatically reduces the maintenance burden for authors of devtools add-ons, ensuring that no matter which framework you prefer, its tools will work in Firefox.

Screenshot of Firefox showing the Vue.js DevTools extension running in Firefox
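For extension authors, adding such a panel comes down to a single devtools.panels call made from a page declared under the manifest's devtools_page key. A minimal sketch (the file names and panel title here are hypothetical, not taken from the Vue.js tools):

    // devtools.js: loaded from the page referenced by the manifest's
    // "devtools_page" key (file names are placeholders).
    browser.devtools.panels.create(
      "My Panel",        // title shown in the DevTools tab strip
      "icons/panel.png", // panel icon
      "panel.html"       // document rendered inside the panel
    ).then(function (panel) {
      console.log("DevTools panel created");
    });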


Read about the full set of new and changed APIs on the Add-ons Blog, or check out the complete WebExtensions documentation on MDN.

CSS shapes in clip-path

The CSS clip-path property allows authors to define which parts of an element are visible. Previously, Firefox only supported clipping paths defined as SVG files. With Firefox 54, authors can also use CSS shape functions for circles, ellipses, rectangles or arbitrary polygons (Demo).
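For reference, this is roughly what those shape functions look like in a stylesheet (a sketch with arbitrary values, not taken from the linked demo):

    /* Clip an element to basic shapes; percentages are relative to the
       element's reference box. All values below are arbitrary examples. */
    .circle   { clip-path: circle(50% at 50% 50%); }
    .ellipse  { clip-path: ellipse(40% 30% at 50% 50%); }
    .rect     { clip-path: inset(10px 20px 30px 40px round 8px); }
    .triangle { clip-path: polygon(50% 0%, 100% 100%, 0% 100%); }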

Like many CSS values, clipping shapes can be animated. There are some rules that control how the interpolation between values is performed, but long story short: as long as you are interpolating between the same shapes, or polygons with the same number of vertices, you should be fine. Here’s how to animate a circular clipping:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.
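If the embedded Pen doesn't load for you, the idea boils down to interpolating between two circle() values in a keyframe animation; a minimal sketch (the values are made up, not the demo's):

    /* Both keyframes use the same shape function, so the values interpolate. */
    @keyframes pulse-clip {
      from { clip-path: circle(25% at 50% 50%); }
      to   { clip-path: circle(75% at 50% 50%); }
    }
    .clipped {
      clip-path: circle(25% at 50% 50%);
      animation: pulse-clip 2s ease-in-out infinite alternate;
    }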

You can also dynamically change clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

To learn more, check our article on clip-path from last week.

Project Dawn

Lastly, the release of Firefox 54 marks the completion of the Project Dawn transition, eliminating Firefox’s pre-beta release channel, codenamed “Aurora.” Firefox releases now move directly from Nightly into Beta every six weeks. Firefox Developer Edition, which was based on Aurora, is now based on Beta.

For early adopters, we’ve also made Firefox Nightly for Android available on Google Play.

Categories: Mozilla-nl planet

Daniel Stenberg: HTTP Workshop s03e02

Tue, 13/06/2017 - 17:29

(Season three, episode two)

Previously, on the HTTP Workshop. Yesterday ended with a much appreciated group dinner and now we’re back energized and eager to continue blabbing about HTTP frames, headers and similar things.

Martin from Mozilla talked on “connection management is hard”. Part of the discussion was around the HTTP/2 connection coalescing that I’ve blogged about before. The ORIGIN frame is a draft for a suggested way for servers to more clearly announce which origins they can answer for on a connection, which should reduce how often a 421 is needed. The ORIGIN frame overrides DNS and will allow coalescing even for origins that don’t otherwise resolve to the same IP addresses. Also discussed: the Alt-Svc header, a suggested CERTIFICATE frame, and how an HTTP/2 server knows which origins it can do PUSH for.

A lot of positive words were expressed about the ORIGIN frame. Wildcard support?

Willy from HA-proxy talked about his memory- and CPU-efficient HPACK decoding algorithm. Personally, I think the award for the best slides of the day goes to Willy’s hand-drawn notes.

Lucas from the BBC talked about usage data for iPlayer: how much data and how many requests they serve, and how their largest share of users are “non-browsers”. Lucas mentioned their work on writing a libcurl adaptation to make gstreamer use it instead of libsoup. Lucas’s talk triggered a lengthy discussion on what the needs are and how (if at all) you can divide clients into browsers and non-browsers.

Wenbo from Google spoke about Websockets and showed usage data from Chrome. The median websocket connection time is 20 seconds and some 10% are shorter than 0.5 seconds. At the 97th percentile they live over an hour. The connection success rates for Websockets are depressingly low when done in the clear, while the situation is better when done over HTTPS. For some reason the success rate on Mac seems to be extra low, and Firefox telemetry seems to agree. Websockets over HTTP/2 (or not) is an old hot topic that brought us back to reiterate issues we’ve debated a lot before. This time we also got a lovely and long side track into web push and how that works.

Roy talked about Waka, an HTTP replacement protocol idea and concept that Roy has been carrying around for a long time (he started this in 2001) and which he is now coming back to do actual work on. A big part of the discussion focused on the wakli compression ideas: what the idea is, how it could be done and how it could be evaluated. Also, Roy is not a fan of content negotiation and wants it done differently, so he’s addressing that in Waka.

Vlad talked about his suggestion for how to do cross-stream compression in HTTP/2 to significantly enhance the compression ratio when, for example, switching to many small resources over h2 compared to a single huge resource over h1. The security aspect of this feature is what caught most of the attention and drove the discussion that followed. How can we make sure this doesn’t leak sensitive information? What protocol mechanisms exist, or can we invent, to help make this work in a way that is safer (by default)?

Trailers. This is again a favorite topic that we’ve discussed before and that has resurfaced. There are people around the table who’d like to see support for trailers, and we discussed the same topic at the HTTP Workshop in 2016 as well. The corresponding issue on trailers filed in the fetch GitHub repo shows a lot of the concerns.

Julian brought up the subject of “7230bis” – when and how do we start the work. What do we want from such a revision? Fixing the bugs seems like the primary focus. “10 years is too long until update”.

Kazuho talked about “HTTP/2 attack mitigation” and how to handle clients doing many parallel slow POST requests to a CDN, where the origin server behind it runs a new separate process for each upload.

And with this, the day and the workshop 2017 was over. Thanks to Facebook for hosting us. Thanks to the members of the program committee for driving this event nicely! I had a great time. The topics, the discussions and the people – awesome!

Categories: Mozilla-nl planet

This Week In Rust: This Week in Rust 186

Tue, 13/06/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is structopt, a crate that lets you auto-derive your command-line options from a struct to parse them into. Thanks to m4b for the suggestion!
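As a rough illustration of the idea (a sketch in the structopt style of the time; the struct, fields and attribute values are made up):

    #[macro_use]
    extern crate structopt_derive;
    extern crate structopt;

    use structopt::StructOpt;

    // Command-line options are derived from this struct.
    #[derive(StructOpt, Debug)]
    #[structopt(name = "example")]
    struct Opt {
        #[structopt(short = "v", long = "verbose", help = "Enable verbose output")]
        verbose: bool,

        #[structopt(help = "Input file to process")]
        input: String,
    }

    fn main() {
        // Parse std::env::args() into the struct, printing help or errors as needed.
        let opt = Opt::from_args();
        println!("{:?}", opt);
    }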

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

115 pull requests were merged in the last week.

New Contributors
  • Arthur Arnold
  • Campbell Barton
  • Fuqiao Xue
  • gentoo90
  • Inokentiy Babushkin
  • Michael Killough
  • Nick Whitney
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categories: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - June 13, 2017

Tue, 13/06/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from June 6th - June 13th.

Current work

Frankfurt Kubernetes cluster provisioning

We’re provisioning a new Kubernetes 1.6.4 cluster in Frankfurt (eu-central-1). This cluster takes advantage of features in new versions of kops, helm, and kubectl.

We’ve modified our New Relic, Datadog, and mig DaemonSets with tolerations so we can gather system metrics from both K8s master and worker nodes.

The first apps to be installed in this cluster will be bedrock and basket.

Basket move to Kubernetes

Basket has been moved to Kubernetes! We experienced some networking issues in our Virginia Kubernetes cluster, so traffic has been routed away from this cluster for the time being.


The Firefox 56 activity stream will ship to some users, with some form of snippets integration.

Categories: Mozilla-nl planet

Aaron Klotz: Why I prefer using CRITICAL_SECTIONs for mutexes in Windows Nightly builds

Mon, 12/06/2017 - 23:50

In the past I have argued that our Nightly builds, both debug and release, should use CRITICAL_SECTIONs (with full debug info) for our implementation of mozilla::Mutex. I’d like to illustrate some reasons why this is so useful.

They enable more utility in WinDbg extensions

Every time you initialize a CRITICAL_SECTION, Windows inserts the CS’s debug info into a process-wide linked list. This enables their discovery by the Windows debugging engine, and makes the !cs, !critsec, and !locks commands more useful.

They enable profiling of their initialization and acquisition

When the “Create user mode stack trace database” gflag is enabled, Windows records the call stack of the thread that called InitializeCriticalSection on that CS. Windows also records the call stack of the owning thread once it has acquired the CS. This can be very useful for debugging deadlocks.

They track their contention counts

Since every CS has been placed in a process-wide linked list, we may now ask the debugger to dump statistics about every live CS in the process. In particular, we can ask the debugger to output the contention counts for each CS in the process. After running a workload against Nightly, we may then take the contention output, sort it in descending order, and determine which CRITICAL_SECTIONs are the most contended in the process.

We may then want to more closely inspect the hottest CSes to determine whether there is anything that we can do to reduce contention and all of the extra context switching that entails.

In Summary

When we use SRWLOCKs or initialize our CRITICAL_SECTIONs with the CRITICAL_SECTION_NO_DEBUG_INFO flag, we are denying ourselves access to this information. That’s fine on release builds, but on Nightly I think it is worth having around. While I realize that most Mozilla developers have not used this until now (otherwise I would not be writing this blog post), this rich debugger info is one of those things that you do not miss until you do not have it.
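To make the contrast concrete, here is a minimal sketch of the two initialization paths being compared; these are standard Win32 calls, and the variable and function names are just for illustration:

    #include <windows.h>

    static CRITICAL_SECTION gLock;

    void InitLock(bool wantDebugInfo) {
      if (wantDebugInfo) {
        // Registers this CS's debug info in the process-wide list, which is
        // what !cs, !locks and the contention statistics rely on.
        InitializeCriticalSection(&gLock);
      } else {
        // Skips the debug-info bookkeeping entirely (the release-build style).
        InitializeCriticalSectionEx(&gLock, 0, CRITICAL_SECTION_NO_DEBUG_INFO);
      }
    }

    void DoWork() {
      EnterCriticalSection(&gLock);
      // ... work protected by gLock ...
      LeaveCriticalSection(&gLock);
    }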

For further reading about critical section debug info, check out this archived article from MSDN Magazine.

Categories: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 12 Jun 2017

Mon, 12/06/2017 - 20:00

Mozilla Weekly Project Meeting: The Monday Project Meeting

Categories: Mozilla-nl planet

Air Mozilla: Rain of Rust - 2nd online meeting

Mon, 12/06/2017 - 18:00

Rain of Rust - 2nd online meeting: This event belongs to a series of online Rust events that we run in the month of June 2017.

Categories: Mozilla-nl planet

Daniel Stenberg: HTTP Workshop – London edition. First day.

Mon, 12/06/2017 - 17:40

The HTTP workshop series is back for a third time this northern hemisphere summer. The selected location for the 2017 version is London and this time we’re down to a two-day event (we seem to remove a day every year)…

Nothing in this blog entry is a quote to be attributed to a specific individual but they are my interpretations and paraphrasing of things said or presented. Any mistakes or errors are all mine.

At 9:30 this clear Monday morning, 35 persons sat down around a huge table in a room in the Facebook offices. Most of us are the same familiar faces that have already participated in one or two HTTP workshops, but we also have a set of people this year who haven’t attended before. Getting fresh blood into these discussions is certainly valuable. Most major players are represented, including Mozilla, Google, Facebook, Apple, Cloudflare, Fastly, Akamai, HA-proxy, Squid, Varnish, BBC, Adobe and curl!

Mark (independent, co-chair of the HTTP working group as well as the QUIC working group) kicked it all off with a presentation on quic and where it is right now in terms of standardization and progress. The upcoming draft-04 is becoming the first implementation draft even though the goal for interop is set basically at handshake and some very basic data interaction. The quic transport protocol is still in a huge flux and things have not settled enough for it to be interoperable right now to a very high level.

Jana from Google presented on quic deployment over time and how it right now accounts for about 7% of internet traffic. The Android YouTube app’s switch to QUIC last year showed a huge bump in usage numbers. Quic is a lot about reducing latency, and numbers show that users really do get a reduction. By that nature, it improves the situation most for those who currently have the worst connections.

It doesn’t solve first world problems, this solves third world connection issues.

The currently observed 2x CPU usage increase for QUIC connections as compared to h2+TLS is mostly blamed on the Linux kernel, which apparently is not nearly as up for this job as it should be. Things have clearly been more optimized for TCP over the years, leaving room for improvement in the UDP areas going forward. “Making kernel bypassing an interesting choice”.

Alan from Facebook talked about header compression for quic and presented data, graphs and numbers on how HPACK(-for-quic), QPACK and QCRAM compare when used for quic in different networking conditions and scenarios. Those are the three current header compression alternatives that are open for quic, and Alan first explained the basics behind them and then how they compare when run in his simulator. The current HPACK version (adapted to quic) seems to be out of the question for head-of-line-blocking reasons; the QCRAM suggestion seems to run well but has two main flaws, as it requires an awkward layering violation and an annoying possible reframing requirement on resends. Clearly some more experiments can be done, possibly with a hybrid where some QCRAM ideas are brought into QPACK. Alan hopes to get his simulator open sourced in the coming months, which will then allow more people to experiment and reproduce his numbers.

Hooman from Fastly spoke about problems and challenges with HTTP/2 server push, the 103 Early Hints HTTP response and cache digests. This took the discussions on push into the weeds and into the dark protocol corners we’ve been in before, and all sorts of ideas and suggestions were brought up. Some of them have been discussed before without having been resolved yet, and some ideas were new, at least to me. The general consensus seems to be that push is fairly complicated and there are a lot of corner cases and murky areas that haven’t been clearly documented, but it is a feature that is now being used, and for the CDN use case it can help with a lot more than “just an RTT”. But perhaps the 103 response is good enough for most of the cases?

The discussion on server push and how well it fares is something the QUIC working group is interested in, since the question was asked already this morning if a first version of quic could be considered to be made without push support. The jury is still out on that I think.

ekr from Mozilla spoke about TLS 1.3, 0-RTT, what the TLS 1.3 handshake looks like and how applications and servers can take advantage of the new 0-RTT and “0.5-RTT” features. TLS 1.3 has already passed WGLC and there are now “only” a few issues pending to get solved. Taking advantage of 0-RTT in an HTTP world opens up interesting questions and issues, as HTTP request resends and retries are becoming increasingly prevalent.

Next: day two.

Categories: Mozilla-nl planet

Tarek Ziadé: Molotov, Arsenic & Geckodriver

Mon, 12/06/2017 - 08:05

Molotov is the load testing tool we're using for stressing our web services at Mozilla QA.

It's a very simple framework based on asyncio and aiohttp that lets you run tests with a lot of concurrent coroutines. Using an event loop makes it quite efficient to run a lot of concurrent requests against a single endpoint. Molotov is used with another tool to perform distributed load tests from the cloud. But even if you use it from your laptop, it can send a fair amount of load. On one project, we were able to kill the service with one MacBook sending 30,000 requests per second.

Molotov is also handy to run integration tests. The same scenario used to load test a service can be used to simulate a few users on a service and make sure it behaves as expected.
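A plain HTTP scenario is just a decorated coroutine that receives an aiohttp client session, along these lines (the URL and weight are placeholders; the decorator usage mirrors the example later in this post):

    import molotov

    @molotov.scenario(1)
    async def hit_service(session):
        # 'session' is the aiohttp client session Molotov hands to each scenario.
        async with session.get('http://localhost:8080/__heartbeat__') as resp:
            assert resp.status == 200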

But the tool can only test HTTP(S) endpoints via aiohttp.Client, so if you want to run tests through a real browser, you need to use a tool like Selenium, or drive the browser directly via Marionette for example.

Running real browsers in Molotov can make sense for some specific use cases. For example, you can have a scenario where you want to have several users interact on a web page and have the JS executed there. A chat app, a shared pad, etc.

But the problem with Selenium Python libraries is that they are all written (as far as I know) in a synchronous fashion. They can be used in Molotov of course, but each call would block the loop and defeat concurrency.

The other limitation is that one instance of a browser cannot be used by several concurrent users. For instance in Firefox, even if Marionette is internally built in an async way, if two concurrent scripts are trying to change the active tab at the same time, that would break their own scenario.

Introducing Arsenic

By the time I was thinking about building an async library to drive browsers, I had an interesting conversation with Jonas Obrist whom I had met at Pycon Malaysia last year. He was in the process of writing an asynchronous Selenium client for his needs. We ended up agreeing that it would be great to collaborate on an async library that would work against the new WebDriver protocol, which defines HTTP endpoints a browser can serve.

WebDriver is going to be implemented in all browsers, and a library that used that protocol would be able to drive all kinds of browsers. In Firefox we have a similar feature with Marionette, which is a TCP server you can use to drive Firefox. But eventually, Firefox will implement WebDriver.

Geckodriver is Mozilla's WebDriver implementation, and can be used to proxy calls to Firefox. Geckodriver is an HTTP server that translates WebDriver calls into Marionette calls, and also deals with starting and stopping Firefox.

And Arsenic is the async WebDriver client Jonas started. It's already working great. The project is here on Github:

Molotov + Arsenic == Molosonic

To use Arsenic with Molotov, I just need to pass along the event loop that's used in the load testing tool, and also make sure that it runs at most one Firefox browser per Molotov worker. We want one browser instance attached to each session instance when the test is running.

The setup_session and teardown_session fixtures are the right place to start and stop a browser via Arsenic. To make the setup even easier, I've created a small extension for Molotov called Molosonic, that will take care of running a Firefox browser and attaching it to the worker session.

In the example below, a browser is created every time a worker starts a new session:

    import molotov
    from molosonic import setup_browser, teardown_browser

    @molotov.setup_session()
    async def _setup_session(wid, session):
        await setup_browser(session)

    @molotov.teardown_session()
    async def _teardown_session(wid, session):
        await teardown_browser(session)

    @molotov.scenario(1)
    async def example(session):
        firefox = session.browser
        await firefox.get('')

That's all it takes to use a browser in Molotov in an asynchronous way, thanks to Arsenic. From there, driving a test that simulates several users hitting a webpage and interacting through it requires some synchronization subtleties I will demonstrate in a tutorial I am still working on.

All these projects are still very new and not ready for prime time, but you can still check out Arsenic's docs at

Beyond Molotov use cases, Arsenic is a very exciting project if you need a way to drive browsers in an async program. And async programming is tomorrow's standard in Python.

Categories: Mozilla-nl planet

Firefox Nightly: Date/Time Inputs Enabled on Nightly

Mon, 12/06/2017 - 06:01

Exciting! Firefox is now providing simple and chic interfaces for representing, setting and picking a time or date on Nightly. Various content attributes defined in the HTML standard, such as @step, @min, and @max, are implemented for finer-grained control over data values.

Take a closer look at this feature, and come join us in making it better and improving browser compatibility!

What’s Currently Supported

<input type=time>

The default format is shown below.

Here is how it looks when you are setting a value for a time. The value provided must be in the format “hh:mm[:ss[.mmm]]”, according to the spec.
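For example, a pre-filled time following that format could be written as (illustrative value):

    <input type="time" value="09:30">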

Note that there is no picker for <input type=time>. We decided not to support it since we think it’s easier and faster to enter a time using the keyboard than selecting it from a picker. If you have a different opinion, let us know!

<input type=date>

The layout of an input field for a date is shown below. If the @value attribute is provided, it must be in the format “yyyy-mm-dd”, according to the spec.
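For example, a pre-filled date following that format could be written as (illustrative value):

    <input type="date" value="2017-06-12">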

A date picker pops up when you click on the input field. You can choose to set a date by typing in the field or selecting one from the picker.



Date/Time inputs allow you to set content attributes like @min, @max, @step or @required to specify the desired date/time range.

For example, you can set the @min and @max attribute for <input type=time>, and if the user selects a time outside of the specified range, a validation error message is shown to let the user know the expected range.
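In markup that could look like this (the office-hours range is just an illustration):

    <!-- Times outside 09:00-17:00 trigger the validation message. -->
    <input type="time" min="09:00" max="17:00" required>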

By setting the @step attribute, you can specify the expected date/time interval values. For example:
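(Illustrative values; @step counts seconds for time inputs and days for date inputs.)

    <!-- Accept only 15-minute increments. -->
    <input type="time" step="900">

    <!-- Accept only dates in 7-day steps counted from the @min value. -->
    <input type="date" min="2017-06-01" step="7">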


<input type=date> and <input type=time> input boxes are automatically formatted based on your browser locale, that is, the language of the Firefox build you downloaded and installed. This is the same as your Firefox interface language.

This is how <input type=time> looks in Firefox with Traditional Chinese!

The calendar picker for <input type=date> is also formatted based on your browser language. Hence, the first day of the week can start on Monday or Sunday, depending on your browser language. Note that this is not configurable.

Only the Gregorian calendar system is supported at the moment. All dates and times will be converted to the ISO 8601 format, as specified in the spec, before being submitted to the web server.

Happy Hacking

Wondering how you can help us make this feature more awesome? Download the latest Firefox Nightly and give it a try.

Try it out:

Try it out:

If you are looking for more fun, you can try some more examples on MDN.

If you encounter an issue, report it on Bugzilla, filling in the “summary” and “description” fields.

If you are an enthusiastic developer and would like to contribute to the project, we have features that are in our backlog that you are welcome to contribute to! User interaction behaviors and visual styles are well defined in the specs.

The Date/Time Inputs Team

Categories: Mozilla-nl planet

The Servo Blog: This Week In Servo 104

Mon, 12/06/2017 - 02:30

In the last week, we landed 116 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2017.

This week’s status updates are here.

Notable Additions
  • bholley reduced the size of CSS rules in memory through clever bit packing.
  • SimonSapin avoided unnecessary allocations in ASCII upper/lower-case conversions.
  • hiikezoe implemented animation of shorthand SMIL CSS properties in Stylo.
  • upsuper added support for interpolation between currentColor and numeric colour values.
  • glennw implemented per-frame allocations in the WebRender GPU cache.
  • mbrubeck optimized the implementation of parallel layout to improve performance.
  • jamesmunns wrote a tutorial covering unions in rust-bindgen.
  • jdm increased the size of the buffer used when receiving network data.
  • asajeffrey implemented the basic plumbing for CSS Houdini paint worklets.
  • cbrewster added a custom element registry, as part of his Google Summer of Code project.
  • asajeffrey removed the assumption that Servo contains a single root browser context.
  • jdm added meaningful errors and error reporting to the CSS parser API.
  • gterzian separated event loop logic from the logic of running Servo’s compositor.
  • nox replaced some CSS property-specific code with more generic implementations.
  • bzbarsky reduced the size of an important style system type.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Categories: Mozilla-nl planet

Tom Schuster: The pitfalls of self-hosting JavaScript

Fri, 09/06/2017 - 23:03

Recently the SpiderMonkey team has been looking into improving ECMAScript 6 and real world performance as part of the QuantumFlow project.

While working on this we realized that self-hosting functions can have significant downsides, especially with bad type information. Apparently even the v8 team is moving away from self-hosting to writing more functions in hand-written macro assembler code.

Here is a list of things I can remember off the top of my head:

  • Self-hosted functions that always call out to C++ (native) functions that can not be inlined in IonMonkey are probably a bad idea.
  • Self-hosted functions often have very bad type-information, because they are called from a lot of different frameworks and user code etc. This means we need to absolutely be able to inline that function. (e.g. bug 1364854 about Object.assign or bug 1366372 about Array.from)
  • If a self-hosted function only runs in the baseline compiler we won’t get any inlining, which means all those small function calls to ToLength or Math.max add up. We should probably look into manually inlining more or even using something like Facebook’s prepack.
  • We usually only inline C++ functions called from self-hosted functions in IonMonkey under perfect conditions, if those are not met we fall back to a slow JS to C++ call. (e.g. bug 1366263 about various RegExp methods)
  • Basically this all comes back to somehow making sure that even with bad type information (i.e. polymorphic types) your self-hosted JS code still reaches an acceptable level of performance. For example by introducing inline caching for the in operator we fixed a real world performance issue in the Array.prototype.concat method.
  • Overall just relying on IonMonkey inlining to save our bacon probably isn’t a good way forward.
Categories: Mozilla-nl planet

Jeff Walden: Not a gluten-free trail

Fri, 09/06/2017 - 21:27

Sitting cool at mile 444 right now. I was aiming to be at the Sierras by June 19 or 27, but the snow course I signed up for then got canceled, so I’m in no rush. Might slow down for particular recommended attractions, but otherwise the plan is consistent 20+-mile days.

Categories: Mozilla-nl planet

Sam Foster: Haiku Reflections: Experiences in Reality

Fri, 09/06/2017 - 20:37

Over the several months we worked on Project Haiku, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.

This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see my earlier post.

The problem with navigating virtual worlds

One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side of this is another equally powerful idea: to bring the digital into our environment; make it tangible and real and take up space. If you’ve lived through the emergence of the web over the last 20 years, web browsers, smart phones and tablets - that might seem like stepping backwards. Digital technology and the web specifically have broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But, after 20 years, the web is still in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and which confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can cause a kind of subtle tension.

Imagined space, Hackers-style

As I write this, the display in front of me shows me content framed by a website, which is framed by my browser’s UI, which is framed by the operating system’s window manager and desktop. The display itself has it own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…

The flexibility this arrangement brings is truly incredible. But, for some common tasks it is also a burden. If we could collapse some of these worlds within worlds down to something simpler, direct and tangible, we could engage that ancestral part of our brains that really wants things to have three dimensions and take up space in our world. We need a way to tear off a piece of the web and pin it to the wall, make space for it on the desk, carry it with us; to give it physical presence.

Permission to uni-task

Assigning a single function to a thing - when the capability exists to be many things at once - was another source of skepticism and concern throughout Project Haiku. But in the history of invention, the pendulum swings continually between uni-tasking and multi-tasking; specialized and general. A synthesizer and an electric piano share origins and overlap in functions, but one does not supersede the other. They are different tools for distinct circumstances. In an age of ubiquitous smart phones, wrist watches still provide a function, and project status and values. There’s a pragmatism and attractive simplicity to dedicating a single task to an object we use. The problem is that as we stack functions into a single device, each new possibility requires a means of selecting which one we want. Reading or writing? Bold or italic text? Shared or private, published or deleted, for one group or broadcast to all? Each decision, each action is an interaction with a digital interface, stacked and overlaid into the same physical object that is our computer, tablet or phone. Uni-tasking devices give us an opportunity to dismantle this stack and peel away the layers.

The two ideas of single function and occupying physical space are complementary: I check the weather by looking out the window, I check the time by glancing at my wrist, the recipe I want is bookmarked in the last book on the shelf. We can create similar coordinates or landmarks for our digital interactions as well.

Our sense of place and proximity is also an important input to how we prioritize what needs doing. A sink full of dishes demands my attention - while I’m in the kitchen. But when I’m downtown, it has to wait while I attend to other matters. Similarly, a colleague raising a question can expect me to answer when I’m in the same room. But we both understand that as the distance between us changes, so does the urgency to provide an answer. When I’m at the office, work things are my priority. As I travel home, my context shifts. Expectations change as we move from place to place, and physical locations and boundaries help partition our lives. It’s true that the smart phone started as a huge convenience by un-tethering us from the desk to carry our access to information - and its access to us - with us. But, by doing so, we lost some of the ability to walk away; to step out from a conversation or leave work behind.

A concept rendering using one of the proposed form-factors for the Haiku device

Addressing these tensions became one of the goals of Project Haiku. As we talked to people about their interactions with technology in their home and in their lives, we saw again and again how poor a fit the best of today’s solutions were. What began as empowering and liberating has started to infringe on people’s freedom to chose how to spend their time.

When I’m spending time on my computer, its just more opportunities for it to beep at me. Every chance I get I turn it off. Typing into a box - what fun is that? You guys should come up with something… good.

This is a quote from one of our early interviews. It was a refreshing perspective and sentiments like this - as well as the moments of joy and connectedness that we saw were possible - that helped steer this project. We weren’t able to finish the story by bringing a product to market. But the process and all we learned along the way will stick with me. It is my hope that this series of posts will plant some seeds and perhaps give other future projects a small nudge towards making our technology experiences more grounded in the world we move about in.

Categories: Mozilla-nl planet

Mike Hoye: Trimming The Roster

Fri, 09/06/2017 - 20:25

This is a minor administrative note about Planet Mozilla.

In the next few weeks I’ll be doing some long-overdue maintenance and cleaning out dead feeds from Planet and the various sub-Planet blogrolls to help keep them focused and helpful.

I’m going to start by scanning existing feeds and culling any that error out every day for the next two weeks. After that I’ll go down the list of remaining feeds individually, and confirm their authors’ ongoing involvement in Mozilla and ask for tagged feeds wherever possible. “Involved in Mozilla” can mean a lot of things – the mission, the many projects, the many communities – so I’ll be happy to take a yes or no and leave it at that.

The process should be pretty painless – with a bit of luck you won’t even notice – but I thought I’d give you a heads up regardless. As usual, leave a comment or email me if you’ve got questions.

Categories: Mozilla-nl planet