Mozilla Nederland
The Dutch Mozilla community

Daniel Stenberg: HTTP Workshop s03e02

Mozilla planet - ti, 13/06/2017 - 17:29

(Season three, episode two)

Previously, on the HTTP Workshop. Yesterday ended with a much appreciated group dinner and now we’re back energized and eager to continue blabbing about HTTP frames, headers and similar things.

Martin from Mozilla talked about how “connection management is hard”. Part of the discussion revolved around the HTTP/2 connection coalescing that I’ve blogged about before. The ORIGIN frame is a draft for a suggested way for servers to more clearly announce which origins they can answer for on a connection, which should reduce how often 421 responses are needed. The ORIGIN frame overrides DNS and will allow coalescing even for origins that don’t otherwise resolve to the same IP addresses. Related topics included the Alt-Svc header, a suggested CERTIFICATE frame, and the question of how an HTTP/2 server knows which origins it can PUSH for.

A lot of positive words were expressed about the ORIGIN frame. Wildcard support?

Willy from HA-proxy talked about his Memory and CPU efficient HPACK decoding algorithm. Personally, I think the award for the best slides of the day goes to Willy’s hand-drawn notes.

Lucas from the BBC talked about usage data for iPlayer: how much data and how many requests they serve, and how their largest share of users are “non-browsers”. Lucas mentioned their work on writing a libcurl adaptation to make gstreamer use it instead of libsoup. Lucas’s talk triggered a lengthy discussion on what the needs are and how (if at all) you can divide clients into browsers and non-browsers.

Wenbo from Google spoke about Websockets and showed usage data from Chrome. The median websockets connection time is 20 seconds, around 10% are shorter than 0.5 seconds, and at the 97th percentile they live over an hour. The connection success rates for Websockets are depressingly low when done in the clear, while the situation is better when done over HTTPS. For some reason the success rate on Mac seems to be extra low, and Firefox telemetry seems to agree. Websockets over HTTP/2 (or not) is an old hot topic that brought us back to reiterate issues we’ve debated a lot before. This time we also got a lovely and long side track into web push and how that works.

Roy talked about Waka, an HTTP replacement protocol idea and concept that Roy has been carrying around for a long time (he started this in 2001) and on which he is now coming back to do actual work. A big part of the discussion focused on the wakli compression ideas: what the idea is, and how it could be done and evaluated. Also, Roy is not a fan of content negotiation and wants it done differently, so he’s addressing that in Waka.

Vlad talked about his suggestion for how to do cross-stream compression in HTTP/2, to significantly enhance the compression ratio when, for example, fetching many small resources over h2 compared to a single huge resource over h1. The security aspect of this feature is what caught most people’s attention and drove the following discussion. How can we make sure this doesn’t leak sensitive information? What protocol mechanisms exist, or can we invent, to help make this work in a way that is safer (by default)?

Trailers. This is another favorite topic that we’ve discussed before and that has resurfaced. There are people around the table who’d like to see support for trailers, and we discussed the same topic at the HTTP Workshop in 2016 as well. The corresponding issue on trailers filed in the fetch github repo shows a lot of the concerns.

Julian brought up the subject of “7230bis” – when and how do we start the work. What do we want from such a revision? Fixing the bugs seems like the primary focus. “10 years is too long until update”.

Kazuho talked about “HTTP/2 attack mitigation” and how to handle clients doing many parallel slow POST requests to a CDN, with an origin server behind it that runs a new separate process for each upload.

And with this, the day and the workshop 2017 was over. Thanks to Facebook for hosting us. Thanks to the members of the program committee for driving this event nicely! I had a great time. The topics, the discussions and the people – awesome!

Categorieën: Mozilla-nl planet

This Week In Rust: This Week in Rust 186

Mozilla planet - ti, 13/06/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate is structopt, a crate that lets you auto-derive your command-line options from a struct to parse them into. Thanks to m4b for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

115 pull requests were merged in the last week.

New Contributors
  • Arthur Arnold
  • Campbell Barton
  • Fuqiao Xue
  • gentoo90
  • Inokentiy Babushkin
  • Michael Killough
  • Nick Whitney
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

No quote was selected for QotW.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

Categorieën: Mozilla-nl planet

Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - June 13, 2017

Mozilla planet - ti, 13/06/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from June 6th - June 13th.

Current work

Frankfurt Kubernetes cluster provisioning

We’re provisioning a new Kubernetes 1.6.4 cluster in Frankfurt (eu-central-1). This cluster takes advantage of features in new versions of kops, helm, and kubectl.

We’ve modified our New Relic, Datadog, and mig DaemonSets with tolerations so we can gather system metrics from both K8s master and worker nodes.

The first apps to be installed in this cluster will be bedrock and basket.

Basket move to Kubernetes

Basket has been moved to Kubernetes! We experienced some networking issues in our Virginia Kubernetes cluster, so traffic has been routed away from this cluster for the time being.

Snippets

Firefox 56’s Activity Stream will ship to some users, with some form of snippets integration.

Links
Categorieën: Mozilla-nl planet

Aaron Klotz: Why I prefer using CRITICAL_SECTIONs for mutexes in Windows Nightly builds

Mozilla planet - mo, 12/06/2017 - 23:50

In the past I have argued that our Nightly builds, both debug and release, should use CRITICAL_SECTIONs (with full debug info) for our implementation of mozilla::Mutex. I’d like to illustrate some reasons why this is so useful.

They enable more utility in WinDbg extensions

Every time you initialize a CRITICAL_SECTION, Windows inserts the CS’s debug info into a process-wide linked list. This enables their discovery by the Windows debugging engine, and makes the !cs, !critsec, and !locks commands more useful.

They enable profiling of their initialization and acquisition

When the “Create user mode stack trace database” gflag is enabled, Windows records the call stack of the thread that called InitializeCriticalSection on that CS. Windows also records the call stack of the owning thread once it has acquired the CS. This can be very useful for debugging deadlocks.

They track their contention counts

Since every CS has been placed in a process-wide linked list, we may now ask the debugger to dump statistics about every live CS in the process. In particular, we can ask the debugger to output the contention counts for each CS in the process. After running a workload against Nightly, we may then take the contention output, sort it in descending order, and determine which CRITICAL_SECTIONs are the most contended in the process.

We may then want to more closely inspect the hottest CSes to determine whether there is anything that we can do to reduce contention and all of the extra context switching that entails.
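The contention-triage workflow described above can be sketched in a few lines of Python. This is only an illustration: it assumes you have already captured the debugger's per-CS statistics as lines of the form "<address> <contention-count>", which is a simplification of what WinDbg actually prints.

```python
def top_contended(lines, n=10):
    # Parse "<cs-address> <contention-count>" pairs; skip anything else
    # (headers, blanks, garbage) rather than crashing on it.
    stats = []
    for line in lines:
        parts = line.split()
        if len(parts) != 2 or not parts[1].isdigit():
            continue
        stats.append((int(parts[1]), parts[0]))
    # Sort by contention count, most contended first.
    stats.sort(reverse=True)
    return [(addr, count) for count, addr in stats[:n]]

sample = ["0x7ff001 12", "0x7ff002 480", "garbage header", "0x7ff003 97"]
print(top_contended(sample))
```

The addresses and counts here are made up; the point is only the sort-and-inspect loop the post describes.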

In Summary

When we use SRWLOCKs or initialize our CRITICAL_SECTIONs with the CRITICAL_SECTION_NO_DEBUG_INFO flag, we are denying ourselves access to this information. That’s fine on release builds, but on Nightly I think it is worth having around. While I realize that most Mozilla developers have not used this until now (otherwise I would not be writing this blog post), this rich debugger info is one of those things that you do not miss until you do not have it.

For further reading about critical section debug info, check out this archived article from MSDN Magazine.

Categorieën: Mozilla-nl planet

Air Mozilla: Mozilla Weekly Project Meeting, 12 Jun 2017

Mozilla planet - mo, 12/06/2017 - 20:00

Mozilla Weekly Project Meeting

The Monday Project Meeting

Categorieën: Mozilla-nl planet

Air Mozilla: Rain of Rust - 2nd online meeting

Mozilla planet - mo, 12/06/2017 - 18:00

Rain of Rust - 2nd online meeting

This event belongs to a series of online Rust events that we run in the month of June, 2017.

Categorieën: Mozilla-nl planet

Daniel Stenberg: HTTP Workshop – London edition. First day.

Mozilla planet - mo, 12/06/2017 - 17:40

The HTTP workshop series is back for a third time this northern hemisphere summer. The selected location for the 2017 version is London and this time we’re down to a two-day event (we seem to remove a day every year)…

Nothing in this blog entry is a quote to be attributed to a specific individual but they are my interpretations and paraphrasing of things said or presented. Any mistakes or errors are all mine.

At 9:30 this clear Monday morning, 35 people sat down around a huge table in a room in the Facebook offices. Most of us are the same familiar faces that have already participated in one or two HTTP workshops, but we also have a set of people this year who haven’t attended before. Getting fresh blood into these discussions is certainly valuable. Most major players are represented, including Mozilla, Google, Facebook, Apple, Cloudflare, Fastly, Akamai, HA-proxy, Squid, Varnish, BBC, Adobe and curl!

Mark (independent, co-chair of the HTTP working group as well as the QUIC working group) kicked it all off with a presentation on quic and where it is right now in terms of standardization and progress. The upcoming draft-04 is becoming the first implementation draft even though the goal for interop is set basically at handshake and some very basic data interaction. The quic transport protocol is still in a huge flux and things have not settled enough for it to be interoperable right now to a very high level.

Jana from Google presented on quic deployment over time and how it right now uses about 7% of internet traffic. The Android Youtube app’s switch to QUIC last year showed a huge bump in usage numbers. Quic is a lot about reducing latency and numbers show that users really do get a reduction. By that nature, it improves the situation best for those who currently have the worst connections.

It doesn’t solve first world problems, this solves third world connection issues.

The currently observed 2x CPU usage increase for QUIC connections as compared to h2+TLS is mostly blamed on the Linux kernel, which apparently is not nearly as good at this job as it should be. Things have clearly been more optimized for TCP over the years, leaving room for improvement in the UDP areas going forward. “Making kernel bypassing an interesting choice”.

Alan from Facebook talked about header compression for quic and presented data, graphs and numbers on how HPACK(-for-quic), QPACK and QCRAM compare when used for quic in different networking conditions and scenarios. Those are the three current header compression alternatives that are open for quic, and Alan first explained the basics behind them and then how they compare when run in his simulator. The current HPACK version (adapted to quic) seems to be out of the question for head-of-line-blocking reasons, and the QCRAM suggestion seems to run well but has two main flaws: it requires an awkward layering violation and an annoying possible reframing requirement on resends. Clearly some more experiments can be done, possibly with a hybrid where some QCRAM ideas are brought into QPACK. Alan hopes to get his simulator open sourced in the coming months, which will then allow more people to experiment and reproduce his numbers.

Hooman from Fastly talked about problems and challenges with HTTP/2 server push, the 103 early hints HTTP response, and cache digests. This took the discussions on push into the weeds and into the dark protocol corners we’ve been in before, and all sorts of ideas and suggestions were brought up. Some of them have been discussed before without having been resolved yet, and some ideas were new, at least to me. The general consensus seems to be that push is fairly complicated and there are a lot of corner cases and murky areas that haven’t been clearly documented, but it is a feature that is now being used, and for the CDN use case it can help with a lot more than “just an RTT”. But is perhaps the 103 response good enough for most of the cases?

The discussion on server push and how well it fares is something the QUIC working group is interested in, since the question was asked already this morning if a first version of quic could be considered to be made without push support. The jury is still out on that I think.

ekr from Mozilla spoke about TLS 1.3, 0-RTT, what the TLS 1.3 handshake looks like, and how applications and servers can take advantage of the new 0-RTT and “0.5-RTT” features. TLS 1.3 has already passed the WGLC and there are now “only” a few issues pending to get solved. Taking advantage of 0-RTT in an HTTP world opens up interesting questions and issues, as HTTP request resends and retries become increasingly prevalent.

Next: day two.

Categorieën: Mozilla-nl planet

Tarek Ziadé: Molotov, Arsenic & Geckodriver

Mozilla planet - mo, 12/06/2017 - 08:05

Molotov is the load testing tool we're using for stressing our web services at Mozilla QA.

It's a very simple framework based on asyncio & aiohttp that will let you run tests with a lot of concurrent coroutines. Using an event loop makes it quite efficient to run a lot of concurrent requests against a single endpoint. Molotov is used with another tool to perform distributed load tests from the cloud. But even if you use it from your laptop, it can send a fair amount of load. On one project, we were able to kill the service with one MacBook sending 30,000 requests per second.
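The concurrency model that makes this possible can be sketched with plain asyncio. This stub replaces the real HTTP calls (Molotov uses aiohttp for those) with asyncio.sleep, but it shows why a single event loop can keep thousands of "requests" in flight at once:

```python
import asyncio

async def fake_request(i):
    # Stand-in for an HTTP round trip: awaiting yields control back to the
    # event loop instead of blocking an OS thread.
    await asyncio.sleep(0.01)
    return i

async def run_load(n):
    # All n coroutines are in flight together, so total wall time is close
    # to one request's latency, not n of them.
    results = await asyncio.gather(*(fake_request(i) for i in range(n)))
    return len(results)

print(asyncio.run(run_load(1000)))
```

A thread-per-request design would need 1000 threads for the same concurrency; the loop does it in one.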

Molotov is also handy to run integration tests. The same scenario used to load test a service can be used to simulate a few users on a service and make sure it behaves as expected.

But the tool can only test HTTP(S) endpoints via aiohttp.Client, so if you want to run tests through a real browser, you need to use a tool like Selenium, or drive the browser directly via Marionette for example.

Running real browsers in Molotov can make sense for some specific use cases. For example, you can have a scenario where you want to have several users interact on a web page and have the JS executed there. A chat app, a shared pad, etc.

But the problem with Selenium Python libraries is that they are all written (as far as I know) in a synchronous fashion. They can be used in Molotov of course, but each call would block the loop and defeat concurrency.
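A common general-purpose workaround (not what Molotov or Arsenic do, just a standard asyncio pattern) is to push each synchronous call onto a thread pool with run_in_executor, so the event loop itself never blocks:

```python
import asyncio
import time

def blocking_call(x):
    # Stand-in for a synchronous Selenium/Marionette call that would
    # otherwise stall the whole event loop.
    time.sleep(0.05)
    return x * 2

async def driven_concurrently(values):
    loop = asyncio.get_running_loop()
    # Each blocking call runs in the default thread-pool executor, so the
    # coroutines still interleave instead of serializing on the loop.
    tasks = [loop.run_in_executor(None, blocking_call, v) for v in values]
    return await asyncio.gather(*tasks)

print(asyncio.run(driven_concurrently([1, 2, 3])))
```

This trades the single-threaded efficiency of pure async code for compatibility with sync libraries, which is why a natively async client like Arsenic is still preferable.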

The other limitation is that one instance of a browser cannot be used by several concurrent users. For instance in Firefox, even though Marionette is internally built in an async way, if two concurrent scripts tried to change the active tab at the same time, they would break each other's scenarios.

Introducing Arsenic

By the time I was thinking about building an async library to drive browsers, I had an interesting conversation with Jonas Obrist whom I had met at Pycon Malaysia last year. He was in the process of writing an asynchronous Selenium client for his needs. We ended up agreeing that it would be great to collaborate on an async library that would work against the new WebDriver protocol, which defines HTTP endpoints a browser can serve.

WebDriver is going to be implemented in all browsers, and a library that used that protocol would be able to drive all kinds of browsers. In Firefox we have a similar feature with Marionette, which is a TCP server you can use to drive Firefox. But eventually, Firefox will implement WebDriver.

Geckodriver is Mozilla's WebDriver implementation, and can be used to proxy calls to Firefox. Geckodriver is an HTTP server that translates WebDriver calls into Marionette calls, and also deals with starting and stopping Firefox.

And Arsenic is the async WebDriver client Jonas started. It's already working great. The project is here on Github: https://github.com/HDE/arsenic

Molotov + Arsenic == Molosonic

To use Arsenic with Molotov, I just need to pass along the event loop that's used in the load testing tool, and also make sure that at most one Firefox browser runs per Molotov worker. We want to have a browser instance attached per session instance when the test is running.

The setup_session and teardown_session fixtures are the right place to start and stop a browser via Arsenic. To make the setup even easier, I've created a small extension for Molotov called Molosonic, that will take care of running a Firefox browser and attaching it to the worker session.

In the example below, a browser is created every time a worker starts a new session:

import molotov
from molosonic import setup_browser, teardown_browser

@molotov.setup_session()
async def _setup_session(wid, session):
    await setup_browser(session)

@molotov.teardown_session()
async def _teardown_session(wid, session):
    await teardown_browser(session)

@molotov.scenario(1)
async def example(session):
    firefox = session.browser
    await firefox.get('http://example.com')

That's all it takes to use a browser in Molotov in an asynchronous way, thanks to Arsenic. From there, driving a test that simulates several users hitting a webpage and interacting through it requires some synchronization subtleties I will demonstrate in a tutorial I am still working on.

All these projects are still very new and not ready for prime time, but you can still check out Arsenic's docs at http://arsenic.readthedocs.io

Beyond Molotov use cases, Arsenic is a very exciting project if you need a way to drive browsers in an async program. And async programming is tomorrow's standard in Python.

Categorieën: Mozilla-nl planet

Firefox Nightly: Date/Time Inputs Enabled on Nightly

Mozilla planet - mo, 12/06/2017 - 06:01

Exciting! Firefox is now providing simple and chic interfaces for representing, setting and picking a time or date on Nightly. Various content attributes defined in the HTML standard, such as @step, @min, and @max, are implemented for finer-grained control over data values.

Take a closer look at this feature, and come join us in making it better with better browser compatibility!

What’s Currently Supported

<input type=time>

The default format is shown below.

Here is how it looks when you are setting a value for a time. The value provided must be in the format “hh:mm[:ss[.mmm]]”, according to the spec.

Note that there is no picker for <input type=time>. We decided not to support it since we think it’s easier and faster to enter a time using the keyboard than selecting it from a picker. If you have a different opinion, let us know!

<input type=date>

The layout of an input field for a date is shown below. If the @value attribute is provided, it must be in the format “yyyy-mm-dd”, according to the spec.

A date picker pops up when you click on the input field. You can choose to set a date by typing in the field or by selecting one from the picker.

Validation

Date/Time inputs allow you to set content attributes like @min, @max, @step or @required to specify the desired date/time range.

For example, you can set the @min and @max attribute for <input type=time>, and if the user selects a time outside of the specified range, a validation error message is shown to let the user know the expected range.

By setting the @step attribute, you can specify the expected date/time interval values. For example:

Localization

<input type=date> and <input type=time> input boxes are automatically formatted based on your browser locale, meaning the language of the Firefox build you downloaded and installed. This is the same as your Firefox interface language.

This is how <input type=time> looks in Traditional Chinese Firefox!

The calendar picker for <input type=date> is also formatted based on your browser language. Hence, the first day of the week can start on Monday or Sunday, depending on your browser language. Note that this is not configurable.

Only the Gregorian calendar system is supported at the moment. All dates and times will be converted to ISO 8601 format, as specified in the spec, before being submitted to the web server.
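The ISO 8601 formats involved are easy to see with Python's datetime module; this only illustrates the wire format, not anything Firefox does internally:

```python
from datetime import date, time

# <input type=date> submits "yyyy-mm-dd"; <input type=time> submits
# "hh:mm[:ss[.mmm]]" per the HTML spec.
d = date(2017, 6, 13)
t = time(17, 29)
print(d.isoformat())        # "2017-06-13"
print(t.strftime("%H:%M"))  # "17:29"
```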

Happy Hacking

Wondering how you can help us make this feature more awesome? Download the latest Firefox Nightly and give it a try.

Try it out:

If you are looking for more fun, you can try some more examples on MDN.

If you encounter an issue, report it by submitting the “summary” and “description” fields on Bugzilla.

If you are an enthusiastic developer and would like to contribute to the project, we have features that are in our backlog that you are welcome to contribute to! User interaction behaviors and visual styles are well defined in the specs.

Thanks,
The Date/Time Inputs Team

Categorieën: Mozilla-nl planet

The Servo Blog: This Week In Servo 104

Mozilla planet - mo, 12/06/2017 - 02:30

In the last week, we landed 116 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2017.

This week’s status updates are here.

Notable Additions
  • bholley reduced the size of CSS rules in memory through clever bit packing.
  • SimonSapin avoided unnecessary allocations in ASCII upper/lower-case conversions.
  • hiikezoe implemented animation of shorthand SMIL CSS properties in Stylo.
  • upsuper added support for interpolation between currentColor and numeric colour values.
  • glennw implemented per-frame allocations in the WebRender GPU cache.
  • mbrubeck optimized the implementation of parallel layout to improve performance.
  • jamesmunns wrote a tutorial covering unions in rust-bindgen.
  • jdm increased the size of the buffer used when receiving network data.
  • asajeffrey implemented the basic plumbing for CSS Houdini paint worklets.
  • cbrewster added a custom element registry, as part of his Google Summer of Code project.
  • asajeffrey removed the assumption that Servo contains a single root browser context.
  • jdm added meaningful errors and error reporting to the CSS parser API.
  • gterzian separated event loop logic from the logic of running Servo’s compositor.
  • nox replaced some CSS property-specific code with more generic implementations.
  • bzbarsky reduced the size of an important style system type.
New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Categorieën: Mozilla-nl planet

Tom Schuster: The pitfalls of self-hosting JavaScript

Mozilla planet - fr, 09/06/2017 - 23:03

Recently the SpiderMonkey team has been looking into improving ECMAScript 6 and real world performance as part of the QuantumFlow project.

While working on this we realized that self-hosting functions can have significant downsides, especially with bad type information. Apparently even the v8 team is moving away from self-hosting toward writing more functions in hand-written macro assembler code.

Here is a list of things I can remember off the top of my head:

  • Self-hosted functions that always call out to C++ (native) functions that can not be inlined in IonMonkey are probably a bad idea.
  • Self-hosted functions often have very bad type-information, because they are called from a lot of different frameworks and user code etc. This means we need to absolutely be able to inline that function. (e.g. bug 1364854 about Object.assign or bug 1366372 about Array.from)
  • If a self-hosted function only runs in the baseline compiler we won’t get any inlining, which means all those small function calls to ToLength or Math.max add up. We should probably look into manually inlining more, or even into using something like Facebook’s prepack.
  • We usually only inline C++ functions called from self-hosted functions in IonMonkey under perfect conditions, if those are not met we fall back to a slow JS to C++ call. (e.g. bug 1366263 about various RegExp methods)
  • Basically this all comes back to somehow making sure that even with bad type information (i.e. polymorphic types) your self-hosted JS code still reaches an acceptable level of performance. For example by introducing inline caching for the in operator we fixed a real world performance issue in the Array.prototype.concat method.
  • Overall just relying on IonMonkey inlining to save our bacon probably isn’t a good way forward.
Categorieën: Mozilla-nl planet

Jeff Walden: Not a gluten-free trail

Mozilla planet - fr, 09/06/2017 - 21:27

Sitting cool at mile 444 right now. I was aiming to be at the Sierras by June 19 or 27, but the snow course I signed up for then got canceled, so I’m in no rush. Might slow down for particular recommended attractions, but otherwise the plan is consistent 20+-mile days.

Categorieën: Mozilla-nl planet

Sam Foster: Haiku Reflections: Experiences in Reality

Mozilla planet - fr, 09/06/2017 - 20:37

Over the several months we worked on Project Haiku, one of the questions we were repeatedly asked was “Why not just make a smartphone app to do this?” Answering that gets right to the heart of what we were trying to demonstrate with Project Haiku specifically, and wanted to see more of in general in IoT/Connected Devices.

This is part of a series of posts on a project I worked on for Mozilla’s Connected Devices group. For context and an overview of the project, please see my earlier post.

The problem with navigating virtual worlds

One of IoT’s great promises is to extend the internet and the web to devices and sensors in our physical world. The flip side of this is another equally powerful idea: to bring the digital into our environment; make it tangible and real and take up space. If you’ve lived through the emergence of the web over the last 20 years, web browsers, smart phones and tablets - that might seem like stepping backwards. Digital technology and the web specifically have broken down physical and geographical barriers to accessing information. We can communicate and share experiences across the globe with a few clicks or keystrokes. But, after 20 years, the web is still in “cyber-space”. We go to this parallel virtual universe and navigate with pointers and maps that have no reference to our analog lives and which confound our intuitive sense of place. This makes wayfinding and building mental models difficult. And without being grounded by inputs and context from our physical environment, the simultaneous existence of these two worlds remains unsettling and can cause a kind of subtle tension.

Imagined space, Hackers-style

As I write this, the display in front of me shows me content framed by a website, which is framed by my browser’s UI, which is framed by the operating system’s window manager and desktop. The display itself has its own frame - a bezel on an enclosure sitting on my desk. And these are just the literal boxes. Then there are the conceptual boxes - a page within a site, within a domain, presented by an application as one of many tabs. Sites, domains, applications, windows, homescreens, desktops, workspaces…

The flexibility this arrangement brings is truly incredible. But, for some common tasks it is also a burden. If we could collapse some of these worlds within worlds down to something simpler, direct and tangible, we could engage that ancestral part of our brains that really wants things to have three dimensions and take up space in our world. We need a way to tear off a piece of the web and pin it to the wall, make space for it on the desk, carry it with us; to give it physical presence.

Permission to uni-task

Assigning a single function to a thing - when the capability exists to be many things at once - was another source of skepticism and concern throughout Project Haiku. But in the history of invention, the pendulum swings continually between uni-tasking and multi-tasking; specialized and general. A synthesizer and an electric piano share origins and overlap in functions, but one does not supersede the other. They are different tools for distinct circumstances. In an age of ubiquitous smart phones, wrist watches still provide a function, and project status and values. There’s a pragmatism and attractive simplicity to dedicating a single task to an object we use. The problem is that as we stack functions into a single device, each new possibility requires a means of selecting which one we want. Reading or writing? Bold or italic text? Shared or private, published or deleted, for one group or broadcast to all? Each decision, each action is an interaction with a digital interface, stacked and overlaid into the same physical object that is our computer, tablet or phone. Uni-tasking devices give us an opportunity to dismantle this stack and peel away the layers.

The two ideas of single function and occupying physical space are complementary: I check the weather by looking out the window, I check the time by glancing at my wrist, the recipe I want is bookmarked in the last book on the shelf. We can create similar coordinates or landmarks for our digital interactions as well.

Our sense of place and proximity is also an important input to how we prioritize what needs doing. A sink full of dishes demands my attention - while I’m in the kitchen. But when I’m downtown, it has to wait while I attend to other matters. Similarly, a colleague raising a question can expect me to answer when I’m in the same room. But we both understand that as the distance between us changes, so does the urgency to provide an answer. When I’m at the office, work things are my priority. As I travel home, my context shifts. Expectations change as we move from place to place, and physical locations and boundaries help partition our lives. It’s true that the smart phone started as a huge convenience by un-tethering us from the desk to carry our access to information - and its access to us - with us. But, by doing so, we lost some of the ability to walk away; to step out from a conversation or leave work behind.

A concept rendering using one of the proposed form-factors for the Haiku device

Addressing these tensions became one of the goals of Project Haiku. As we talked to people about their interactions with technology in their home and in their lives, we saw again and again how poor a fit the best of today’s solutions were. What began as empowering and liberating has started to infringe on people’s freedom to choose how to spend their time.

When I’m spending time on my computer, it’s just more opportunities for it to beep at me. Every chance I get I turn it off. Typing into a box - what fun is that? You guys should come up with something… good.

This is a quote from one of our early interviews. It was a refreshing perspective, and sentiments like this - as well as the moments of joy and connectedness that we saw were possible - helped steer this project. We weren’t able to finish the story by bringing a product to market. But the process and all we learned along the way will stick with me. It is my hope that this series of posts will plant some seeds and perhaps give other future projects a small nudge towards making our technology experiences more grounded in the world we move about in.

Categorieën: Mozilla-nl planet

Mike Hoye: Trimming The Roster

Mozilla planet - fr, 09/06/2017 - 20:25

This is a minor administrative note about Planet Mozilla.

In the next few weeks I’ll be doing some long-overdue maintenance and cleaning out dead feeds from Planet and the various sub-Planet blogrolls to help keep them focused and helpful.

I’m going to start by scanning existing feeds and culling any that error out every day for the next two weeks. After that I’ll go down the list of remaining feeds individually, and confirm their author’s ongoing involvement in Mozilla and ask for tagged feeds wherever possible. “Involved in Mozilla” can mean a lot of things – the mission, the many projects, the many communities – so I’ll be happy to take a yes or no and leave it at that.

The process should be pretty painless – with a bit of luck you won’t even notice – but I thought I’d give you a heads up regardless. As usual, leave a comment or email me if you’ve got questions.

Categorieën: Mozilla-nl planet

Support.Mozilla.Org: Event Report: SUMO Community Meeting – Abidjan (3-4 June 2017)

Mozilla planet - fr, 09/06/2017 - 20:04

Hey there, SUMO Nation!

You may remember Abbackar’s previous post about meetings in Ivory Coast. I am very happy to inform you that the community there is going strong and keeps supporting Mozilla’s mission. Read Abbackar’s report from the recent meeting in Abidjan below.

On the weekend of 3rd and 4th of June, the community members of Côte d’Ivoire met in Abidjan for a SUMO Community Meetup. The event was attended by 21 people, six of whom were new contributors, interested in participating in Mozilla’s mission through SUMO.

The Saturday meeting started at 9 and went on for six hours, with a small lunch break. During that time we talked about the state of SUMO and the Mozilla updates that had an influence on our community over the past months.

We also introduced new contributors to the website and the philosophy of SUMO – as well as the Respond social support tool. New contributors had a chance to see both sites in action, learn how they worked and discuss their future contributions.

After that, we had a practical session in Respond, allowing existing and new contributors to exchange knowledge and experiences.

An important fact to mention is that the computer we used for the event is a “Jerry” – a computer in a can – made from recycled materials by our community members.

After the training and a session of answering questions, we ended the first day of the meetup.

Sunday started with the analysis of the 2016 balance sheet and a discussion of our community’s roadmap for 2017. We talked about ways of increasing our community engagement in SUMO in 2017. Several solutions were discussed at length, allowing us to share and assign tasks to people present at the event.

We decided to train together on a single theme each month to increase focus. We also acknowledged the cancellation of our Nouchi localization project, due to the difficulties with creating a new technical vocabulary within that language. Our localization efforts will be focused on French from now on.

The Sunday lunch took place in a great atmosphere as we shared a local dish called garba. The meeting ended with a Q&A session focused on addressing the concerns and doubts of the new contributors.

The meeting in Abidjan was a great opportunity to catch up, discuss the most recent updates, motivate existing contributors and recruit new ones for Mozilla’s mission. We ended the whole event with a family photo of all the people present.

We are all looking forward to the second session in Bouake, in the center of Côte d’Ivoire.

We are humbled and grateful for the effort and passion of the community in Ivory Coast. Thank you for your inspiring report and local leadership, Abbackar :-) Onwards and forwards, to Bouake!

Categorieën: Mozilla-nl planet

Hacks.Mozilla.Org: CSS Shapes, clipping and masking – and how to use them

Mozilla planet - fr, 09/06/2017 - 17:46

The release of Firefox 54 is just around the corner and it will introduce new features into an already cool CSS property: clip-path.

clip-path is a property that allows us to clip (i.e., cut away) parts of an element. Up until now, in Firefox you could only use an SVG to clip an element:

But with Firefox 54, you will be able to use CSS shapes as well: insets, circles, ellipses and polygons!

Note: this post contains many demos, which require support for clip-path and mask. To be able to see and interact with every demo in this post, you will need Firefox 54 or higher.

Basic usage

It’s important to take into account that clip-path does not accept “images” as input, but <clipPath> elements:

See the Pen clip-path (static SVG mask) by ladybenko (@ladybenko) on CodePen.

A cool thing is that these <clipPath> elements can contain SVG animations:

See the Pen clip-path (animated SVG) by ladybenko (@ladybenko) on CodePen.

However, with the upcoming Firefox release we will also have CSS shape functions at our disposal. These allow us to define shapes within our stylesheets, so there is no need for an SVG. The shape functions we have at our disposal are: circle, ellipse, inset and polygon. You can see them in action here:

See the Pen oWJBwW by ladybenko (@ladybenko) on CodePen.
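As a rough sketch of those four functions (the class names here are hypothetical, not from the demo):

```css
/* Each rule clips its element with one of the four CSS shape functions. */
.clip-inset   { clip-path: inset(10% 20% 10% 20% round 8px); }  /* rectangle inset from each edge, rounded corners */
.clip-circle  { clip-path: circle(40% at 50% 50%); }            /* circle: radius at a center point */
.clip-ellipse { clip-path: ellipse(45% 30% at 50% 50%); }       /* ellipse: x/y radii at a center point */
.clip-polygon { clip-path: polygon(50% 0, 100% 100%, 0 100%); } /* triangle from three vertices */
```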

And not only that, but we can animate them with CSS as well. The only restrictions are that we cannot “mix” function shapes (i.e., morphing from a circle to an inset), and that when animating polygons, the polygons must preserve the same number of vertices during the whole animation.

Here’s a simple animation using a circle shape:

See the Pen Animated clip-path by ladybenko (@ladybenko) on CodePen.
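A minimal version of such an animation might look like this (selector and keyframe names are assumptions):

```css
/* The circle's radius animates back and forth; the center stays fixed. */
.pulse {
  clip-path: circle(30% at 50% 50%);
  animation: grow 1s ease-in-out infinite alternate;
}

@keyframes grow {
  to { clip-path: circle(50% at 50% 50%); }
}
```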

And here is another animation using polygon. Note: Even though we are restricted to preserving our set number of vertices, we can “merge” them by repeating the values. This creates the illusion of animating to a polygon with any number of sides.

See the Pen Animated clip-path (polygon) by ladybenko (@ladybenko) on CodePen.
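The vertex-repeating trick can be sketched like this (hypothetical selector): both states declare four vertices, but in the second state two of them coincide, so the diamond appears to morph into a triangle:

```css
.morph {
  clip-path: polygon(50% 0, 100% 50%, 50% 100%, 0 50%); /* diamond: 4 distinct vertices */
  transition: clip-path 0.5s ease-in-out;
}

.morph:hover {
  /* still 4 vertices, but the last two coincide – visually a triangle */
  clip-path: polygon(50% 0, 100% 100%, 0 100%, 0 100%);
}
```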

Note that clip-path also opens new possibilities layout-wise. The following demo uses clipping to make an image more interesting in a multi-column article:

See the Pen Layout example by ladybenko (@ladybenko) on CodePen.

Spicing things up with JavaScript

Clipping opens up cool possibilities. In the following example, clip-path has been used to isolate elements of a site – in this case, simulating a tour/tutorial:

See the Pen tour with clip-path by ladybenko (@ladybenko) on CodePen.

It’s done with JavaScript by fetching the dimensions of an element on the fly, and calculating the distance with respect to a reference container, and then using that distance to update the inset shape used on the clip-path property.
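That computation can be sketched as a pure function (the helper name is an assumption, not taken from the demo; in the browser you would feed it getBoundingClientRect() results):

```javascript
// Given the bounding rectangles of a reference container and a target
// element, compute the inset() shape that reveals only the target.
function insetFor(container, target) {
  const top = target.top - container.top;
  const left = target.left - container.left;
  const right = (container.left + container.width) - (target.left + target.width);
  const bottom = (container.top + container.height) - (target.top + target.height);
  return `inset(${top}px ${right}px ${bottom}px ${left}px)`;
}

// In the page, something along these lines would apply it:
//   overlay.style.clipPath = insetFor(containerEl.getBoundingClientRect(),
//                                     targetEl.getBoundingClientRect());
```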

We can now also dynamically change the clipping according to user input, like in this example that features a “periscope” effect controlled by the mouse:

See the Pen clip-path (periscope) by ladybenko (@ladybenko) on CodePen.

clip-path or mask?

There is a similar CSS property, mask, but it is not identical to clip-path. Depending on your specific use case, you should choose one or the other. Also note that support varies across browsers, and currently Firefox is the only browser that fully supports all the mask features, so you will need to run Firefox 54 to interact with the demos below on Codepen.

Masking can use an image or a <mask> element in an SVG. clip-path, on the other hand, uses an SVG path or a CSS shape.

Masking modifies the appearance of the element it masks. For instance, here is a circular mask filled with a linear gradient:

Linear gradient mask

And remember that you can use bitmap images as well even if they don’t have an alpha channel (i.e., transparency), by tweaking the mask-mode:

mask-mode example
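The two masking ideas above can be sketched in CSS (the class names and the bitmap asset are assumptions):

```css
/* A gradient used as a mask: the element is visible where the mask's
   alpha is opaque and fades out where it becomes transparent. */
.fade-out {
  mask-image: linear-gradient(to bottom, black 60%, transparent);
}

/* A bitmap without an alpha channel, interpreted by luminance:
   white areas of the mask stay visible, black areas are hidden. */
.lum-mask {
  mask-image: url("mask.png"); /* hypothetical asset */
  mask-mode: luminance;
}
```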

The key concept of masking is that it modifies the pixels of an image, changing their values – to the point of making some of them fully transparent.

On the other hand, clipping “cuts” the element, and this includes its collision surface. Check out the following demo showing two identical pictures masked and clipped with the same cross shape. Try hovering over the pictures and see what happens. You will notice that in the masked image the collision area also contains the masked parts. In the clipped image, the collision area is only the visible part (i.e., the cross shape) of the element.

Mask vs clip comparison

Is masking then superior to clipping, or vice versa? No, they are just used for different things.

I hope this post has made you curious about clip-path. Check out the upcoming version of Firefox to try it!

Categorieën: Mozilla-nl planet

Mozilla Addons Blog: Keeping Up with the Add-ons Community

Mozilla planet - fr, 09/06/2017 - 16:00

With the add-ons community spread out among multiple projects and several communication platforms, it can feel difficult to stay connected and informed.

To help bridge some of these gaps, here is a quick refresher guide on our most-used communication channels and how you can use them to stay updated about the areas you care about most.

Announcements

Announcements will continue to be posted to the Add-ons Blog and cross-posted to Discourse.

Find Documentation

MDN Web Docs has great resources for creating and publishing extensions and themes.

You can also find documentation and additional information about specific projects on the Add-ons wiki and the WebExtensions wiki.

Get Technical Help

Join a Public Meeting

Please feel welcome to join any or all of the following public meetings:

Add-ons Community Meeting (every other Tuesday at 17:00 UTC)

Join the add-ons community as we discuss current and upcoming happenings in the add-ons world. Agendas will be posted in advance to the Add-ons > Contribute category on Discourse. See the wiki for the next meeting date and call-in information.

Good First Bugs Triage (every other Tuesday at 17:00 UTC)

Come and help triage good first bugs for new contributors! See the wiki for the next meeting date and call-in information.

WebExtensions API Triage (every Tuesday at 17:30 UTC)

Join us to discuss proposals for new WebExtension APIs. Agendas are distributed in advance to the dev-addons mailing list and the Add-ons > Contribute category on Discourse. See the wiki for the next meeting date and call-in information. To request a new API, please read this first.

Be Social With Us

Get Involved

Check out the Contribute wiki for ways you can get involved.

The post Keeping Up with the Add-ons Community appeared first on Mozilla Add-ons Blog.

Categorieën: Mozilla-nl planet


Andy McKay: Cleaning up intermittents

Mozilla planet - fr, 09/06/2017 - 09:00

Orange Factor robot creates bugs in Bugzilla components when it detects intermittents in the Firefox test suite. Unfortunately it never cleans up after itself. Keeping the bug count in a component manageable really helps me understand what's going on, and the Orange Factor bugs that never get closed don't help.

As I triaged, I noticed a common pattern, which is basically: go look on Brasstacks and see whether the failure has occurred recently. From that came a simple script that looks for intermittents and checks whether each one occurred on Brasstacks in the last 180 days; if not, it closes the bug.

Both Brasstacks and Bugzilla have REST APIs, but in the last week or so Brasstacks went behind Mozilla's internal authentication. To get around that, you need to pass the session cookie and user agent along with any requests.

The resulting script is on Github and closes out a couple of bugs for us each week.

For this script to work, you need a bunch of environment variables: the Brasstacks session, the Brasstacks user agent, the Bugzilla API key and the Bugzilla API token. But this script is written for me, not for your project; you'll probably want to do something different anyway.
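The core decision can be sketched as follows (function and variable names here are assumptions, not taken from the actual script on GitHub):

```python
from datetime import datetime, timedelta

def is_stale(last_failure, now=None, window_days=180):
    """A bug is stale when Brasstacks shows no failure for it within the
    last 180 days, or has never recorded one at all."""
    now = now or datetime.utcnow()
    if last_failure is None:
        return True
    return (now - last_failure) > timedelta(days=window_days)

# Requests to Brasstacks must carry the session cookie and user agent,
# e.g. headers={"Cookie": session, "User-Agent": agent}, now that it
# sits behind Mozilla's internal authentication.
```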

Categorieën: Mozilla-nl planet

Ehsan Akhgari: Quantum Flow Engineering Newsletter #12

Mozilla planet - fr, 09/06/2017 - 08:39

It has been a few weeks since I have given an update about our progress on reducing the amount of slow synchronous IPC messages that we send across our processes.  This hasn’t been because there hasn’t been a lot to talk about; quite to the contrary, so much great work has happened here that for a while I decided it may be better to highlight other ongoing work instead.  But now as the development cycle of Firefox 55 comes to a close, it’s time to have another look at where we stand on this issue.

I’ve prepared a new Sync IPC Analysis for today including data from both JS and C++ initiated sync IPCs.  First bit of unfortunate news is that the historical data in the spreadsheet is lost because the server hosting the data had a few hiccups and Google Spreadsheets seems to really not like that.  Second bit of unfortunate news is that our hopes for disabling the non-multiprocess compatible add-ons by default in Nightly helping with reducing some of the noise in this data don’t seem to have panned out.  The data still shows a lot of synchronous IPC triggered from JS as before, and the lion’s share of it are messages that are clearly coming from add-ons judging from their names.  My guess about why is that Nightly users have probably turned these add-ons back on manually.  So we will have to live with the noise in the data for now (this is an issue that we have to struggle with when dealing with a lot of telemetry data unfortunately; here is another recent example that wasted some time and energy).

This time I won’t give out a percentage based break-down because now after many of these bugs have been fixed, the impact of really commonly occurring IPC messages such as the one we have for document.cookie really makes the earlier method of exploring the data pointless (you can explore the pie chart to get a quick sense of why, I’ll just say that message alone is now 55% of the chart and that plus the second one together form 75% of the data.)  This is a great problem to have, of course, it means that we’re now starting to get to the “long tail” part of this issue.

The current top offenders, besides the mentioned bug (which BTW is still being made great progress on!) are add-on/browser CPOW messages, two graphics initialization messages that we send at content process startup, NotifyIMEFocus that’s in the process of being fixed, and window.open() which I’ve spent weeks on but have yet to fix all of our tests to be able to land my fixes for (which I’ve also temporarily stepped away from, looking for something that isn’t this bug to work on for a little while!).  Besides those, if you look at the dependency list of the tracker bug, there are many other bugs that are very close to being fixed.  Firefox 55 is going to be much better from this perspective and I hope the future releases will improve on that!

The other effort that is moving ahead quite fast is optimizing for Speedometer V2.  See the chart of our progress on AreWeFastYet.com:

Last week, our score on this chart was about 84.  Now we are at about 91.  Not bad for a week’s worth of work!  If you’re curious to follow along, see our tracker bug.  Also, Speedometer is a very JS heavy benchmark, so a lot of the bugs that are filed and fixed for it happen inside SpiderMonkey, so watching the SpiderMonkey specific tracker bug is probably a good idea as well.

It’s time for a short performance story!  This one is about technical debt.  I’ve looked at many performance bugs over the past few months of the Quantum Flow project, and in many cases the solutions have turned out to be just deleting the slow code, that’s it!  It turns out that in a large code base, as code ages, there is a lot of code that isn’t really serving any purpose any more, but nobody discovers this because it’s impractical to audit every single line of code with scrutiny.  But then some of this unnecessary code is bound to have severe performance issues, and when it does, your software ends up carrying that cruft for years!  Here are a few examples: a function call taking 2.7 seconds on a cold startup doing something that became unnecessary once we dropped support for Windows XP and Vista, some migration code that was doing synchronous IO during all startups to migrate users of Firefox 34 and older to a newer version, and an outdated telemetry probe that turned out to not be in use any more, scheduling many unnecessary timers and causing unneeded jank.

I’ve been thinking about what to do about these issues.  The first step is to fix them, which is what we are busy doing now, but finding these issues typically requires some work, and it would be nice if we had a systematic way of dealing with some of them.  For example, wouldn’t it be nice if we had a MINIMUM_WINDOWS macro that controlled all Windows specific code in the tree?  In the case of my earlier example, perhaps the original code would have checked that macro against the minimum version (7 or higher), and when we’d bump MINIMUM_WINDOWS up to 7 along with bumping our release requirements, such code would turn itself into preprocessor waste (hurray!).  But of course, the hard part is finding all the code that needs to abide by this macro, and the harder part is enforcing this consistently going forward!  Some of the other issues aren’t possible to deal with this way, so we need to work on getting better at detecting these issues.  Not sure, definitely some food for thought!
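The idea could look something like this sketch (the macro and function names are assumptions, not actual Gecko code):

```cpp
// Gate Windows-version-specific fallbacks on one tree-wide macro, so
// that raising the minimum supported OS version compiles them out.
#define MINIMUM_WINDOWS 7  // minimum supported major Windows version

bool NeedsXpCompatShims() {
#if MINIMUM_WINDOWS < 7
  return true;   // XP/Vista-era fallback path still in the build
#else
  return false;  // dead branch: the preprocessor discards it entirely
#endif
}
```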

I’ll stop here, and move on to acknowledge the great work of all of you who helped make Firefox faster this past week!  As per usual, apologies to those who I’m forgetting to mention here:

Categorieën: Mozilla-nl planet
