Planet Mozilla - the Dutch Mozilla community
http://planet.mozilla.org/
Updated: 6 days, 19 hours ago

Air Mozilla: Rep. Eshoo Net Neutrality Roundtable

Mon, 19/06/2017 - 18:30

Rep. Eshoo Net Neutrality Roundtable Congresswoman Anna Eshoo (D-CA) will convene a roundtable to discuss the impacts of net neutrality and the consequence of eviscerating the policy. Eshoo will hear...


Carsten Book: Reminder :) Please take part in the Sheriff Survey!

Mon, 19/06/2017 - 16:45

Hi,
just a reminder that our Sheriff Survey is running. Please take part in it; it helps us a lot to improve our work!

Link: https://docs.google.com/a/mozilla.com/forms/d/e/1FAIpQLSfGBZ50zkG9W-Wnk1ACBfFvj1iu8e46I5gs9t-G3ZWDpcy4-A/viewform


thanks!

Tomcat


Cameron Kaiser: TenFourFox FPR1 available

Mon, 19/06/2017 - 07:26
TenFourFox Feature Parity Release 1 is available for testing (downloads, hashes, release notes). There are no major changes from the beta except for a couple of minor efficiency updates and a font blacklist update, and all remaining applicable security issues have been backported as well.

Chris T reported that old issue 72 (a/k/a bug 641597) has resurfaced in FPR1. Most likely this bug was never actually fixed, just wallpapered over by something or other, and the efficiency improvements in FPR1 have made it easier to trigger again. That said, it has only ever manifested on certain 10.5 systems; it has never been reproduced on 10.4 by anyone, and I can't reproduce it myself on my own 10.5 DLSD PowerBook G4. For that reason I'm proceeding with the release as intended but if your system is affected, please post your steps to replicate and we'll compare them with Chris' (especially if you have a 10.4 system, since that will be much easier for me to personally debug). Please also note any haxies or system extensions as the issue can be replicated on a clean profile, meaning addons or weird settings don't appear to be a factor. If we find a fix and enough people are bitten, it should be possible to spin a point release.

The plan is for a Tuesday/Wednesday release ahead of schedule, so advise if there are any new showstoppers.


Emma Irwin: Escaping the economy of souls — starting with Facebook (in 4 steps)

Sun, 18/06/2017 - 18:02

“I think it’s time for a reclamation movement.”

Tim Wu, author of The Attention Merchants, in a talk at Mozilla Toronto last week

A little over two months ago, I removed the web-warping, soul-exploiting goggles of a ‘free’ Facebook account — free as in guinea pig. I lost my best friend and partner to cancer around this time, and Facebook knew that.

I found myself staring at content curated just for me — a TED talk about end-of-life care, cancer foundations, hospital foundations, an ‘inspiring’ story of a boy who survived cancer, and a review of ‘Option B’, Sheryl Sandberg’s book on grief… I had joined her Facebook group, but they knew that too.

And there I was, as if waking in a horror movie to find the vile tentacles of a venomous creature wrapped around me. I saw, witnessed, and felt the cost of free: the cost to my well-being and dignity, and to all those around me — the cost of my attention, focus, and awareness of the world around me.

Was my feed part of an experiment or just really shitty and cruel algorithms? Facebook doesn’t hide the fact that it’s learning from people like me during personal crises. Rather, it publishes reports on the findings.

And probably what upset me the most was that Sheryl Sandberg of Facebook, whose book I liked and shared, and who should be protective of people in grief, was bringing large numbers of people to her Facebook group — so much heartbreak, so much trauma data. And Sheryl is aware…

“However, the company was widely criticised for manipulating material from people’s personal lives in order to play with user emotions or make them sad.

In response on Thursday, Facebook said that it was introducing new rules for conducting research on users with clearer guidelines, better training for researchers and a stricter review process.

But, it did not state whether or not it would notify users — or seek their consent — before starting a study.”

— BBC News “Facebook admits failings over emotion manipulation study”

The reason I write this is to wake you up as well. Although you are likely partially there, you need to get all the way there. Please stumble with me toward some type of reclamation movement; it’s important for humanity (no exaggeration). Facebook, and others in the economy of souls, design addictive technology to keep us there.

I’ve used the same excuses you are using. The spine of Facebook’s business model is your contact list — and this is where reclamation should start.

Dumping Facebook means threatening the ambient contact I have with my 81 year old Aunt whom I love and adore. And that is the problem.

— Clint Lalonde (@clintlalonde) May 28, 2017


Below are the steps I’ve taken to wean myself off Facebook and my contact list off Facebook for good. I want an empowered online life, and I want that for you too.

Step 1 — Snap out of it!

I really hope you don’t have to lose someone close to you, or go through a trauma or tragedy to see the impact of your data being used against you. If you need inspiration watch Tim Wu’s talk and embrace the message that ‘free’ is not free. Read Facebook’s data policy, and remember they never said they would stop doing this.

Step 2 — Get Facebook Messenger, Disable Facebook

Didn’t see that one coming, did you? As much as Messenger annoyed me by being a separate app, what it provides is the ability to fully disable Facebook itself but keep messaging for a transition period — which can be as long as you need it to be. You can still talk to, and share photos with, grandma.

Think of Messenger as nicotine gum for FB addiction. Not great, still being tracked, but will likely get you further than cold turkey.

This one step means you’re unplugged from:

  • Fake News
  • Like/Reactions
  • Mindless feed scrolling
  • Interacting in groups
  • Unsolicited emotional reactions to content

But keep:

  • Messaging
  • Sharing photos
  • Group conversations

And slowly start migrating people to other tools for chat and conversation. Let them know why.

Step 3 — Curate Personal Content

Even though you spent a lot of time reading content on Facebook, chances are you’ve read fake news and crappy clickbait, and remained in a filter bubble of your own opinions. There’s a whole world out there!

  • Subscribe (yes, pay) to actual newspapers with real reporters. I now subscribe to the New York Times, and support local journalism with a subscription as well.
  • Use good tools. I like Flipboard, and I organize all ‘read later’ content into Pocket, which is my go-to for the times I would normally have opened Facebook. Remember, we’re dealing with addiction — replace old habits with new ones.
  • Watch Netflix or read a book. Step away from news and the world and escape. Facebook’s ‘attention theft’ really makes sense to me now that I realize how many extended periods of time are available to me.
  • Follow people unlike yourself on Twitter. I know Twitter has issues, but one thing at a time.

Step 4 — Influence others

I feel like a tiny drop in the ocean, but when people tell me their <insert information thing here> is on Facebook, I tell them I’m not on Facebook and so require another way. I see others doing this too. Even public pages on Facebook are not public — they’re draped in a kind of ‘free membership paywall’ that hides half the page if you’re not logged in.

Facebook groups are not good forums; there are (much, much) better open-source options. Suggest alternatives.

Tell people why you’re not on Facebook — not in an arrogant way, but in an ‘I quit smoking because my kids need me to live’ kind of way that makes people reflect on their own health.

Public Pages are trapped in a ‘Free Membership Paywall’

Step 5 — Turn off Facebook Messenger

Turn off Facebook Messenger. I haven’t done this yet, but I am using it less and less. I probably use it three times a week, for people I haven’t moved over to other communication channels yet.

Go explore the web again.

Go reclaim the web.

Feature photo by Marco Gomes, under an Attribution-NonCommercial-ShareAlike license.

Cross-posted to Medium.



Daniel Stenberg: curl doesn’t spew binary anymore

Sat, 17/06/2017 - 00:00

One of curl’s least-loved habits over the years, I’ve been told, is that when users forget to instruct the command line tool where to store the downloaded file, curl sends a lot of binary “gunk” to the terminal as a direct consequence. The end result is at best just a busload of weird-looking characters on the screen, but with just a little bit of bad luck it can also lock up the terminal completely or change it in other ways.

Starting in curl 7.55.0 (from this commit), curl will inspect the beginning of each download that is about to be sent to the terminal (tty!) and attempt to detect raw binary output and prevent it from being sent there. The check simply looks for a binary zero in the data.

$ curl https://example.com/image.jpg
Warning: Binary output can mess up your terminal. Use "--output -" to tell curl to output it to your terminal anyway, or consider "--output <FILE>" to save to a file.
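
The heuristic itself is deliberately tiny. Here is a minimal sketch of the idea in C (an illustration only, not curl’s actual code):

/* Treat the start of a download as binary if it contains a NUL byte. */
#include <stddef.h>
#include <string.h>

static int looks_binary(const char *buf, size_t len)
{
  /* memchr returns a pointer to the first zero byte, or NULL if none */
  return memchr(buf, 0, len) != NULL;
}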

As the warning message says, there’s an option to switch off this emergency check for when you truly know what you’re doing and don’t need curl to prevent you from doing this. Then you just tell curl explicitly that you want the output to stdout, with "--output -" (or "-o -" for a shorter version):

$ curl -o - https://example.com/binblob.img

We’re eager to get your input and feedback on how this works. We are aware of the risk of false positives for UTF-16 and UTF-32 outputs, but we think they are rare enough to not make this a huge problem.

This feature should be able to drastically reduce the risk of this happening.

Pipes

(Update, added after the initial posting.)

So many have remarked or otherwise asked how this affects when stdout is piped into something else. It doesn’t affect that! The whole point of this check is to only show the warning message if the binary output is sent to the terminal. If you instead pipe the output to another program or if you redirect the output with >, that will not trigger this warning but will instead continue just like before. Just like you’d expect it to.
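
For example, neither of these triggers the warning; both behave just like before (file names invented for illustration):

$ curl https://example.com/image.jpg > image.jpg
$ curl https://example.com/image.jpg | wc -c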


Air Mozilla: Webdev Beer and Tell: June 2017

Fri, 16/06/2017 - 20:00

June 2017: Once a month, web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...



Will Kahn-Greene: The Soloists

Fri, 16/06/2017 - 18:00

Building Firefox is a big endeavor. There are many teams and projects covering initiatives, maintenance, bug fixing, triage, localization, support, understanding feedback, marketing, communication, releasing, supporting infrastructure, crash analysis, and a bazillion other activities all to build a family of browsers and applications.

Teams and projects aren't static. People move around as priorities change and the landscape shifts and projects complete or are scuttled.

Sometimes projects get started up with a single person. Sometimes all the people except one move off a project. Sometimes we find ourselves working alone, in a basement office, with only a stapler equivalent to keep us company.

We are the soloists. You wouldn't believe the list of things we work on. Alone.

Where to find soloists: IRC, Slack

There's an IRC channel #soloists on irc.mozilla.org.

There's also a Slack channel #soloists on the Mozilla Slack [1].

These two places (and whatever other places soloists want to hang out at) are places where we can:

  • find some solace from the weary drudgery of being alone on our projects for days on end
  • ask for help
  • bounce ideas off each other
  • vent frustrations in a friendly forgiving place
  • get advice on dealing with things like code reviews and how to go on vacation
  • get recognition for a job well done

and a variety of other things that alleviate many of the problems we have as soloists.

[1] I just created it, so it's kind of empty. I'm feeling alone in the #soloists Slack channel. So alone.

Stickers at the All Hands!

Over the last month or so, we spent some time figuring out #soloists stickers because we like stickers and you like stickers and everyone likes stickers.

They look like this:


Soloist 2017 sticker.

They're 2" by 2" and round. They're warm to the touch. They make you want to climb things. By yourself. Alone. With appropriate safety gear. [2]

If you're a soloist, come find one of us and get a sticker. Also, consider joining soloist channels.

If you support soloists, come find one of us and get a sticker. Ask us about the things we're working on. We may be solo, but we're working on real projects that almost certainly affect you. As a group, we did great things in the last 6 months. Alone. So alone.

[2]That's how they make me feel, anyhow.

Daniel Stenberg: curl: read headers from file

Fri, 16/06/2017 - 09:20

Starting in curl 7.55.0 (since this commit), you can tell curl to read custom headers from a file. It’s a feature that has been asked for numerous times in the past, and the answer has always been to write a shell script to do it. Like this:

#!/bin/sh
while read line; do
  args="$args -H '$line'"
done
curl $args $URL

That answer is now a thing of the past (except for users stuck on old curl versions). We can now instead tell curl to read the headers itself from a file, using curl’s standard @filename convention:

$ curl -H @headers https://example.com
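
For illustration, the headers file simply contains one header per line; these names and values are invented:

X-Site-Version: beta-2
Color: red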

… and this also works if you want to just send custom headers to the proxy you do CONNECT to:

$ curl --proxy-headers @headers --proxy proxy:8080 https://example.com/

(this is a pure curl tool change that doesn’t affect libcurl, the library)


Ehsan Akhgari: Quantum Flow Engineering Newsletter #13

Fri, 16/06/2017 - 07:11

I’m back with some more updates on another week worth of work on improving various performance aspects of Firefox.

Similar to the past weeks, Speedometer remains a big focus area for performance work. In addition to the many already identified bugs to work on, we are also still measuring the benchmark quite actively, looking for more optimization opportunities.

Another item worthy of an update is Background Hang Reports.  Michael Layzell earlier today enabled collection of native stack traces on Win64 (and Mac) using the Gecko Profiler stack walking backend (Linux support soon to follow).  Because we are now using the Gecko Profiler backend for BHR, we can soon get interleaved native and pseudo-stacks from BHR similar to the ones that we have come to know and love in Gecko Profiler for a long time now!  Also, Doug Thayer has made a lot of progress on hangs.html, his front-end for exploring the native stack traces uploaded from BHR.  This is a nice and super fast tool to explore the hangs that our users are experiencing on the Nightly channel and it shows you the corresponding pseudo-stacks that are extremely helpful if for example the hang is coming from chrome-privileged JS (where we get full call stack information through telemetry).  Please have a look, and send him feedback.

This edition is exceptionally short, but the most interesting part is probably the last part anyway: the credits section, where I acknowledge the hard work of the people who worked on improving the performance of Firefox in the past week. So let’s get to that, and I do hope I’m not dropping any names:


Mike Hommey: Announcing git-cinnabar 0.5.0 beta 2

Fri, 16/06/2017 - 01:12

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 1?
  • Enabled support for clonebundles for faster clones when the server provides them.
  • Git packs created by git-cinnabar are now smaller.
  • Added a new git cinnabar upgrade command to handle metadata upgrade separately from fsck.
  • Metadata upgrade is now significantly faster.
  • git cinnabar fsck is also faster.
  • Both now also use significantly less memory.
  • Updated git to 2.13.1 for git-cinnabar-helper.

Daniel Stenberg: curling over HTTP proxy

Fri, 16/06/2017 - 00:09

Starting in curl 7.55.0 (this commit), curl will no longer try to ask HTTP proxies to perform non-HTTP transfers with GET, except for FTP. For all other protocols, curl now assumes you want to tunnel through the HTTP proxy when you use such a proxy and protocol combination.

Protocols and proxies

curl supports 23 different protocols right now, if we count the S-versions (the TLS based alternatives) as separate protocols.

curl also currently supports seven different proxy types that can be set independently of the protocol.

One type of proxy that curl supports is a so-called “HTTP proxy”. The official HTTP standard includes a defined way to speak to such a proxy and ask it to perform the request on behalf of the client. curl supports using that over either HTTP/1.1 or HTTP/1.0, where you’d typically only use the latter version if the first really doesn’t work with your ancient proxy.

HTTP proxy

All that is fine and good. But HTTP proxies were really only defined to handle HTTP, and to some extent HTTPS. When doing plain HTTP transfers over a proxy, the client will send its request to the proxy like this:

GET http://curl.haxx.se/ HTTP/1.1
Host: curl.haxx.se
Accept: */*
User-Agent: curl/7.55.0

… but for HTTPS, which should provide end to end encryption, a client needs to ask the proxy to instead tunnel through the proxy so that it can do TLS all the way, without any middle man, to the server:

CONNECT curl.haxx.se:443 HTTP/1.1
Host: curl.haxx.se:443
User-Agent: curl/7.55.0
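
A proxy that accepts the request typically answers with a plain 200 status line, along these lines (a sketch of a typical response):

HTTP/1.1 200 Connection established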

When successful, the proxy responds with a “200”, which means that the proxy has established a TCP connection to the remote server the client asked it to connect to, and the client can then proceed and do the TLS handshake with that server. When the TLS handshake is completed, a regular GET request is then sent over that established and secure TLS “tunnel” to the server. That GET request then looks like one sent without a proxy:

GET / HTTP/1.1
Host: curl.haxx.se
User-Agent: curl/7.55.0
Accept: */*

FTP over HTTP proxy

Things get more complicated when trying to perform transfers over the HTTP proxy using schemes that aren’t HTTP. As already described above, HTTP proxies are basically designed only for doing HTTP over them, but as they have this concept of tunneling through to the remote server it doesn’t have to be limited to just HTTP.

Also, historically, for decades people have deployed HTTP proxies that recognize FTP URLs, and transparently handle them for the client so the client can almost believe it is HTTP while the proxy has to speak FTP to the remote server in the other end and convert it back to HTTP to the client. On such proxies (Squid and Apache both support this mode for example), this sort of request is possible:

GET ftp://ftp.funet.fi/ HTTP/1.1
Host: ftp.funet.fi
User-Agent: curl/7.55.0
Accept: */*

curl knows this, and if you ask curl for FTP over an HTTP proxy, it will assume you have one of these proxies. It should be noted that this method of course limits what you can do FTP-wise; for example, FTP upload usually does not work, and if you ask curl to do an FTP upload over an HTTP proxy it will do that with an HTTP PUT.

HTTP proxy tunnel

curl features an option (--proxytunnel) that lets the user forcibly tell the client not to assume that the proxy speaks this protocol, and to instead use the CONNECT method to establish a tunnel through the proxy to the remote server.
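
For example, forcing a tunnel for an FTP transfer could look like this (the proxy host name is invented):

$ curl --proxytunnel -x proxy.example:8080 ftp://ftp.funet.fi/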

It should of course be noted that very few deployed HTTP proxies in the wild allow clients to CONNECT to whatever port they like. HTTP proxies tend to only allow connecting to port 443 as that is the official HTTPS port, and if you ask for another port it will respond back with a 4xx response code refusing to comply.

Not HTTP not FTP over HTTP proxy

So HTTP, HTTPS and FTP are sent over the HTTP proxy fine. That leaves us with nineteen more protocols. What happens with them when you ask curl to perform them over an HTTP proxy?

Now we have finally reached the change that has just been merged in curl and changes what curl does.

Before 7.55.0

curl would send all protocols as a regular GET to the proxy if asked to use an HTTP proxy without the explicit proxy-tunnel option. This came from how FTP was done and grew from there without many people questioning it. Of course it wouldn’t ever work, but also very few people would actually attempt it because of that.

From 7.55.0

All protocols that aren’t HTTP, HTTPS or FTP will enable the tunnel-through mode automatically when an HTTP proxy is used. No more sending funny GET requests to proxies when they won’t work anyway. Also, it will prevent users from accidentally leaking credentials to proxies that were intended for the server, which previously could happen if you omitted the tunnel option with a few authentication setups.

HTTP/2 proxy

Sorry, curl doesn’t support that yet. Patches welcome!


Justin Dolske: Photon Engineering Newsletter #6

Thu, 15/06/2017 - 22:34

More exciting progress this week! Here’s Photon update #6!

New Menus

Work on the new Photon menus has reached the point where we’re ready to turn them on by default (for Nightly). Bug 1372309 is tracking the last remaining work (mostly test fixes), and you should see this happen in tomorrow’s Nightly. Up until now you’ve needed to manually enable the “browser.photon.structure.enabled” pref to play with the new menus – you’ll no longer need to flip that pref as it will already be enabled.

The biggest change you’ll notice is that the application menu (a.k.a. the “hamburger menu”) contents look different. Instead of a grid of icons, it’s a linear list of commands. Opening the menu and entering submenus is much snappier than before. Here’s the new look on Windows 10 (left) and macOS (right):

menus

The overflow menu (under the “>>” icon) has existed for a long time now; normally it’s only shown when the window is so narrow that we run out of space to show all the toolbar icons. You can now pin items to it permanently, as the new destination for commands you want easily accessible without taking up toolbar space. (Previously you could do this by adding items to the hamburger menu. That’s no longer customizable.)

overflows

There are also some minor related changes to Customization Mode, which now shows the overflow menu as a customization target instead of the old hamburger menu.

Recent changes

Menus/structure:

  • Enabling the new menus, as mentioned above.
  • The sidebar toolbar button no longer has a panel dropdown, instead it just toggles the display of the sidebar (you can change which sidebar is shown from inside the sidebar itself).
  • Various smaller styling/polish fixes to the different panels and toolbar items have landed and will continue to land this week.
  • WebExtension browser actions will now be pinned to the overflow panel instead of the hamburger menu (though we are aware of at least one remaining issue with this).


Animation:

  • The Photon-themed download icon landed, this was spun out of the main download animation bug to start landing pieces as they’re ready.
  • Work continues on animations for downloads toolbar button, stop/reload button, and page loading indicator. We’re working through some performance issues with the latter two — these animations are triggered during our performance test suites, and we see some impact to the measurements.
  • New arrow-panel animations are underway. We’re updating the way panels and menus animate when they’re opened and closed. On macOS we’re temporarily removing the current animation entirely, while we await platform improvements that allow us to get the effect we want in a way that performs well.


Preferences:

  • QA sign-off received for the old preferences shipping in Firefox 55 (which have not been the default on Nightly since the new preference reorg landed).
  • Search followups are largely complete, and we are enabling the search feature this week.


Visual redesign:

  • We got some good contributions from community member UK92! Thanks!
    • Updated two of our in-content pages (about:about and about:rights) to use the new Photon style.
    • With maximized windows on Windows 10, the window control buttons now span the entire height of the tabstrip, eliminating a small gap.
  • Landing updates to the sidebar styling (header and search box)
  • Updated the Synced Tabs button icon in the toolbar.
  • Starting work on changing the color of the titlebar on macOS (making it darker, similar to Windows 10).


Onboarding:

  • Lots of discussion and decisions, finalized scope and content for Firefox 56 tour.
  • De-scoped automigration, and are instead moving ahead with a manual import option accessible from the new Activity Stream page.
  • Simplified tour and notification logic
  • Outstanding technical issues have been resolved, and some of the Firefox 56 tour content is ready to land this week. No more blank tour overlay in Nightly!


Performance:


Stay tuned for more updates next week!



Chris McDonald: Message Broker: Channel Naming

Thu, 15/06/2017 - 19:04

I’ve started building a message broker as a learning project. There are several out there, such as RabbitMQ, Kafka, Redis’ pub/sub layer, and some brokerless message queue solutions like 0mq. Having used many of them over the years and having studied the topic both from the classic “Enterprise Integration” side and the more modern/agile “Microservices” side, I figured I’d try my hand at implementing one.

In this blog post, I’m going to go over how I designed and implemented channel naming. Channel names fill the role of data descriptor used by the publishers and the role of query language for subscribers. This turned out to be a delightful series of problems to explore. Questions such as “how do people use them”, “what features are expected”, “what limitations are common” came up. Realizing that my message broker is essentially providing a naming framework that will have at least some opinion, I needed to ask “are there practices I want to encourage or discourage” and recognize my influence.

In almost all message queue systems, channels have a string based name. This name may be broken up by delimiters, and that delimiter is sometimes chosen by the client or in a config on the broker. Most support the ability to wild card parts of the channel when subscribing. Sometimes the wild card is purely string based, in delimited channels often the wild card is done by chunk. Wild cards are sometimes restricted in what positions they can be in, almost always allowed at the end, sometimes in the middle and sometimes disallowed at the start.

When deciding between these I also wanted to keep in mind what solutions are relatively easy to code, straightforward to debug, and allow for acceptable performance. Not allowing wild cards at all means that simple string comparison works, but it omits a common feature. Using simple strings and a well-known text query setup like regular expressions would give huge amounts of flexibility in selecting what messages you’d like, but it comes with a large performance cost, pulls in libraries, and is much harder to spot-debug. I decided that since I’m not going to use a common query language, I’d want to make the names as simple as possible to parse.

A Rope is a data structure I’d heard about a few times and I knew it had something to do with making string operations faster, but I was hazy on the details. Since this is a project without deadlines, I set about reading a paper on them to learn more. Shortly into the paper it was clear this wasn’t the solution for me: ropes are designed for larger bodies of text, manipulating that text in a variety of ways, and making common editor features easier to implement. But it did share the insight that part of the problem is breaking down the text, and part of it is comparing text.

Since channel names are often a hierarchy that uses the delimiter to separate the layers, I split the name using that delimiter into an array. But that then left me with a bunch of smaller strings I had to compare, which seemed much slower than just walking through the name and query once, using the wild cards to skip characters. To avoid a char-by-char parse on every channel name comparison, I drew on the common native-layer practice of hashing strings and then comparing the hashes. Hashing has an up-front cost of processing the string into a numeric form, but then makes comparisons extremely fast. Since a message’s channel would be used to query for appropriate subscribers, it could be hashed once and compared many times. An unfortunate side effect of hashing the substrings, though: I wouldn’t be able to allow partial segment matches. The wild card would be all or nothing in a given position: just a.*, not a.b*.

I decided that this side effect of only allowing wild cards at the segment layer was ultimately a good thing. While there may be transition times where partial segment matching would help, disallowing it keeps people from breaking the idea that each segment is a complete chunk of data. Similar reasoning is why the only selectors are exact and wild card: no numeric or alpha-only sorts of selection.

After all this research, thinking, note taking, and general meandering through the computer science fields, I started to implement my solution in Rust. My message broker isn’t actually ready for the channel names yet, as I’m still working on how to manage connections and properly do the various forms of store and forward. But this problem tickled me so I set about solving it anyway. I created a file channel.rs to try building these ideas.

I started with a basic struct with the string form of the name for debugging, and a hashed form of the name for comparisons.

struct Channel {
    raw_name: String,
    hashed_name: Vec<Option<u64>>,
}

raw_name is a String so the struct can own the string without any lifetime concerns of a &str; this value will mostly be used for debugging or admin purposes. The hashed_name is a Vec so it can be variably sized; currently I have no limit on the number of delimiters you can use. Option is what I used to handle the wild card. If it was Some(u64) then you have a hash to compare. If it was None then it was a wild card and you don’t have to compare it. After thinking harder though, I realized that I didn’t want the binary Option as my indicator for whether to use a wild card or not. If I added a new type of wild card, for instance one that allowed any number of segments, I’d have to replace my Option usage everywhere. So instead I’ve preemptively changed to using my own ChannelSegment type like so:

struct Channel {
    raw_name: String,
    hashed_name: Vec<ChannelSegment>,
}

enum ChannelSegment {
    Wild,
    Hash(u64),
}

Next, I set about parsing the input. I knew that hard coding strings in tests meant that I wanted to have a from_str variant. People will often have hard coded channel names but there will also be generated ones, and for that allowing a from_string is nice. I also knew I was going to turn it into a String anyway to assign to raw_name. So I did the following to enable both:

pub fn from_str(input: &str) -> Result<Channel, String> {
    Channel::from_string(input.to_owned())
}

pub fn from_string(input: String) -> Result<Channel, String> {
    // parsing code goes here
}

Parsing was a pretty simple matter. Using String.split(char) to get an iterator returning each segment. Then relying on Rust’s pattern matching for cases I specifically cared about matching, like empty string (which I made into an error to prevent mistakes) and "*" which is the wild card character. Building up the hashed name, then returning it.

let mut hashed_name = Vec::new();
for chunk in input.split('.') {
    match chunk {
        "" => {
            return Err("empty entry is invalid".to_owned());
        }
        "*" => {
            hashed_name.push(ChannelSegment::Wild);
        }
        _ => {
            hashed_name.push(ChannelSegment::Hash(calculate_hash(&chunk)));
        }
    }
}

Ok(Channel {
    raw_name: input,
    hashed_name: hashed_name,
})

One could easily see using an LRU cache (Least Recently Used cache, which removes the oldest entries when it gets too full) to skip parsing the channel name for the most commonly used channels, but I’m not doing that until this proves to be a part that is slowing me down.

To compare two Channels I added a .matches(&Channel) method. I decided against implementing PartialEq since I wouldn’t be testing for an exact match but rather a match that considers wild cards, and most developers expect a more exact match when using ==.

pub fn matches(&self, other: &Channel) -> bool {
    if self.hashed_name.len() != other.hashed_name.len() {
        return false;
    }
    for (a, b) in self.hashed_name.iter().zip(&other.hashed_name) {
        if let (&ChannelSegment::Hash(inner_a), &ChannelSegment::Hash(inner_b)) = (a, b) {
            if inner_a != inner_b {
                return false;
            }
        }
    }
    true
}

Since my only wild card is for a single whole segment, I know I can immediately return if they have different lengths. Then I zip the two hashed_name together so I can iterate through them at the same time. In other languages one would commonly just create an integer, increment that, and use it to index into both of the sequences at the same time, making sure not to walk past the end ourselves. In Rust we rely on iterators to give us fast access to our sequences with minimal checking, keeping us safe against others (or ourselves) mutating the sequences in dangerous ways while iterating. Using zip we can create an iterator that walks both sequences at the same time, keeping us in our land of safety and speed.

As a side note, the calculate_hash function is one I pulled from the docs but changed a little bit to fit my style better:

fn calculate_hash<T: Hash>(t: &T) -> u64 {
    let mut hasher = DefaultHasher::new();
    t.hash(&mut hasher);
    hasher.finish()
}
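
To make the matching rules concrete, here is a hypothetical usage sketch (the channel names are invented):

let subscription = Channel::from_str("metrics.*.cpu").unwrap();

// The wild card stands in for exactly one whole segment...
assert!(subscription.matches(&Channel::from_str("metrics.host42.cpu").unwrap()));

// ...and a name with a different number of segments never matches.
assert!(!subscription.matches(&Channel::from_str("metrics.host42.disk.cpu").unwrap()));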

A feature of Rust I really enjoyed while developing this was the ability to write tests in the same file as I went. I could quickly write a few examples then make the code pass, focusing on just the higher-level parts of the API and not testing the internals so much. Think more “this errors, this does not error” and less “this returns a vector of integers that are ascending…”. Writing tests this way makes sure you can quickly change out the implementation while the idea keeps working. Here are the tests I wrote as I was developing:

#[test]
fn create_basic_channel() {
    Channel::from_str("a.b.c").unwrap();
    Channel::from_str("name").unwrap();
}

#[test]
fn create_with_wildcard() {
    Channel::from_str("a.*.b").unwrap();
    Channel::from_str("*").unwrap();
    Channel::from_str("*.end").unwrap();
    Channel::from_str("start.*").unwrap();
}

#[test]
fn create_invalid() {
    assert!(Channel::from_str(".a.b").is_err());
    assert!(Channel::from_str("c.b.").is_err());
    assert!(Channel::from_str("g.l..b").is_err());
    assert!(Channel::from_str("").is_err());
}

#[test]
fn matches_with_self_exact() {
    let channel = Channel::from_str("a.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("a").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("dabbling.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("abba.bobble").unwrap();
    assert!(channel.matches(&channel));
}

#[test]
fn not_matches_exact() {
    let channel_full = Channel::from_str("s.t.r").unwrap();
    let channel_sub = Channel::from_str("s.t").unwrap();
    assert!(!channel_full.matches(&channel_sub));
    assert!(!channel_sub.matches(&channel_full));
}

#[test]
fn matches_with_self_wild() {
    let channel = Channel::from_str("a.*.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("*").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("*.b.c").unwrap();
    assert!(channel.matches(&channel));
    let channel = Channel::from_str("abba.*").unwrap();
    assert!(channel.matches(&channel));
}

#[test]
fn matches_wild_card_one_side() {
    let wild_channel = Channel::from_str("alpha.beta.*").unwrap();
    let tame_channel = Channel::from_str("alpha.beta.charlie").unwrap();
    assert!(wild_channel.matches(&tame_channel));
    assert!(tame_channel.matches(&wild_channel));
}

Mostly mundane stuff: values all quickly typed in to ensure the general concept works, and common edge cases in parsing (start, end, middle) are covered. But note this isn’t combinatorial testing; there’s no Unicode or other trouble points covered yet, and those sorts of tests can come with time. Being able to add a new test quickly when I noticed an edge case is easily one of my favorite features of Rust.

I’ll be open sourcing my work in progress soon, but for now it mostly lives in my notebook as some scribbles, and a smattering of mostly disconnected code on my laptop. If you have feedback on this blog post, how I’ve set up channels in my message broker, or generally about my Rust code, I’d love to hear it in the comments below. Before the comments roll in though: I know using Strings for errors is not great, I just don’t have an error system set up in my project yet.



Hacks.Mozilla.Org: Network Monitor Reloaded (Part 1)

Thu, 15/06/2017 - 18:07

The Network Monitor tool has been available in Firefox since the earliest days of Firefox Dev Tools. It’s an invaluable tool for anyone who cares about page load performance and fast modern web pages. This tool went through extensive refactoring recently (under the project codename Netmonitor.html), and this post is intended as an explanation of how we designed the new architecture and what cool new technologies we used.

See the Network Monitor running inside the Firefox Developer Toolbox:

Goals

One of the main goals of the refactoring was to rebuild the entire tool on top of standard web technologies. We removed all Firefox-specific legacy code like XUL (XML User Interface Language), as well as code that used Firefox-specific APIs. This is a great step forward, since using web standards now allows you to run the entire tool’s code base in two different environments:

  • The Developer Toolbox
  • Any web page

The first case is well known to anyone who’s familiar with Firefox Developer Tools (see also the screenshot above). The Developer Toolbox can easily be opened at the bottom of a browser window with various tools, Network Monitor included, at your fingertips.

The second use case is new. Now the tool can be loaded within a browser tab just like any other standard web application. See how it looks in the next screenshot:

Note that the page is loaded from localhost:8000. This is where the development server is running.

The ability to run the tool as a web app is a big deal! Now we can use all in-browser tools for the development workflow. Although it was possible to use DevTools to debug DevTools before (with the Browser Toolbox), it is now so much easier and more convenient to simply use the in-browser tools. And of course, we can also load the tool in other browsers. The development is also simpler since we don’t have to build Firefox. Instead, a simple tab-refresh is enough to get Network Monitor reloaded and test your code changes.

Architecture

We’ve built the new Network Monitor front-end on top of several standard web technologies, described below.

Firefox Developer Tools need complex UI features and we are using the popular React & Redux combo for all of our tools to build a clean and consistent code base. The Network Monitor is no exception. We’ve implemented a set of React components that are responsible for rendering the view (UI), a store with all data collected by HTTP interception and finally a set of actions the user might want to execute.

We’ve also changed the way we write tests. Instead of using the Firefox specific test harness we are slowly shifting towards well known libraries like Mocha and Enzyme. This way it is easier to understand our code base and also contribute to it.

We are using Webpack to build a bundle when running inside a web page. The bundle is consequently served through localhost:8000.

The general architecture is based on a flow introduced in the React & Redux concept.

  • The root component representing the NetMonitorApp can be rendered within Developer Toolbox or a web page.
  • Actions are responsible for things like filtering, clearing the list of requests, sorting and opening a side panel with detailed information.
  • All of our data, including all the collected data about HTTP traffic, is stored within a store object.
New Features

We’ve been focused mostly on codebase refactoring, but there were some new features/UI improvements implemented along the way as well. Let’s see some of them.

Column Picker

There are new columns with additional information about individual requests and the user can use the context menu to select those that are important.

Summary Data

We’ve implemented a better summary for currently displayed requests in the list. It’s now located at the bottom of the panel.

  • Number of requests in the list
  • Size/transferred size of all requests
  • Total time needed to load all requests
  • Time when DomContentLoaded event occurred
  • Time when load event occurred
Filtering By Properties

The existing Filter UI is now a lot more powerful. It’s possible to filter the list of requests according to various properties. For example, you can type: larger-than:50 into the Filter input box to see only those requests that are larger than 50 bytes.

Read more about filtering by properties on MDN.

Learn More in MDN

There are links in many places in the UI pointing to MDN for more information. For example, you can quickly learn how various HTTP headers are used.

Conclusion

We believe that building the new generation of Firefox Developer Tools on top of web standards is the right way to go since it means the tools can run in different environments and integrate more effectively with other projects (e.g., IDEs). Building on web standards makes many things possible: Now we can also think about shipping our tools as an online web service that can benefit from the internet platform. We can share collected data as well as debugging context across the web, opening doors to a real social debugging world.

The Netmonitor.html team has done a tremendous amount of work on the refactoring. Big thanks to the core team:

  • Ricky Chien
  • Fred Lin

But there have been many external contributors as well:

  • Jaroslav Snajdr
  • Leonardo Couto
  • Tim Nguyen
  • Deepjyoti Mondal
  • Locke Chen
  • Michael Brennan
  • Ruturaj Vartak
  • Vangelis Katsikaros
  • Adrien Enault
  • And many more…

Let us know what you think. You can join us on the devtools-html Slack.

Jan ‘Honza’ Odvarko

Read next: Hacking on the Network Monitor


Hacks.Mozilla.Org: Hacking on the Network Monitor Developer Tool (Part 2)

Thu, 15/06/2017 - 18:06

In the previous post, Network Monitor Reloaded, we walked through the reasoning for refactoring the Network Monitor tool. We also learned that using web standards to build Dev Tools enables us to run them in different environments – loaded either within the Firefox Developer Toolbox or within a browser tab as a standard web application.

In this companion article, we’ll show you how to try these things and see the Network Monitor in action.

Get to the Source

The Firefox Developer Tools code base is currently part of the Firefox source repository, so downloading it requires downloading the entire repo. There are several ways to get the source code and work on it. You might want to start with our GitHub docs for detailed instructions.

One option is to use Mercurial and clone the mozilla-central repository to get a local copy.

# This may take a while...
hg clone https://hg.mozilla.org/mozilla-central
cd mozilla-central

Part of our strategy to use web standards to build tools for the web also involves moving our code base from Mercurial to Git (on github.com). So, ultimately, the way to get the source code will change permanently, and it will be easier and faster to clone and work with.

Run Developer Toolbox

For now, if you want to build the Network Monitor and run it inside the Firefox Developer Toolbox, follow these detailed instructions.

Essentially, all you need to do is use the mach command.

cd mozilla-central
./mach build

After the build is complete, start the compiled binary and open the Developer Toolbox (Tools -> Web Developer -> Toggle Tools).

You can rebuild quickly after making changes to the source code as follows:

./mach build faster

Run Development Server

In order to run the Net Monitor inside a web page (experimental), you’ll need Node.js and Yarn installed.

We’ve developed a simple container that allows running Firefox Dev Tools (not only the Network Monitor) inside a web page. This is called Launchpad. The Launchpad is responsible for making a connection to the instance of Firefox being debugged and loading our Network Monitor tool.

The following diagram depicts the entire concept:

  • The Net Monitor tool (client) is running inside a Browser tab just like any other standard web application.
  • The app is served by the development server (server) through localhost:8000
  • The Net Monitor tool (client) is connecting to the target (debugged) Firefox instance through a WebSocket.
  • The target Firefox instance needs to listen on port 6080 to allow the WebSocket connection to be created.
  • The development server is started using yarn start

Let’s take a closer look at how to set up the development environment.

First we need to install dependencies for our development server:

cd mozilla-central
cd devtools/client/netmonitor
yarn install

Now we can run it:

yarn start

If all is ok, you should see the following message:

Development Server Listening at http://localhost:8000

Next, we need to listen for incoming connection in the target Firefox browser we want to debug. Open Developer Toolbar (Tools -> Web Developer -> Developer Toolbar) and type the following command into it. This will start listening so tools can connect to this browser.

listen 6080

The Developer Toolbar UI should be opened at the bottom of the browser window.

Finally, you can load localhost:8000

You should see the Launchpad user interface now. It lists the opened browser tabs in the target Firefox browser. You should also see that one of these tabs is the Launchpad itself (the last net monitor tab running from localhost:8000).

All you need to do is to click one of the tabs you want to debug. As soon as the Launchpad and Network monitor tools connect to the selected browser tab, you can reload the connected tab and see a list of HTTP requests.

If you change the underlying source code and refresh the page you’ll see your changes immediately.

Check out the following screencast for a detailed walk-through of running the Network monitor tool on top of the Launchpad and utilizing the hot-reload feature to see code changes instantly.

You might also want to read mozilla-central/devtools/client/netmonitor/README.md for more detailed info about how to build and run the Network Monitor tool.

Future Plans

We believe that building tools for the web using standard web technologies is the right way to go! Our tools are for web developers. We’d like you to be able to work with our tools using the same skills and knowledge that you already apply when developing web apps and services.

We are planning many more powerful features for Firefox Dev Tools, and we believe that the future holds a lot of exciting things. Here’s a teaser for what’s ahead on the roadmap.

  • Connecting to Chrome
  • Connecting to NodeJS
  • Integration with existing IDEs

Stay tuned!

Jan ‘Honza’ Odvarko


Air Mozilla: Reps Weekly Meeting Jun. 15, 2017

Thu, 15/06/2017 - 16:00

Reps Weekly Meeting Jun. 15, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.



Ryan Harter: Bad Tools are Insidious

Thu, 15/06/2017 - 09:00

This is my first job making data tools that other people use. In the past, I've always been a data scientist - a consumer of these tools. I'm learning a lot.

Last quarter, I learned that bad tools are often hard to spot even when they're damaging productivity. I sum this up by saying that bad tools are insidious. This may be obvious to you but I'm excited by the insight.

Bad tools are hard to spot

I spent some time working directly with analysts building ETL jobs. I found some big usability gaps with our tools and I was surprised I wasn't hearing about these problems from our analysts.

I looked back to previous jobs where I was on the other side of this equation. I remember being totally engrossed in a problem and excited to find a solution. All I wanted were tools good enough to get the job done. I didn't care to reflect on how I could make the process smoother. I wanted to explore and iterate.

When I dug into analyses this quarter, I had a different perspective. I was working with the intention of improving our tools and the analysis was secondary. It was much easier to find workflow improvements this way.

In The Design of Everyday Things, Don Norman notes that users tend to blame themselves when they have difficulty with tools. That's probably part of the issue here as well.

Bad tools hurt

If our users aren't complaining, is it really a problem that needs to get fixed? I think so. We all understand that bad tools hurt our productivity. However, I think we tend to underestimate the value of good tools when we do our mental accounting.

Say I'm working on a new ETL job that takes ~5 minutes to test by hand but ~1 minute to test programatically. By default, I'd value implementing good tests at 4 minutes per test run.

This is a huge underestimate! Testing by hand introduces a context shift, another chance to get distracted, and another chance to fall out of flow. I'll bet a 5 minute distraction can easily end up costing me 20 minutes of productivity on a good day.

Your tools should be a joy to use. The better they work, the easier it is to stay in flow, be creative, and stay excited.

In Summary

Don't expect your users to tell you how to improve your tools. You're probably going to need to eat your own dogfood.


Daniel Stenberg: target-independent libcurl headers

Thu, 15/06/2017 - 08:46

We write libcurl to be very portable. It can be built and run on virtually every operating system with a CPU architecture of at least 32 bits, from some of the most legacy Unixes from the early 90s to the most recent updates to all the popular systems, including the widespread mobile platforms.

Type sizes on different archs

In the early 2000s we added support to libcurl for “large files” (back in the days when that support wasn’t always present in your operating systems) and large variable types (beyond 32 bits), to work for applications and libcurl alike, and to work the same way for libcurl-using applications independently of which platform you’d compile the code on.

We started out using compiler/system defines to figure out for example the size of the native “off_t” type to know if it was 32 bit or 64 bit. That turned out to be problematic as users accidentally ended up in situations where the library considered a type to be one size and the application considered it to be another, leading to unexpected behaviors at best or downright crashes and misery.

Determine at lib build-time

The fix to that run-time size-of-variables confusion was to generate a fixed “outcome” at build-time that would then be used by both the library and applications so that they could never again disagree on this. The obvious downside here was that we had to generate this target specific information into public headers for the library (known as curl/curlbuild.h). We didn’t like doing it this way, but this approach was a better situation than before as it caused less headaches for users.

Now we instead created problems for system packagers who wanted to provide a set of curl headers and allow users to build for example either a 32 bit build or a 64 bit build of their application – so they had to generate two sets of curl headers. Or having the headers on a shared file system to be used by many different systems. Inconvenient. But as this solution didn’t hurt too many people, was a cumbersome problem to fix and yet possible to work around, it remained in the curl project since August 7, 2008 (commit 14240e9e109fe6af1).

Determine at app build-time

In March 2017 (commit 9506d01ee5) we introduced a new take on this problem: a new header that checks system defines and determines all the necessary information at the time the application is compiled instead of at the time libcurl is compiled. We call it curl/system.h.
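
The rough idea can be sketched like this in C (an invented illustration, not the real curl/system.h, and with a made-up type name):

/* Pick a 64-bit offset type based on compiler defines, at app build time. */
#if defined(_MSC_VER)
typedef __int64 my_curl_off_t;   /* MSVC spelling of a 64-bit integer */
#else
typedef long long my_curl_off_t; /* C99 spelling elsewhere */
#endif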

The goal was to replace the generated curlbuild.h header, but since it would cause serious problems if this new header would get any different results (like variable type sizes) than the old header, it was a risky move. We needed extra seat-belts for this.

We therefore added the new header next to the old header in parallel, and introduced a test case in the curl test suite that verifies the output from the two systems and makes sure that they agree, with both present in the curl source tree, coexisting. The curl/system.h file was of course not yet used for anything real, but it was tested by everyone who runs the test suite – to make sure it isn’t awful.

We think the new header file has now proven itself worthy. We have not gotten any recent reports on problems with test 1541. It is time to cut out the old header system and launch the new!

Starting in release curl 7.55.0, due to be released on August 9, 2017, the header files will finally again be truly platform agnostic. It took us nine years but we finally did it! The bulk of the change is made in this commit.

Just another detail in the machinery.

