I fly – a lot. I spend more time in airports, in the air, in hotel rooms and at conferences than at home. As I am a natural recording and analysing device, I take in a lot on my travels. People at airports are stressed, confused and inattentive; they eat badly and don’t always feel good. They are tired, they feel rushed and they just want to get things over with and get where they want to go. Others – those new to travel – are overly excited about everything and want to do things right, making mistakes because they are too eager. Exactly what users on the web are like. I found that companies who use technology for the benefit of their users are the ones people love and support. That’s what progressive enhancement means to me. But let’s start at the beginning.
Getting somewhere by plane is pretty simple. You buy a ticket and you get a booking confirmation number, an airport you leave from, a time and a destination airport. To claim all this and get on the flight, you also need to prove that you are you. On domestic flights you can do this with the credit card you booked the flight with, a driving license or your passport. For international travel, the latter is always the safest option.
The main thing to fear about flying is a delay that makes you miss your plane. Delays can be caused by weather, by technical failures with the plane or the airport, or by air traffic control issues – it is busy up in the blue yonder, as this gorgeous visualisation shows. Another big issue is getting to the airport in time, as all kinds of traffic problems can delay you.
You can’t do much about that – you just have to take it in stride. I plan 3 hours from my house to sitting on the plane.

Avoid the queue
One thing you want to avoid is queues. The longer the queue, the more likely you are to miss your plane. Every single person in that queue and their problems become yours.
Airlines understand that and over the years have put improvements in place that make it easier for you to get up in the air.
In essence, what you need to get in exchange for your information is a boarding pass. It is the proof that all is well and you are good to go.
The fool-proof way of doing that is having check-in counters. These are staffed by people with computers: you go there, give them your information and you get your boarding pass. You can also drop off your luggage and get up-to-date information on delays, gates and – if you are lucky – upgrades. Be nice to the staff – they have a tough job and they can mess up your travels if you give them a tough time.

Improvement: self check-in counters
Manned check-in counters are also the most time-consuming and expensive way. They don’t scale to hundreds of customers – hence the queues.
The first step to improve this was self-check-in terminals. If you allow people to type in their booking confirmation and scan their passport, a machine can issue the boarding pass. You can then have a special check-in counter only for those who need to drop off luggage. Those without luggage move on to the next level without having to interact with a person behind the counter or take up a space in the queue. Those who don’t know how to use the machine, who forgot some information or who encounter a technical failure can still go to a manned check-in counter.

Improvement: mobile apps
Nowadays this is even better. We have online check-in that allows us to check in at home and print out our own boarding passes. As printer ink is expensive and boarding passes tend to be A4 and littered with ads, you can also use apps on smartphones.
Of course, every airline has its own app, and they all work differently and – at times – in mysterious ways. But let’s not dwell on that.
Apps are incredible – they show you when your flight leaves and whether it is delayed, and you don’t need to print out anything. You get this uplifting feeling that you’re part of a technical elite and that you know your stuff.
Of course, as soon as you go high-tech, things also break:
- You can always run out of battery
- Apps crash and need a connection to restart and refresh your booking content. That’s why a lot of people take screenshots of their boarding passes in the app.
- You need to turn off your phone on planes, which means that when changing to another plane you need to reboot it, which takes time.
- Some airports don’t have digital readers for QR codes, or grant access to the priority lane only with a rubber stamp on a paper boarding pass (looking at you, SFO). That’s why you need a printout.
- Staff checking your boarding pass at security and at the gate tend to wait for your phone display to go to sleep before trying to scan it. Then they ask you to enter your unlock code. There is probably some reason for that.
- Some security lanes need you to keep your boarding pass with you but you can’t keep your phone on you as it needs to be X-Rayed. You see the problem…
Despite all that, you are still safe. When things go wrong, there are the fallbacks of the machines or the manned counter to go back to.

This is progressive enhancement

This is progressive enhancement:
- You put things in place that work and you make it more convenient for your users who have technical abilities.
- You analyse the task at hand and offer the most basic solution.
- With this as a security blanket, you think of ways to improve the experience and distribute the load.
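The check-in story above can be sketched as code – a hedged illustration of the layering, not any airline's real logic; all names here are made up:

```javascript
// Progressive enhancement in miniature: start from a baseline that
// always works, then layer on conveniences only when their
// prerequisites are actually met. (All names are illustrative.)
function pickCheckinFlow(env) {
  // Baseline: the manned counter works for everyone.
  let flow = 'counter';

  // Enhancement 1: self-service terminal, if the machine can
  // scan the passport.
  if (env.canScanPassport) {
    flow = 'terminal';
  }

  // Enhancement 2: mobile boarding pass, but only if the device
  // supports it AND the airport can actually read it.
  if (env.hasSmartphone && env.airportReadsQR) {
    flow = 'app';
  }

  return flow;
}

console.log(pickCheckinFlow({})); // 'counter'
console.log(pickCheckinFlow({ canScanPassport: true })); // 'terminal'
console.log(pickCheckinFlow({ canScanPassport: true, hasSmartphone: true, airportReadsQR: true })); // 'app'
```

The point is the order: each tier is only chosen on top of a working fallback, so a failure at any level degrades gracefully instead of blocking the task.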
You make it easier for users who frequently use your product. That’s why I get access to fast-track security lanes and lounges. I get a reward for saving the company time and money and for allowing them to cater to more users.
You almost never meet people in these lounges who have bad things to say about the airline. Of course they are stressed – everybody at an airport is – but there is trust in the company they chose, and good experiences mean a good relationship. You can check in 24 hours before your flight, and all you bring to the airport is your phone and your passport. If you fail to do so, or simply feel like it, you can still go to the counter. You feel like James Bond or Tony Stark.

Forcing your users to upgrade
Then there is Ryanair and other budget airlines. You will be hard-pressed to find anyone who loves them. The mood ranges from “meh, it is convenient, as I can afford it” to “necessary evil” and ends in “spawn of Satan and bane of my existence”. Why is that?
Well, budget airlines try to save and make money wherever they can. They have fewer ground staff and check-in counters. They have online check-in and expect you to bring a printout of your boarding pass. They have draconian measures when it comes to the size and weight of your luggage. They are less concerned about your available space on the plane, or happy to charge extra for it. Instead of using a service, it feels like you have to game it. You need to be on your toes, or you pay extra. You feel like you have to work for what you already paid for, and you feel not empowered but stupid when you forget one thing the company requires you to have – things other airlines don’t bother with.
They also have apps – pretty ones at that. When everything goes right, these are cool. Yet they come with silly limitations. These companies chose to offer apps so they could cut down on ground staff and check-in counters. The apps are not an improvement or a convenience; they become a necessity.

The “let’s make you queue anyways” app experience
The other day I was in Italy, flying to Germany with Ryanair. I have no Italian data connection and roaming is expensive. I also had no wireless in the hotel or at the convention I attended. Ryanair allows me to check in online with a browser 24 hours before the flight. I couldn’t. The app is even more draconian: it only lets you check in two hours before the flight. If you remember, I add a 3-hour cushion for getting to the airport to my travels. Which means that when I need to check in, I am on the road – and in London that means underground, without a connection.
I grumpily queued up at the hot, packed airport, in a massive queue full of screaming kids and drunk tourists. Others stood over half-unpacked luggage, their passports missing. When I arrived at the counter, the clerk told me that I should have printed out my boarding pass or checked in with the app. As I had failed to do so, I now needed to pay 45 Euro for him to print my boarding pass.
This was almost the price of the ticket. I told him that because of the two-hour window and my lack of connectivity, I couldn’t have done that. All I got was “this is our policy”.
I ground my teeth and turned on my roaming data, trying to check in with the app. Instead of asking for my name and booking confirmation, it asked for all kinds of extra information. I guess the reason was that I hadn’t booked the ticket myself – someone had booked it for me. The necessary information included entering a lot of dates with a confusing date picker. In the end, I was one minute late, and the app told me there was no way to check in without going to a counter. I queued up again, and the clerk told me that I could not pay at his counter. Instead, I needed to go to the other side of the airport to the ticketing counter, pay there and bring back a printout proving that I had paid. Of course, there was another queue. Coming back, I ended up in yet another queue, this time for another flight. I barely made it to my plane.
Guess what my attitude towards future business with this airline is. Right – they have a bleak future with me.

Progressive enhancement is for the user and you benefit, too
And this is what happens when you use progressive enhancement the wrong way. Yes, an app is an improvement over queuing up or printing out. But you shouldn’t add arbitrary rules or punish those who can’t use it. Progressive enhancement is for the benefit of the end user, and we also benefit a lot from it. Unlike the physical world of airports, we can enhance without extra overhead. We don’t need to hire extra ground staff or put up hardware to read passports. All we need to do is analyse:
- What is the basic information the user needs to provide to fulfill a task?
- What is the simplest interface to achieve this?
- How can we improve the experience for more advanced users and those on more advanced hardware?
The last point is the main thing: you don’t rely on any of those improvements. Instead, you test whether they can be applied and apply them as needed.
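On the web, that testing is plain feature detection. A minimal sketch, under stated assumptions – the capability objects below stand in for two different browsers, and every name is made up:

```javascript
// "Test if it can be applied, then apply it": never assume the
// enhancement exists, and keep the baseline path intact.
function withEnhancement(capability, enhanced, baseline) {
  // Only run the enhanced path when the capability truly exists.
  return typeof capability === 'function' ? enhanced(capability) : baseline();
}

// Simulated environments, standing in for two different browsers:
const oldBrowser = {};
const newBrowser = { geolocate: () => 'lat/long from device' };

const resultOld = withEnhancement(oldBrowser.geolocate, (fn) => fn(), () => 'ask the user for a postcode');
const resultNew = withEnhancement(newBrowser.geolocate, (fn) => fn(), () => 'ask the user for a postcode');

console.log(resultOld); // 'ask the user for a postcode'
console.log(resultNew); // 'lat/long from device'
```

Either way the task gets done; the capable environment just gets a nicer ride.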
Use progressive enhancement as a means to reward your users. Don’t expect them to do things for you just to use your product. If the tools you use mean your users need a “modern” browser and have to load a lot of script, you share your problems with them. You can only get away with that if you offer a cheaper version of what others offer, but that’s a risky race to take part in. You can win their current business, but never their hearts or their support. You become a necessary evil, not something they tell others about.
Ready for the release! This RC was mainly about fixing the last Pocket bugs and some stability fixes.
- 22 changesets
- 47 files changed
- 301 insertions
- 191 deletions
Extension occurrences: html 13, cpp 6, js 3, sh 2, properties 2, ini 2, py 1, mn 1, json 1, jsm 1, java 1, h 1

Module occurrences: dom 16, mobile 15, browser 7, toolkit 2, testing 2, js 2, gfx 2, layout 1

List of changesets:
- Jean-Yves Avenard: Bug 1154881 - Disable test. r=karlt, a=test-only - 573c47bc1bf2
- Nick Alexander: Bug 1151619 - Add Adjust SDK license. r=gerv, a=NPOTB - 62e7fffff542
- Ryan VanderMeulen: Bug 1164866 - Bump mozharness.json to rev 6f91445be987. a=test-only - f2ef3e1dadaf
- Jared Wein: Bug 1166240 - Add pocket.svg to aero section of toolkit's windows/jar.mn. r=Gijs, a=gavin - 58d8fb9fc5e3
- James Willcox: Bug 1163841 - Always call eglInitialize(), but kill the preloading hack (which was crashing before). r=nchen, a=sledru - daa1f205525a
- Benjamin Chen: Bug 1149842 - Release the mutex for NS_OpenAnonymousTemporaryFile to prevent the deadlock. r=roc, a=sledru - 06bdddc6463d
- Chris Manchester: Bug 978846 - Add a file to the tree to tell mozharness what arguments from try are acceptable to pass on to the harness process. r=ahal, a=test-only - cda517b321ee
- Alexandre Lissy: Bug 960762 - Fix intermittence of Notification mochitests. r=mhenretty, a=test-only - fe2c942655ec
- Aaron Klotz: Bug 1158761 - Part 1: Make CheckPluginStopEvent run asynchronously. r=bholley, a=sledru - c163f5453215
- Aaron Klotz: Bug 1158761 - Part 2: Update checks for plugin stop event in tests. r=jimm, a=sledru - aa884d29e93c
- tbirdbld: Automated checkin: version bump for thunderbird 38.0b6 release. DONTBUILD CLOSED TREE a=release - 7f925ad5b331
- Justin Dolske: Bug 1164649 - More late string changes in Pocket. r=jaws a=Sylvestre - 36b60a224d01
- Geoff Brown: Bug 1073761 - Increase timeout for test_value_storage. r=dholbert, a=test-only - 1266331d5bc7
- Kyle Machulis: Bug 1166870 - Fix permissions on settings event tests. a=test-only - 9e473441cbd9
- Albert Crespell: Bug 849642 - Intermittent test_networkstats_enabled_perm.html. r=ettseng, a=test-only - bee6825f6c92
- Albert Crespell: Bug 958689 - Fix intermittent errors in networkstats tests. r=ettseng, a=test-only - ad098fdd6f81
- Milan Sreckovic: Bug 1156058 - Null pointer check. r=jgilbert, a=sledru - 013da2859c88
- Nicholas Nethercote: Bug 1103375 - Fix some crashes triggered from about:memory. r=mrbkap, a=sledru - b90caf52b6e2
- Gijs Kruitbosch: Bug 1166771 - Force isArticle to false on pushstate on non-article pages. r=margaret, a=sledru - 17169e355c59
- Jeff Muizelaar: Bug 1165732 - Block WARP when using the built-in VGA driver. r=bas, a=sledru - a297bd71b81a
- Gijs Kruitbosch: Bug 1167096 - Flip introductory prefs if there's no saved state. r=jaws, a=sledru - 3ef925962765
- Nick Thomas: Backout rev 27bacb9dff64 to make mozilla-release ready to do release builds again, ra=release DONTBUILD - 79f9cd31b4b1
You might have noticed that I had no “Things I’ve Learned This Week” post last week. Sorry about that – by the end of the week, I looked at my Evernote of “lessons from the week”, and it was empty. I’m certain I’d learned stuff, but I just failed to write it down. So I guess the lesson I learned last week was: always write down what you learn.

How to make your mozilla-central Mercurial clone work faster
I like Mercurial. I also like Git, but recently, I’ve gotten pretty used to Mercurial.
One complaint I hear over and over (and I’m guilty of it myself sometimes), is that “Mercurial is slow”. I’ve even experienced that slowness during some of my Joy of Coding episodes.
This document did not exist when I first started working with Mercurial – back then, I was using mq or sometimes pbranch, and grumbling about how I missed Git.
But there is some gold in this document.
gps has been doing some killer work documenting best practices with Mercurial, and this document is one of the results of his labour.
watchman is a tool that some folks at Facebook wrote to monitor changes in a folder. hgwatchman is an extension for Mercurial that takes advantage of watchman for a repository, smartly precomputing a bunch of stuff when the folder changes so that when you fire a command like hg status, it takes a fraction of the time it’d take without hgwatchman. A fraction.
Here’s how I set hgwatchman up on my MacBook (though you should probably go by the Mercurial for Mozillians doc as the official reference):
- Install watchman with brew: brew install watchman
- Clone the hgwatchman extension to some folder that you can easily remember and build it: hg clone https://bitbucket.org/facebook/hgwatchman && cd hgwatchman && make local
- Add the following lines to my user .hgrc: [extensions] hgwatchman = cloned-in-dir/hgwatchman/hgwatchman
- Make sure the extension is properly installed by running: hg help extensions
- hgwatchman should be listed under “enabled extensions”. If it didn’t work, keep in mind that the extension path needs to target the hgwatchman directory inside the clone
- And then in my mozilla-central clone’s .hg/hgrc: [watchman] mode = on
- Boom, you’re done!
Congratulations, hg should feel snappier now!
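Collected in one place, the two configuration fragments from the steps above look like this (the extension path is whatever directory you cloned hgwatchman into):

```
# in your user ~/.hgrc
[extensions]
hgwatchman = cloned-in-dir/hgwatchman/hgwatchman

# in your mozilla-central clone's .hg/hgrc
[watchman]
mode = on
```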
In Episode 15, we kept working on the same bug as the last two episodes – proxying the printing dialog on OS X to the parent process from the content process. At the end of Episode 14, we’d finished the serialization bits, and put in the infrastructure for deserialization. In this episode, we did the rest of the deserialization work.
And then we attempted to print a test page. And it worked!
We did it!
Then, we cleaned up the patches and posted them up for review. I had a lot of questions about my Objective-C++ stuff, specifically with regards to memory management (it seems as if some things in Objective-C++ are memory managed, and it’s not immediately obvious what that applies to). So I’ve requested review, and I hope to hear back from someone more experienced soon!
I also plugged a new show that’s starting up! If you’re a designer, and want to see how a designer at Mozilla does their work, you’ll love The Design Hour, by Ricardo Vazquez. His design chops are formidable, and he shows you exactly how he operates. It’s great!
Finally, I failed to mention that I’m on holiday next week, so I can’t stream live. I have, however, pre-recorded a shorter Episode 16, which should air at the right time slot next week. The show must go on!
The Balkans Inter-Community meet-up 2015 will take place in Bucharest, Romania, on May 22-24th. Lead contributors from Balkan communities will be invited and sponsored by...
It's been a while (March 2014 to be precise) since I gathered meaningful rr performance numbers. I'm preparing a talk for the TCE 2015 conference and as part of that I ran some new benchmarks with mozilla-central Firefox. It turned out that numbers had regressed --- unsurprisingly, since we don't have continuous performance tests for rr, and a lot has changed since March 2014. In particular, Firefox has evolved a lot, our tests have changed, we're using x86-64 now instead of x86-32, and rr has changed a lot. Over the last few days I studied the regressions and fixed a number of issues: in particular, during the transition to x86-64 some of the optimizations related to syscall-buffering were lost because we weren't patching some important syscall callsites and we weren't handling the recvfrom syscall, which is common in 64-bit Firefox. I also realized that in some cases we were flushing much more data from the syscallbuf to the trace file than we'd actually recorded in the buffer, massively bloating the traces, and fixed that.
There are still some regressions evident since last March. Octane overhead has increased significantly. Forcing Octane to run on a single core without rr shows a similar overhead; in particular that alone causes one test (Mandreel) to regress by a factor of 10! My guess is that Spidermonkey is using multiple cores much more aggressively than it did last year, and because it's carefully tuned for Octane, going back to a single core really hurts performance. Replay overhead on the HTML mochitests has also increased significantly; I think this is partly because we changed rr to disable syscall buffering on writes to standard output. This improves the debugging experience but it results in a lot more overhead during replay.
Overall though, I remain very happy with rr performance, especially recording performance, which is critical when you're trying to capture a test failure under rr. Replay performance is becoming more important since it impacts the debugging experience, especially reverse execution; but doing a lot of work to improve raw replay performance is low priority since I think there are projects that could provide a better improvement in the debugging experience for less work (e.g. the ability to take a checkpoint during a recording and start debugging from there, and implement support for gdb's evaluate-in-target conditional breakpoints).
Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.
- Webmaker & Mozilla Learning Update.
- Suggested Tiles for Firefox update.
- Featured Events.
- Help me with my project.
- Whistler WorkWeek – Reimbursements
- Mozilla Reps SEA (SouthEast Asia) Online Meetup
Michelle joined the call to talk about webmaker and Mozilla Learning projects.
- Webmaker has a new look and an improved toolset: https://beta.webmaker.org/
- teach.mozilla.org is the new home for web literacy and the teaching community. https://teach.mozilla.org/
- Mozilla Clubs is a model for how to teach the web in a sustained way and a huge opportunity to work with Reps and Mozilla Clubs to have low-no budget sustainable local groups that learn and teach the web. More information.
Have a question? Ask on discourse.
Patrick joined the call to update Reps about suggested Tiles in Firefox.
This week it will be announced that Suggested Tiles are landing in beta, starting with US users.

Firefox will use your history locally to suggest interesting tiles, and it’s going to be super easy to opt out or hide tiles you are not interested in. Firefox is the one deciding which tiles to show, not the partners.

Reps can be involved with this project in two ways:
- Suggesting community tiles.
- Helping to curate relevant local content from partners.
Patrick will work with the Reps team to open this opportunity and to improve localization around this announcement and the technical details.
We have opened a discourse topic to ask any questions you might have.

Featured events
These are some events that have happened or are happening this week.
- Mozilla Balkans: 22-25 May. More info on the wiki
- Rust Releases parties: 23rd (Pune, Bangalore)
- Debian/Ubuntu community Conference: 23-24 (Milano).
- Mozilla QA Bangladesh, Train the contributors: 26th (Dhaka).
- Festival TIK 2015 – Bandung: 28-29 (Bandung, Indonesia).
@george would love to know if a few volunteers would be excited to help out with new staff onboarding.
Requires availability at 17:15 UTC on Mondays for a 15min presentation, the benefit is that we would provide public speaking/presentation training and coaching and you will talk to new hires about the community and how awesome it is.
Business card generator
@helios needs help with the business card generator, which is written in nodejs.
@lshapiro needs help to test multiprocess in Firefox Developer Edition and add-ons.
We are only one month away from the workweek, and there might be some questions about how to help volunteers who need reimbursements.

Mozillians will reach out to Reps for reimbursement, so help them as best you can to make the reimbursement smooth.
There will be an event created on the Reps portal to use in the budget form; otherwise, add the Mozillians page URL (or a Reps profile URL) as the event in the request form.
Contact your mentor if you have doubts about reimbursing without an event; for other questions, reach out to @franc.

Mozilla Reps SEA (SouthEast Asia) Online Meetup
The next online meetup of ReMo SEA will be on Fri 22 MAY 2015 at 1200Z (UTC)
This is a monthly meet-up held by @bobreyes. Reps based in nearby countries (i.e. China [including Hong Kong], Taiwan, Japan and Korea) are also welcome to attend the online meetup, even people from Europe/Americas are invited to join!
They will share more details once the meet-up is over.
Don’t forget to comment about this call on Discourse and we hope to see you next week!
Twenty years ago (May 22nd 1995), Pulp released the single "Common People":
Unfortunately, that version is censored. At 2 mins 30 seconds the lyrics are:

You'll never watch your life slide out of view,
and dance and drink and screw
Because there's nothing else to do.
The version on YouTube omits "and screw". This song has some great lyrics:

Smoke some fags and play some pool, pretend you never went to school.
But still you'll never get it right
'cos when you're laid in bed at night watching roaches climb the wall
If you call your Dad he could stop it all.
This was a defining song of the Britpop era and I can remember it clearly being part of an Oasis (Manchester) vs Pulp (London) rivalry.
Of course, you haven't made it until William Shatner covers it:
In 2011, Jarvis Cocker praised the cover version: "I was very flattered by that because I was a massive Star Trek fan as a kid and so you know, Captain Kirk is singing my song! So that was amazing."
Apparently the subject of the song is a lady who might have been named Danae – my wife's name.
There is a documentary about Pulp I haven't seen:
I'm now going to go listen to every Pulp song ever.
Last week I went to BlinkOn 4 in Sydney, having been invited by a Google developer. It was a lot of fun and I'm glad I was able to go. A few impressions:
It was good to hear talk about acting responsibly for the Web platform. My views about Google are a matter of public record, but the Blink developers I talked to have good intentions.
The talks were generally good, but there wasn't as much audience interaction as I'd expected. In my experience interaction makes most talks a lot better, and the BlinkOn environment is well-suited to interaction, so I'd encourage BlinkOn speakers and audiences to be a bit more interactive next time. I admit I didn't ask as many questions during talks as I usually do, because I felt the time belonged to actual Blink developers.
Blink project leaders felt that there wasn't enough long-term code ownership, so they formed subteams to own specific areas. It's a tricky balance between strong ownership, agile migration to areas of need, and giving people the flexibility to work on what excites them. I think Mozilla has a good balance right now.
The Blink event scheduling work is probably the only engine work I saw at BlinkOn that I thought was really important and that we're not currently working on in Gecko. We need to get going on that.
Another nice thing that Blink has that Gecko needs is the ability to do A/B performance testing on users in the field, i.e. switch on a new code path for N% of users and see how that affects performance telemetry.
On the other hand, we're doing some cool stuff that Blink doesn't have people working on --- e.g. image downscaling during decode, and compositor-driven video frame selection.
I spent a lot of time talking to Google staff working on the Blink "slimming paint" project. Their design is similar to some of what Gecko does, so I had information for them, but I also learned a fair bit by talking to their people. I think their design can be improved on, but we'll have to see about that.
Perhaps the best part of the conference was swapping war stories, realizing that we all struggle with basically the same set of problems, and remembering that the grass is definitely not all green on anyone's side of the fence. For example, Blink struggles with flaky tests just as we do, and deals with them the same way (by disabling them!).
It would be cool to have a browser implementors' workshop after some TPAC; a venue to swap war stories and share knowledge about how to implement all the specs we agreed on at TPAC :-).
This paper is the last artifact of my work at Mozilla, since I left employment there at the beginning of April. I believe that Mozilla can make progress in privacy, but leadership needs to recognize that current advertising practices that enable "free" content are in direct conflict with security, privacy, stability, and performance concerns -- and that Firefox is first and foremost a user-agent, not an industry-agent.
Advertising does not make content free. It merely externalizes the costs in a way that incentivizes malicious or incompetent players to build things like Superfish, infect 1 in 20 machines with ad injection malware, and create sites that require unsafe plugins and take twice as many resources to load – quite expensive in terms of bandwidth, power, and stability.
It will take a major force to disrupt this ecosystem and motivate alternative revenue models. I hope that Mozilla can be that force.
Pinning helps protect users from man-in-the-middle attacks and rogue certificate authorities. When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error. This type of error can also occur if a CA mis-issues a certificate.
Pinning errors can be transient. For example, if a person is signing into WiFi, they may see an error like the one below when visiting a pinned site. The error should disappear if the person reloads after the WiFi access is set up.
Firefox 32 and above supports built-in pins, which means that the list of acceptable certificate authorities must be set at time of build for each pinned domain. Pinning is enforced by default. Sites may advertise their support for pinning with the Public Key Pinning Extension for HTTP, which we hope to implement soon. Pinned domains include addons.mozilla.org and Twitter in Firefox 32, and Google domains in Firefox 33, with more domains to come. That means that Firefox users can visit Mozilla, Twitter and Google domains more safely. For the full list of pinned domains and rollout status, please see the Public Key Pinning wiki.
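For reference, the Public Key Pinning Extension for HTTP lets a site declare its own pins in a response header, which would look something along these lines (the hashes below are placeholders, not real pins):

```
Public-Key-Pins: pin-sha256="base64primarykeyhashgoeshere=";
                 pin-sha256="base64backupkeyhashgoeshere=";
                 max-age=5184000; includeSubDomains
```

max-age says how long (in seconds) the browser should remember the pins, and a backup pin is included so a lost key doesn't lock users out of the site.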
Thanks to Camilo Viecco for the initial implementation and David Keeler for many reviews!
Bi-weekly meeting of the German-speaking community.
Once a month, web developers from across the Mozilla Project get together to organize our political lobbying group, Web Developers Against Reality. In between sessions with titles like “Three Dimensions: The Last Great Lie” and “You Aren’t Real, Start Acting Like It”, we find time to talk about our side projects and drink, an occurrence we like to call “Beer and Tell”.
Groovecoder stopped by to share WellHub, a site for storing and visualizing log data from wells. The site was created for StartupWeekend Tulsa, and uses WebGL (via ThreeJS) + WebVR to allow for visualization of the wells based on their longitude/latitude and altitude using an Oculus Rift or similar virtual reality headset.

Osmose: Refract
Next up was Osmose (that’s me!), who shared some updates to Refract, a webpage previously shown in Beer and Tell that turns any webpage into an installable application. The main change this month was adding support for generating Chrome Apps in addition to the Open Web Apps it already supported.
This month’s session was a productive one, up until a pro-reality plant asked why we were having a real-life meetup for an anti-reality group, at which point most of the people in attendance began to scream uncontrollably.
If you’re interested in attending the next Beer and Tell, sign up for the email@example.com mailing list. An email is sent out a week beforehand with connection details. You could even add yourself to the wiki and show off your side-project!
See you next month!
Illustrating something highly-technical is more about storytelling than it is about design. My personal process often starts with a deluge of diagrams, wiki pages, stakeholder meetings, and follow-up discussions with engineers. Once I finally understand the details myself, it’s then my job to distill all that raw information into a single, coherent story.
That’s where the plot usually takes an interesting detour.
The Content Services team recently asked me to develop an infographic depicting “How user data is protected on Firefox New Tab” (PDF – 633 kB). The narrative itself was easy to illustrate because I had tremendous help from my teammates. But regardless of the refinements I continued making to the design, a crucial element always remained conspicuously absent:
The main character.
In this case, the main character was a Firefox User. My principal challenge, of course, was representing a person of any age, gender, ethnicity or language from around the globe. Secondarily, I wanted readers to feel something – maybe even smile. But most importantly, I wanted readers to clearly identify the User as the star of the infographic.
In other words, I needed a good mascot.
Folks don’t generally connect with the generic on an emotional level, so I knew instinctively that flat, vaguely male or female silhouettes would be too generic for a global audience.
Maybe an animal? The Firefox mascot is a fox, after all, and small furry creatures are inherently disarming. I quickly discovered, though, that many animals could be interpreted as personality types or even specific nations. Every option seemed close to the mark, but fell short upon further reflection.
Then the obvious roared in my face.
Historically, Mozilla has been represented by a dinosaur. And not the dead-fossil kind, either, but a living, breathing carnivore. I’ve always liked that image. The Mozilla T-rex, however, wasn’t the star of the story (and Mozillians aren’t all that carnivorous, anyway). Still, I could easily build upon this imagery without fear of alienating any particular person or group.
In the end, the species I chose to represent Users is one of the most recognizable. Besides being herbivores (which somehow seemed more appropriate), Triceratops command attention and demand respect. They’re creatures who appeal to our cooperative, yet intensely protective, instincts. They’re important, impossible to ignore.
And when they’re smiling, it’s hard not to love them.
Done and done.
At our May Brantina (Breakfast + Cantina), we'll be joined by Kate Heddleston, a software engineer in San Francisco. Kate will share how effective onboarding...
Mozilla has a long history of innovating with how users interact with content: tabs, add-ons, live bookmarks, the Awesome bar – these and many more innovations have helped the Web to dominate desktop computing for the last decade. Six months ago we launched Directory Tiles in Firefox, and have had great success with commercial partnerships and in aiding awareness for content important to the project, including Mozilla advocacy campaigns in support of net neutrality and the Mozilla Manifesto.
Today, I’m pleased to announce Suggested Tiles – our latest innovation and complement to Directory Tiles, as we work to create a more powerful and personalized Web experience for our users. I discussed the Mozilla mission in the context of digital advertising earlier this year. Suggested Tiles represents an important step for us to improve the state of digital advertising for the Web, and to deliver greater user agency.
Much of today’s digital advertising utilizes data harvested through a user’s browsing habits to target ads. However, many consumers are increasingly wary of how their data is being collected and shared in the advertising ecosystem without transparency and consent – and complex opt-outs or unreadable privacy policies exacerbate this. Many users even block advertisements altogether. This situation is bad for users, bad for advertisers and bad for the Web.
With Suggested Tiles, we want to show the world that it is possible to do relevant advertising and content recommendations while still respecting users’ privacy and giving them control over their data. And to bring influence to bear on the whole industry, we know we will need to deliver a highly effective advertising product.
We believe users should be able to easily understand what content is promoted, who it is from and why they are seeing it. It is the user who owns the profile: only a Firefox user can edit their own browsing history. And for users who do not want to see Suggested Tiles, opting out only takes two clicks from the New Tab page, without having to read a lot of instructions. To deliver Suggested Tiles we do not retain or share personal data, nor are we using cookies. If you want to learn more about how Suggested Tiles protects a user’s data, we produced this infographic, and the Mozilla policy team have described the details of how our data principles translate to the data policy for Suggested Tiles.
Suggested Tiles are controlled by the user, respect their privacy and are not directed towards a captive audience. As different as this sounds, we believe that this makes Tiles a better experience for users and for advertisers.
Suggested Tiles will help advertisers and content owners connect with millions of Firefox users, and do so at a time when the user is receptive to hearing from them, making it a much more valuable connection. By delivering content experiences based on the user’s recent and most frequent browsing, we know when content will have high relevance. And because we are delivering this content early in a browsing session – rather than mixed in with the user’s activity – we know they are more likely to engage with it. We already have some very satisfied partners for Directory Tiles, and I am confident that Suggested Tiles will deliver even higher levels of engagement.
For partners who are interested in getting involved with the Suggested Tiles initiative, we have a site where you can learn more and register your interest: http://content.mozilla.org.
So what happens next? Suggested Tiles will be going to Beta soon and then live later in the summer. Initially, users will see “Affiliate” Tiles advertisements for other Mozilla causes and Firefox products before Suggested Tiles from our content partners appear. Note that we’ll be rolling out the product in phases, starting with Firefox users in the US.
If you have any questions about how Suggested Tiles will work, need more information or want to explore a potential partnership with us, please visit content.mozilla.org.
This is still one of our early steps towards our goal of improving the state of digital advertising for the Web – delivering greater transparency for advertisers, better, more relevant content experiences and, above all, greater control for Firefox users.
I wrote a previous update about my work on multiplexing in curl. This is a follow-up to describe the status as of today.
I’ve successfully used the http2-upload.c code to upload 600 parallel streams to the test server: all of them were sent off correctly and the responses were received and stored correctly. MAX_CONCURRENT_STREAMS on the server was set to 100.
This is using curl git master as of right now (thus scheduled for inclusion in the pending curl 7.43.0 release). I’m not celebrating just yet, but it is looking pretty good. I’ll continue testing.
Commit b0143a2a3 was crucial for this: I realized we stored and used the read callback in the connection struct rather than in the easy handle, which is completely wrong when many easy handles are using the same connection! I don’t recall the exact reason why I put the data in that struct (I went back and read the commit messages etc), but I think this setup is correct conceptually and code-wise, so if this leads to some side-effects I think we need to just fix it.
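To see why the callback’s location matters, here is a minimal sketch (in Python, with hypothetical class names – curl itself is C) of the bug described above: with HTTP/2 multiplexing, many transfers (“easy handles”) share one connection, so state stored on the connection gets clobbered by whichever transfer registered it last.

```python
# Sketch of why per-transfer state must live on the transfer, not on the
# shared connection. Class and field names are illustrative only.

class Connection:
    """One TCP/TLS connection, shared by many multiplexed transfers."""
    def __init__(self):
        self.read_cb = None  # buggy location: shared by all transfers


class Transfer:
    """One upload ("easy handle") using a shared connection."""
    def __init__(self, name, conn):
        self.read_cb = lambda: f"data from {name}"  # correct location
        conn.read_cb = self.read_cb  # buggy: overwrites the previous transfer's callback


conn = Connection()
t1 = Transfer("upload-1", conn)
t2 = Transfer("upload-2", conn)

# Buggy behaviour: both uploads now read data via upload-2's callback.
print(conn.read_cb())  # "data from upload-2" -- upload-1's data source is lost
# Fixed behaviour: each transfer keeps and uses its own callback.
print(t1.read_cb())    # "data from upload-1"
```

This is the conceptual shape of the fix: move the per-transfer data from the connection struct to the easy handle, so 600 parallel streams each keep their own read source.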
Next up: more testing, and then taking on the concept of server push to make libcurl able to support it. It will certainly be a subject for future blog posts…
At Mozilla we’ve been using The Mozilla Defense Platform (lovingly referred to as MozDef) for almost two years now and we are happy to release v1.9. If you are unfamiliar, MozDef is a Security Information and Event Management (SIEM) overlay for ElasticSearch.
MozDef aims to bring real-time incident response and investigation to the defensive tool kits of security operations groups in the same way that Metasploit, LAIR and Armitage have revolutionized the capabilities of attackers.
We use MozDef to ingest security events, alert us to security issues, investigate suspicious activities, handle security incidents, and visualize and categorize threat actors. The real-time capabilities allow our security personnel all over the world to work collaboratively and see changes as they occur, even though we may not sit in the same room together. The integration plugins allow the system to automatically respond to attacks in a preplanned fashion to mitigate threats as they occur.
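As a rough illustration of ingestion, here is a minimal sketch of a MozDef-style JSON event document. The field names used here (timestamp, summary, category, severity, tags, details) follow common MozDef examples, but check your deployment’s schema and event-input endpoint before relying on them.

```python
import json

# Hypothetical example event; field names assumed from typical MozDef usage.
event = {
    "timestamp": "2015-05-21T12:00:00+00:00",
    "summary": "failed ssh login for user alice",
    "category": "authentication",
    "severity": "INFO",
    "tags": ["ssh"],
    "details": {
        "sourceipaddress": "203.0.113.7",  # RFC 5737 documentation address
        "username": "alice",
        "success": False,
    },
}

# Serialize for submission; you would POST this to your MozDef event input.
payload = json.dumps(event)
print(payload)
```

From there, MozDef indexes the event into ElasticSearch, where alerts and dashboards pick it up.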
Notable changes include:
- Support for Google API logs (login/logout/suspicious activity for Google Drive/Docs)
- http://cymon.io API integration
- Myo armband integration
Using the Myo armband in a TLS environment may require some tweaking to allow the browser to connect to the local Myo agent. Look for a how-to in the docs section soon.
Feel free to take it for a spin on the demo site. You can log in by creating any test email/password combination you like. The demo site is rebuilt occasionally, so don’t expect anything you put there to live for more than a couple of days – but feel free to test it out.