Hello, SUMO Nation (and Planet Mozilla, woohoo!) It is time again to give you a fresh slew of updates and reminders (I know you love reminders) from the world of SUMO!

New SUMO warriors, assemble & support! Say “hi” in the forums!

Contributors of the week
- Philipp – for rounding up bugs/common issues on the forums
- Michael – for being an all-star host in the forums and helping with the Templates
- Vanja – for localizing all Serbian Templates and his on-going awesomeness in the l10n corner of SUMO
Gentlemen, we salute you!

Monday SUMO Community meeting
- …is going to take place on Monday, 20th of July. Join us!
- If you want to add a discussion topic to the agenda of the upcoming live meeting:
- Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Monday (this will make it easier to have an efficient meeting).
- Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
- Help us make things happen after the #mozwww Whistler meetup – go to this thread and read more about the sessions linked from there – and provide feedback!
- Let us know what you think should be the theme for the upcoming SUMO KB Day (next week)
- Send Firefox 39 and Firefox 40 bugs to Mark.
- Let’s work together on the script for the tutorial videos for SUMO l10n! Use the “comments” feature to leave your feedback and ideas in the document.
- Help us figure out what swag we’ll bring for those who will join us in Orlando! Email or ping Roland on IRC.
- The notes from the most recent Platform meeting can be found here.
- The video is on our YouTube channel as well.
- The next Platform meeting will take place on the 23rd of July.
- Sprint 12 is in full swing: http://edwin-dev.herokuapp.com/t/sumo
- Among other things, our Devs are working on:
- implementing missing features for CodeMirror
- setting up A/B experiments for boost values
- making sure that the “No one has helped translate this article yet…” notification appears only on articles that are marked as “Ready For Localization”
- Due to the Eid al-Fitr festival, some of our contributors might be silent for the next 2 weeks.
- The Mozilla Weekend took place last weekend in Berlin.
- Mozfest East Africa is here!
- Maker Party 2015 is here! The fourth annual celebration of making and learning on the Web has started and runs until Friday, July 31. Join the rest of the world!
- Two very interesting talks from #mozwww have dropped on Air Mozilla – watch them here!
- The new advanced search will hopefully be coming this quarter!
- The One and Done SUMO Contributor Support Training is live. Start here!
- A few light experiments are going on for new article request forms and an easier-to-use article submission form via Google Docs.
- There could be more Windows 10 articles coming up – Joni is working this out with the UX team.
- We are patiently waiting for Pontoon to roll out to mozilla.org, and then… we’re looking forward to setting it up for SUMO!
- Templates have been quite a mess recently, but Kadir & Mike will be putting a good fix in place within the next few days. In the meantime, if you’re unsure how to localize Template articles, please read this page.
- Adobe Flash became a hot topic recently for all the wrong reasons. If you want to get the most up-to-date information about our community actions around this, visit this thread.
- Win64 in Firefox 40
- Recent launches for 2.0 devices happened in South Africa, Mauritius, Niger, Mali, Senegal, Tunisia and Botswana. Hooray for Africa!
- The documentation for v40 started this week with the password manager.
- Reminder: as of July 1st, Thunderbird is 100% Community powered and owned! You can still +needinfo Roland in Bugzilla or email him if you need his help as a consultant. He will also hang around in #tb-support-crew.
There you go – we hope you enjoyed this round of updates and will join us for the Monday meeting. In the meantime, stay cool!
I’m going back to my roots as a volunteer!
The last 6 years have been incredible. Meeting and working with hundreds of Mozillians around the world has been inspiring, challenging and lots of fun. Since my internship in 2009, I’ve seen both myself and Mozilla grow a lot. Thank you for working with me on the Project and teaching me constantly.
Mozillians, continue to do great things for the Web. I’m convinced that we have the most impact when we focus on the user and do things unconventionally. I’m excited about Mozilla’s current focus and journey to space. And while it’s challenging, I know Mozillians are ready for those challenges.
Don’t just fly, soar.
I’ll continue to be involved as a Mozilla Reps mentor and as a contributor. You’ll see me around, mostly online, instead of at the Mozilla SF space each day.
This xkcd reminded me of the challenges of managing our buildfarm some days :-)
I took three courses from Coursera's Data Science track from Johns Hopkins University. As with previous Coursera classes I took, all the course material is online (lecture videos and notes). There are quizzes and assignments due each week. Each course below was about four weeks long.
The Data Scientist's Toolbox - This course was pretty easy. Basically an introduction to the questions that data scientists deal with, as well as a primer on installing R, RStudio (an IDE for R), and using GitHub.
R Programming - Introduction to R. Most of the quizzes and examples used publicly available data for the programming exercises. I found I had to do a lot of reading in the R API docs or on Stack Overflow to finish the assignments; the lectures didn't provide a lot of the material needed to complete them. There were lots of techniques for subsetting data in R, which I found quite interesting; it reminded me a lot of querying databases with SQL to conduct analysis.
Getting and Cleaning Data - More advanced techniques using R. We used publicly available data sources in different formats (XML, Excel spreadsheets, comma- or tab-delimited) and cleaned them up. Given this data, we had to answer many questions and conduct specific analyses by writing R programs. The assignments were pretty challenging and took a long time. Again, the course material didn't really cover everything you needed to do the assignments, so a lot of additional reading was required.
There are six more courses in the Data Science track, covering subjects such as reproducible research, statistical inference and machine learning, that I'll start tackling in the fall. My next Coursera class is Introduction to Systems Engineering, which I'll start in a couple of weeks. I've really become interested in learning more about this subject after reading Thinking in Systems.
The other course I took this spring was the Software Carpentry instructor training course. The Software Carpentry Foundation teaches researchers basic software skills. For instance, if you are a biologist analyzing large data sets, it is useful to learn how to use R, Python, and version control to store the code you wrote and share it with others. These are not skills that many scientists acquire in their formal university training, and learning them allows them to work more productively. The instructor course was excellent; thanks, Greg Wilson, for your work teaching us.
We read two books for this course:
- Building a Better Teacher: An interesting overview of how teaching is taught in different countries and how to make it more effective. The most important takeaway: create more opportunities for other teachers to observe your classroom and provide feedback, which I found analogous to how code review makes us better software developers.
- How Learning Works: Seven Research-Based Principles for Smart Teaching: A book summarizing the research in disciplines such as education, cognitive science and psychology on effective techniques for teaching students new material: how assessing students' prior knowledge can help you better design your lessons, how to ask questions to determine what material students are failing to grasp, how to understand students' motivation for learning, and more. Really interesting research.
For the instructor course, we met every couple of weeks online where Greg would conduct a short discussion on some of the topics on a conference call and we would discuss via etherpad interactively. We would then meet in smaller groups later in the week to conduct practice teaching exercises. We also submitted example lessons to the course repo on GitHub. The final project for the course was to conduct a short lesson to a group of instructors that gave feedback, and submit a pull request to update an existing lesson with a fix. Then we are ready to sign up to teach a Software Carpentry course!
In conclusion, data science is a great skill to have if you are managing large distributed systems. Also, using evidence based teaching methods to help others learn is the way to go!
Other fun data science examples include:
Tracking down the Villains: Outlier Detection at Netflix - detecting rogue servers with machine learning
Finding Shoe Stores in 100k Merchants: Using Data to Group All Things - finding out what Shopify merchants sell shoes using Apache Spark and more
Looking Through Camera Lenses: The Application of Computer Vision at Etsy
…and in the context of the Mozilla Learning Strategy.

Another obligatory photo of the mountains
Mozilla recently held a coincidental work week in Whistler Village.
I knew this would be a busy week when the primary task for MoFo across teams was to ‘Dig into our relationships goal’.
- ‘Relationships’ meant a lot of talk about relationship management (aka CRM).
- ‘Goal’ meant a lot of talk about metrics.
Between these two topics, my calendar was fully booked long before I landed in Canada.
My original plan was to avoid booking any meetings and be more flexible about navigating the week. But that wasn’t possible. This is a noticeable change for me in how people want to talk about metrics and data. A year ago, I’d be moving around the teams at a work week nudging others to ask ‘do you want to talk about metrics?’. This week, in contrast, was back-to-back sessions requested by many others.
We also spent a lot of time together talking about the Mozilla Learning Strategy. And this evolving conversation is feeding back into my own thinking about delivering CRM to the org.
Where I had been thinking about how to design a central service that copes with the differences between all the teams, what I actually need to focus on is the things that are the same across teams.
I’m not designing many CRMs for many teams, but instead a MoFo CRM that brings together many teams.
I’m not actually giving this post enough writing time to add the context for some readers, but that small change in framing is important. And I think very positive.
Lastly, one other important lesson learned: pay attention to time-zones and connecting flights when booking your travel, or you’ll end up sleeping in the airport.
In a few talks and interviews I have lamented a phenomenon in our market that’s always been around, but seems to be rampant by now: the full Stack Overflow developer. Prompted by Stephen Hay on Twitter, I shall now talk a bit about what this means.
Full Stack Overflow developers work almost entirely by copying and pasting code from Stack Overflow instead of understanding what they are doing. Instead of researching a topic, they go there first to ask a question hoping people will just give them the result.
In many cases, this works out. It is amazing what you can achieve by pasting things you don’t understand that were put out there by people who do know what they are doing.
I am not having a go at Stack Overflow here. It is an incredible resource, and it is hard to create a community like this without drowning in spam and mediocrity (trust me, I am an admin on several technical Facebook groups).
We had that problem for a long time. I challenge anyone learning PHP not to simply copy the code examples in the notes. W3Schools for years gave us the answers we wanted, but didn’t need. Heck, even Matt’s Script Archive is probably the source of many a spam mailer, as people used formmail.pl without knowing what it does.
I am, however, worried about how rampant this behaviour is today. Of course, it is understandable:
- Creating something is more fun than reading up on how to create something
- Using something that works immediately, even if you don’t know how it does it, feels better than encountering the frustration of not being able to fix something.
- You feel like you cheated the system – shortcuts are fun, and make you feel like you’re cleverer than all those chumps who spend all this time learning
- Our job has gone mainstream and there is a massive need for developers. The speed at which we are asked to deliver has increased massively. People want results quicker, rather than cleaner.
We, as a community, are partly to blame for breeding this kind of developer:
- When we answer questions, we tend to give the solution instead of analysing what the person really needs. The latter is much more work, so we tend to avoid it.
- Posting the “one true solution” and winning a thread on Stack Overflow feels great – even if we have no plan whatsoever to come back to it later, should it turn out not to be such a good idea any longer because the environment has changed
- Getting recognition, Karma and upvotes for giving the solution is much easier than getting it for being the person who asks the right questions to get to the source of the problem
- It is easy to lose patience with getting the same questions over and over again and a “just use jQuery” is easy to paste
Of course, you can call me a grumpy old so-and-so now and tell me that learning the basics of software is an outdated concept. The complexity of today’s products makes it almost impossible to know everything, and in other, highly successful environments, using lots of packages and libraries is par for the course. Fine – although we seem to be realising that software as a whole might be more broken than we care to admit, and this might be one of the causes.
There are several problems with full stack overflow development:
- It assumes the simplest answer with the most technical detail is the best. This is dangerous: copied-and-pasted examples with serious issues can survive for years and years on the web.
- When the most copy-able answer keeps being used, upvoted and linked to, better solutions that fix its issues are much less likely to replace it. There is no “digging deeper”, so even important fixes will fall under the radar.
- Any expert community is likely to carry a lot of assumptions about what makes up a “professional environment”. That means that answers given in those communities are very likely to work in a great, new and complex developer setup, but are not necessarily useful for our end users out there. It is very easy to add yet another library, npm package or bootstrap solution to a project, but it adds to the already full-to-the-brim landfill of outdated code on the web.
- It perpetuates the fondness we have for terseness over writing understandable code. The smallest solution – however unreadable – is always the one that wins the thread. As people copy and paste it without understanding, that’s fine by them. For debugging and maintenance, however, it is the worst solution. A great example of this is the use of || for default parameters. Short isn’t better, it is just less work.
- It cheapens our craft. If we, as the people delivering the work, don’t have any respect for it and simply put things together and hope they work, then we shouldn’t be surprised if our managers and peers don’t see us as professionals.
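The || default-parameter trap mentioned above is easy to demonstrate. Here is a small sketch of mine in Python, where `or` behaves just like JavaScript's ||: any falsy value, including a perfectly valid 0, triggers the fallback.

```python
def repeat(text, times=None):
    # Terse idiom, analogous to JavaScript's `times = times || 3`:
    # any falsy argument -- including the valid value 0 -- gets replaced.
    times = times or 3
    return text * times

def repeat_explicit(text, times=None):
    # More verbose, but correct: only substitute when no value was given.
    if times is None:
        times = 3
    return text * times

print(repeat("x", 0))           # "xxx" -- the caller's 0 was silently dropped
print(repeat_explicit("x", 0))  # "" -- the caller's 0 is respected
```

The short version is the one that wins the thread; the explicit one is the one you can still debug six months later.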
The biggest problem, however, is that it is bad for the developers themselves.

Finding pride in your work is the biggest reward
Going on the web, finding a solution, and copying and pasting it is easy – too easy. There is no effort in it, and it is not your work – it is someone else’s. Instead of being proud of what you achieved, you are more likely to stress out, as you don’t want to be found out as a phoney who uses other people’s work and sells it as their own.
Repetition is anathema to a lot of developers. Don’t repeat yourself (DRY) is a good concept in code, but for learning and working it is a terrible idea. We need repetition to build up muscle memory. The more you repeat a task, the easier it gets, until your body does it without you having to think about it.
When you started driving a car, you probably sat down in the seat and got utterly overwhelmed by all the gears, levers, pedals and things to pay attention to. After a while, you don’t even think about what you are doing any longer, and even switching from a UK car to another is not an issue. Failing and learning from it is something we retain much better than simply using something. We put more effort in, so it feels more worthy.
Dan Ariely’s TED Talk “What makes us feel good about our work” has some incredibly good points about that topic:
Recognition is what we crave as humans. And we can’t get recognition if we don’t own what we do. You can totally get by copying and pasting and using solution after solution and abstraction upon abstraction. But, sooner or later, you will feel that you are not achieving or creating anything. Many developers who worked like that burned out quickly and stopped developing, stopped caring. And that is a big loss as you might be the person to come up with the next great idea that changes things for a lot of us. That’s an option you shouldn’t rob yourself of. Don’t be a full stackoverflow developer. You deserve to be better.
Last Thursday we had our weekly call about the Reps program, where we talk about what’s going on in the program and what Reps have been doing during the last week.
- Webmaker tools.
- Featured events.
- What’s Council up to this week?
- Whistler Recap take two.
Shoutouts to the Uganda Community, all Muslim Mozillians and Nuke.

Webmaker tools
Bobby (@secretrobotron) joined the call to remind us that there are plans to take down Appmaker & Popcorn Maker, but to simply “move” Thimble & Goggles. Don’t expect the former two to be accessible after August 31st.
They are building a way for users to access their old makes – these will all be stored indefinitely.
Reach out to Bobby if you have more questions.

Featured events
- MozFest East Africa (17th to 19th). Kampala, Uganda.
- Hack in India (18th, 19th). Bangalore, India.
- OSCON (21st, 24th). Portland, USA.
- Campus Party Mexico (22nd to 26th). Zapopan, Mexico.
- Campus Party Recife (22nd to 26th). Olinda, Brazil.
Don’t forget to add your event to Discourse and share some photos, so they can be shared on the Reps Twitter account.
What’s up with Council this week?
These are some of the initiatives Council is working on this week:
- Brainstorm about recognition.
- RoTM June blog post in the works.
- Mentor selection process 2.0 (shared on reps-general)
- Blog post about Reps work at Whistler Work Week.
We have some videos and articles about this Work Week that we recommend checking out.
- Videos and keynotes on Air Mozilla (some require a Mozillians NDA)
- Participation team summary blog post.
More blog posts about Whistler:
Don’t forget to comment about this call on Discourse and we hope to see you next week!
Earlier this week, I landed some changes to the Firefox development environment that aggressively make mach prompt to run mach mercurial-setup. Full details in bug 1182677.
As expected, the change resulted in a fair amount of whining and moaning among various Firefox developers. I wanted to take some time to explain why we moved forward, even though we knew not everyone would like the feature.
My official job title at Mozilla is Developer Productivity Engineer. My job is to make you do your job better.
I've been an employee at Mozilla for four years and in that time I've witnessed a surprising lack of understanding around version control tools. (I don't think Mozilla is significantly different from most other companies here.) I find that a significant number of people are practicing sub-optimal version control workflows because they don't know any better (common) or because they are unwilling to change from learned habits.
Furthermore, Mercurial is a highly customizable tool. A lot of Mozillians have spent a lot of time developing useful extensions to Mercurial that enable Mozillians to use Mercurial more effectively and to thus become more productive. The latest epic time-saving hack is Nick Alexander's work to make Fennec build 80% faster by having deep integration with version control.
mach mercurial-setup is almost two years old. Yet, when assisting my fellow Mozillians with Mercurial issues, my "have you run mach mercurial-setup?" question is still often met with blank stares followed by "wait, there's a mach mercurial-setup?!" What's even more frustrating is people wrongly believing that Mercurial can't do things like rebasing and then spreading misinformation about Mercurial's supposed shortcomings. (Mercurial ships with many advanced features disabled out of the box so new users don't footgun themselves.)
Just like Firefox would be irrelevant if it didn't have millions of users, your awesome tool is mostly irrelevant if you are its only user. That's why, when I hear someone say they created an amazing tool for themselves, or modified a third-party tool without sending the improvements upstream, my blood pressure rises a little. It rises because this person did something awesome, yet only they – or the limited subset of people who happened to be following them on Twitter or reading their blog at that point in time – managed to a) know about the tool and b) take the effort to install it. The uptake rate is insanely low and the return on investment for that tool is low. It results in duplication of effort. I find this painfully frustrating because I want everyone to have easy access to the best tools available. This requires that tools are well advertised and easy to install and use.
The primary goal of mach mercurial-setup is to make it super easy for anyone to have an optimal Mercurial experience. It was apparent to me that despite mach mercurial-setup existing, numerous people didn't know it existed or weren't using it. Your awesome tool isn't very awesome unless people are using it. And a lot of the awesome tools people have built around Mercurial at Mozilla weren't being utilized and lots of productivity wins were thus being unrealized. Forcefully pushing mach mercurial-setup onto people is thus an attempt to unlock unrealized productivity wins and to make people happier about the state of their tools.
I'm not thrilled that mach's prompting to run mach mercurial-setup is as disruptive as it is. It's bad user experience. I know better. But, (and this is explained somewhat in the bug), other solutions are more complicated and have other gotchas. The current, invasive implementation was the easiest to implement and has the biggest bang for the buck in terms of adoption. We knew people would complain about it. But from my perspective, it was do this or do nothing. And nothing hadn't been very effective. So we did something.
There has been lots of feedback about the change this week. Most surprising to me is the general sentiment of "I don't want something automatically changing my hgrc file." I find this surprising because mach mercurial-setup puts the user firmly in control by prompting before doing anything, thus respecting user choice and avoiding gotchas and unwanted changes. It's clear this property needs to be advertised a bit more so people aren't scared to run mach mercurial-setup and don't spread fear, uncertainty, and doubt about the tool to others. (I also find it somewhat surprising people would think this in the first place: I'd like to think we'd implicitly trust most Mozillians to implement tools that respect user choice and don't do malicious things.)
Like all software, things can and will change. The user experience of this new feature isn't terrific. We'll iterate on it. If you want to help enact change, please file a bug in Core :: mach (for now) and we'll go from there.
Thank you for your patience and your understanding.
I’m writing a couple of blog posts today. This first is a belated note about my work on CRM for MoFo, and how I ended up doing this.
Slides from my presentation to MoFo on our All Staff call in June.
In the second quarter of the year, my Metrics work was pretty quiet while we were prototyping the new Webmaker Android App, and the Learning Networks team was in planning mode for Mozilla Clubs. There was some strategic work to do, but at this stage in the product life-cycle, data-driven decision making isn’t a useful tool. I never actually ran out of things to do, but was keen to spend my time on things that had the most impact.
So I was looking around for projects to help with. Talking to David Ascher, I explained that the projects that engaged me most were the complex ones that combined the needs and views of many different teams. This was also a moment of realisation for me that this was true of every job I’ve held. I like connecting things, including differing points of view.
The MoFo CRM project has been on the table(s) for a while now, but it never gained momentum for legitimate organisational reasons. All our teams needed some form of CRM, but even those with the biggest requirements didn’t have spare capacity to supply CRM tools to the rest of the teams. The more a team tried to coordinate with others, the more complex it was to solve for their own use case. It was everyone’s problem, and no-one’s problem.
So my proposal was to have a ‘product manager’ to look after CRM as an internal service to the org: centralise ownership of the complexity rather than making it everyone’s problem. That way teams can think about the benefits of using the CRM rather than the complexity of building it. And after reviewing the plan with our Ops leadership, I picked up this task.
It’s been a couple of months since then, and I’ve had hundreds of hours of conversations with people across Mozilla about CRM. The project is living up to my request of ‘complex’, but I’m also pleased we’ve started doing the work. Although CRM includes more than its fair share of ‘Enterprise IT’, we’re keeping our workflow in line with the agile methods we apply to our own products and projects.
But it’s a difficult project to track, with many plates that need to keep spinning. I noticed this most after being offline with my family for two weeks then coming back to work. It took me a few days to get up to speed on each of the CRM pieces. So this week I’ve been working on documentation that’s easier to follow.
The project is now split into seven sub-projects. The current status of each, and the next steps, with links to GitHub issues for tracking and discussion, can now all be found in one place. Building on Matt Thomson’s hard work organizing many Mozilla things, I’m using this wiki/etherpad combo as my working doc: CRM Plan of Record.
At the end of the day on July 14th, 2015, the certificate that Rust’s buildbot slaves were using to communicate with the buildmaster expired. This broke things. The problem started at midnight on July 15th, and was only fully resolved at the end of July 16th. Much of the reason for this outage’s duration was that I was learning about Buildbot as I went along.
Here’s how the outage got resolved, just in case anyone (especially future-me) finds themself Googling a similar problem.
David Baron put this up in Mozilla’s San Francisco office a while back:
This is a cute way of saying that writing safe concurrent code is, at present, rocket science. This is unfortunate, because the future of computing is shaping up to be all about concurrency. Fundamental engineering constraints like power usage are steering microprocessor manufacturers away from single-core architectures. If the fastest chips have N cores, a mostly-single-threaded program can only harness (1/N)th of the available resources. As N grows (and it is growing), parallel programs will win.
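That "(1/N)th" intuition is captured by Amdahl's law (a standard result, not from the original post): if a fraction p of a program's work can be parallelized across N cores, the best possible speedup is

```latex
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
```

so a mostly-single-threaded program (small p) is capped near 1/(1-p) no matter how many cores you throw at it, which is exactly why parallel programs win as N grows.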
We want to win, but we’ll never get there if it’s rocket science (despite the industry-leading density of rocket scientists hacking on Gecko). Buggy multi-threaded code creates race conditions, which are the most dangerous and time-consuming class of bugs in software. And while we have some new superpowers to help us react when they inevitably occur, debugging racey code is still incredibly costly. To succeed, we need to prevent races from happening in the first place.
Why do races occur? Opinions differ, but I argue the following:
Races are endemic to most large software projects because the traditional synchronization primitives are inadequate for large-scale systems.
Let me explain.
Small-scale systems are easy to build and maintain. So long as the details can all fit in the heads of a small number of programmers, it’s relatively easy to shuffle things around to meet requirements and verify that all the pieces interact properly.
Large-scale systems are a different story. Many cooks in many interdependent kitchens necessitate strong, assertable rules that allow programmers to reason about the unknowable. These rules provide a baseline level of order, but to be truly useful, they need to be predictable: different programmers should be able to invent similar or identical rules by deriving them from a small set of core principles, such that everyone can make reasonable predictions about the high-level behavior of code they haven’t read.
Software engineering at this level is an art, whose core mission is to find the right abstraction - one that naturally offers guidance and solutions for the problems that need to be solved (especially the ones that don’t exist yet). The wrong abstraction is painful and error-prone. The right one is a never ending stream of goodness from which all answers flow.
Locks don’t lend themselves to these sorts of elegant principles. The programmer needs to scope the lock just right so as to protect the data from races, while simultaneously avoiding (a) the deadlocks that arise from overlapping locks and (b) the erasure of parallelism that arises from megalocks. The resulting invariants end up being documented in comments:

// These variables are protected by monitor X: ...
// These variables are only accessed on thread Y: ...
And so on. When that code is undergoing frequent changes by multiple people, the chances of it being correct and the comments being up to date are slim.
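To see how fragile comment-documented invariants are, consider this small sketch of mine (Python for brevity): the rule that `counts` is protected by `lock` lives only in a comment, and nothing stops a later change from quietly bypassing it.

```python
import threading

lock = threading.Lock()
# Invariant (documented only here): `counts` is protected by `lock`.
counts = {}

def record(key):
    # Correct: every read-modify-write of `counts` happens under the lock.
    with lock:
        counts[key] = counts.get(key, 0) + 1

def record_racy(key):
    # A later edit that misses the comment still runs and usually works,
    # but this unlocked read-modify-write can race with other threads.
    counts[key] = counts.get(key, 0) + 1

threads = [threading.Thread(target=record, args=("hits",)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counts["hits"])  # 8
```

Nothing in the language distinguishes `record` from `record_racy`; only the comment does, and comments don't fail the build.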
There’s a familiar story that has repeated itself many times throughout Gecko’s history:
- Engineering leadership sees benefits to accessing some component on multiple threads, and kicks off an effort to make it thread-safe.
- The component becomes incredibly complex and difficult to maintain. Quality suffers, engineers avoid touching it, and improvements slow to a trickle.
- The component is now plagued with problems. The owner comes up with an elegant new design that solves most of the problems, but needs to forbid multi-threaded access to make it work. This is deemed a good trade-off by everyone, and the component becomes non-thread-safe once again.
At first glance, this constant retreat from thread-safety by seasoned programmers looks pretty grim for a multi-core future. However, these programmers aren’t fleeing concurrency itself - they’re fleeing concurrent access to the same data. That is to say, safe and scalable parallelism is achievable by minimizing or eliminating concurrent access to shared mutable state.
In this approach, threads own their data, and communicate with message-passing. This is easier said than done, because the language constructs, primitives, and design patterns for building system software this way are still in their infancy. Rust is designed from the ground up to facilitate this, and uses its type system to enforce single ownership of data. We’re already using some Rust in Gecko, but we’re not going to be rid of C++ anytime soon. So it’s critical to explore ways to incrementally add safe concurrency in C++.
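To make the thread-owns-its-data idea concrete, here is a minimal sketch of mine (in Python rather than Rust or C++, purely for brevity): a single worker thread owns the state, and every other thread talks to it only through a message queue.

```python
import queue
import threading

def owner(inbox, results):
    # This thread is the sole owner of `totals`: no locks are needed,
    # because no other thread ever touches the dict directly.
    totals = {}
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        kind, key = msg
        if kind == "add":
            totals[key] = totals.get(key, 0) + 1
        elif kind == "get":
            results.put(totals.get(key, 0))

inbox, results = queue.Queue(), queue.Queue()
worker = threading.Thread(target=owner, args=(inbox, results))
worker.start()

# Other threads communicate only by sending messages.
for _ in range(5):
    inbox.put(("add", "jobs"))
inbox.put(("get", "jobs"))
total = results.get()
inbox.put(None)
worker.join()
print(total)
```

Because the queue serializes all requests, the "get" cannot race with the "add"s; the shared-mutable-state problem simply never arises.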
During the first half of this year, I did a tour of duty with the Multimedia Playback Team to help rebuild the heavily-threaded decoding and playback pipeline to be less racy. To solve the problems I encountered there, I built some new tools and primitives that ended up being game-changers in our ability to easily write correct concurrent code.
More on that next time.
Yesterday, the European Union moved one step closer to enacting real net neutrality across the continent. The European Parliament’s Industry, Research, and Energy Committee (ITRE) approved an agreement on the Telecom Single Market Regulation (TSM), after drawn-out negotiations between the three EU policymaking branches: the Parliament, the Council, and the Commission. This draft legislation includes proposed rules specifying that all traffic must be treated equally, as well as rules prohibiting paid prioritization and blocking.
The ITRE Committee will vote in the fall to formally adopt the text and it will then move to the full Parliament plenary for a final vote. However, amendments can be offered before both the ITRE vote and the plenary vote, and the European Council (the body representing EU member states) must also ratify the final text before it becomes law.
While the current rules are ambiguous in places and give significant deference to national regulatory authorities, overall this is a significant step to protect the open Internet in Europe. We urge European policymakers to finish strong and enact clear, enforceable rules against blocking, discrimination, and fast lanes.
After years of negotiations, the EU Telecom Single Market Regulation (which includes proposed net neutrality rules) is nearing completion. If passed, the Regulation will be binding on all EU member states. The policymakers – the three European governmental bodies: the Parliament, the Commission, and the Council – are at a crossroads: implement real net neutrality into law, or permit net discrimination and in doing so threaten innovation and competition. We urge European policymakers to stand strong, adopt clear rules to protect the open Internet, and set an example for the world.
At Mozilla, we’ve taken a strong stance for real net neutrality, because it is central to our mission and to the openness of the Internet. Just as we have supported action in the United States and in India, we support the adoption of net neutrality rules in Europe. Net neutrality fundamentally protects competition and innovation, to the benefit of both European Internet users and businesses. We want an Internet where everyone can create, participate, and innovate online, all of which is at risk if discriminatory practices are condoned by law or through regulatory indifference.
The final text of European legislation is still being written, and the details are still gaining shape. We have called for strong, enforceable rules against blocking, discrimination, and fast lanes, which are critical to protecting the openness of the Internet. To accomplish this, the European Parliament needs to hold firm to its five votes in the last five years for real net neutrality. Members of the European Parliament must resist internal and external pressures to build in loopholes that would threaten those rules.
Two issues stand out as particularly important in this final round of negotiations: specialized services and zero-rating. On the former, specialized services – or “services other than Internet access services” – represent a complex and unresolved set of market practices, including very few current ones and many speculative future possibilities. While there is certainly potential for real value in these services, absent any safeguards, such services risk undermining the open Internet. It’s important to maintain a baseline of robust access, and prevent relegating the open Internet to a second tier of quality.
Second, earlier statements from the EU included language that appeared to endorse zero-rating business practices. Our view is that zero-rating as currently implemented in the market is not the right path forward for the open Internet. However, we do not believe it is necessary to address this issue in the context of the Telecom Single Market Regulation. As such, we’re glad to see such language removed from more recent drafts and we encourage European policymakers to leave it out of the final text.
The final text that emerges from the European process will set a standard not only for Europe but for the rest of the world. It’s critical for European policymakers to stand with the Internet and get it right.
Chris Riley, Head of Public Policy
Jochai Ben-Avie, Internet Policy Manager
School year 2014-2015 is ending. It’s time for a brief report.
Session Restore
As I announced last year, I am mostly inactive on Session Restore these days. However, I am quite happy to have landed « Bug 883609 – Make Backups Useful ». This has considerably improved the resilience of Session Restore against a variety of accidents.
Besides this, I have mostly been reviewing and mentoring a few contributors.
For Q3, I will try to help Allasso (one of our contributors) land bug 906076, which we hope can significantly improve startup speed for users with many tabs or tab groups.
My biggest code contribution for the past 6 months is Performance Monitoring. Firefox Nightly now has a module (PerformanceStats.jsm/nsIPerformanceStats) dedicated to monitoring the performance of Firefox, webpages and add-ons in real-time. While there are a number of improvements I wish to land, this is already powerful enough to implement about:performance (a top-like utility for Firefox), the slow add-on watcher (which has progressively grown into something actually useful), and slow add-on Telemetry.
I am currently working on making measurements faster and further decreasing the number of false alerts, as well as ensuring that everything works nicely with e10s. Oh, and I have a new UX design for about:performance, which I hope you will like.
Also, I have passed on the data to the AMO team so that they can start publishing the performance of add-ons to their authors.
I have been less active on Async Tooling recently, in large part because most of the tooling we needed has landed, and the rest is now in the hands of the DevTools team. My main contribution has been DOM Promise Uncaught Error monitoring, which was both one of the blockers to port our code from Promise.jsm to DOM Promise, and a primitive necessary for the DevTools team.
My second contribution was modifying our (then) reference implementation of Promise, all our test suites, and quite a number of individual tests to handle the case of uncaught asynchronous errors. I have had relatively little feedback on this, but it helps me with almost every single patch I write.
Other than that, I have landed the PromiseWorker, which is designed to make using ChromeWorkers simpler, and I have both landed and mentored a number of improvements to Sqlite.jsm, in particular error-reporting and clean async shutdown, as well as maintenance fixes to OS.File.
I do not have specific plans for the near future of Async Tooling.
One year ago, I joined the effort to overhaul Places, our implementation of bookmarks, history, keywords, etc. Unfortunately, this effort is far from over, as all the participants (starting with my reviewer) keep being preempted for higher-priority work. However, we did manage to land a number of improvements. I contributed History.jsm, a (not yet complete) reimplementation of the History API, with a nicer API and off-main-thread database access.
I also refactored Places to handle the asynchronous shutdown sequence.
In the near future, I plan to finish and land a non-blocking reimplementation of the Forget button (and Sanitize dialog). I do not have other short-term plans for Places.
Shutdown has kept me quite busy these past 12 months. On one hand, AsyncShutdown has been improved, made easier to use and to debug post-mortem, and extended to support async shutdown of C++-based features. On the other hand, I have implemented a Dashboard to track AsyncShutdown timeouts and trends, which has saved my life a few times – in particular, when I fixed the crashes caused by Avast, and when I helped pinpoint and fix the topcrashers for Firefox 33.
Also, I landed the Shutdown Terminator, which turns shutdown hangs into actionable crashes, and also lets us track the duration of successful shutdowns (hint: if it takes more than 10 seconds, you’re as good as crashed).
I do not have short-term plans for Shutdown, although if I find time, I might try and make it crash a bit faster.
Community also ate plenty of my time. My main involvement was mentoring a number of bugs (I lost track of the number, probably 10-20) and welcoming new potential contributors both on #introduction and through the contribute form. As a note, while mentoring is almost always a pleasure, welcoming new contributors is a huge time sink with very little return on effort. More on this in another blog post.
I also dedicated time and effort to Teaching Open-Source to University students, with mixed results. Students who had signed up by themselves gave me great feedback, while students who had been assigned to the course without having any choice did not prove as pleasant to work with. While I hope to reproduce the experience eventually, this will not be with the same University.
The French Firefox OS launch, where I presented (and actually sold) the first Firefox OS phones to entirely non-technical crowds, was also an interesting experience. I don’t know if it was useful in the end, but it was certainly fun.
Finally, while I haven’t found a good place to mention it, this year will be remembered also for Je Suis Charlie, both the initial terrorist attacks and the entirely predictable law on mass surveillance that just passed in France.
In the near future, I plan to continue mentoring bugs, but I will be less active on the contribute form – rather, I am lending a hand to the Participation team’s effort to replace the contribute form with something much better.
It is with a heavy heart that I announce my resignation from Mozilla. Last month marked my 5th year here, and over that time I’ve met some of the most intelligent and driven people in the world. I’m proud to have known you and worked alongside you these years.
I am leaving my responsibilities in the capable hands of my teammates. Although I will no longer be here, the work will still get done.
I’d like to thank all of you who helped me along the way. In particular, the release engineering team for introducing me to the reality of operations at an impressive scale. I’d also like to thank IT for teaching me how large of a scope an org can have, and for civilizing this operations cowboy. I also owe a great appreciation and shout-out to my teammates in Developer Services (especially fubar and hwine) who have had my back through some rough outages.
Lastly, I’d like to thank my managers for giving me direction and always keeping me on course:
Post-Mozilla I’ll be moving on to other software development and operations work. Since free software is one of my passions you’ll certainly see me around. If you’re curious as to what I’m up to next feel free to send me a private message.
Feel free to reach out to me on IRC, Facebook, Twitter, or in meatspace. If you see me at a conference, don’t hesitate to come say hello. My personal email address is email@example.com.
My last day will be Friday (2015-07-17).
Senior Systems Administrator, Developer Services
Making the web more private (maybe) -- from refer(r)er policies: Franziskus Kiefer introduces referrer attributes for navigation and embedding elements that allow fine-grained control...
Anhad Jai Singh talks about why migrations don't suck if done properly, how to do them, and what you gain from them.
Two San Francisco interns will be presenting the projects they worked on this summer. Franziskus Kiefer: Making the web more private (maybe) -- from refer(r)er...
Michael Layzell, a Mozilla intern working in Toronto, talks about eradicating Flash from GitHub, preparing permissions for the future, and ensuring the correctness of our...