Tomorrow morning the first ever Ascend Project kicks off in Portland, OR. I just completed a month-long vacation where we drove from San Francisco out to the Georgian Bay, Ontario (with a few stops along the way including playing hockey in the Cleveland Gay Games) and back again through the top of the US until we arrived here in Portland. I’m staying in this city for 6 weeks, will be going in to the office *every* day, and doing everything I can to guide & mentor 20 people in their learning on becoming open source contributors.
Going to do my best to write about the experience as this one is all about learning what works and what doesn’t in order to iterate and improve the next pilot which will take place in New Orleans in 2015. It’s been almost a year since I first proposed this plan and got the OK to go for it. See http://ascendproject.org for posts on the process so far and for updates by the participants.
Last week I convened a small, cross-functional team for a half hour debrief of the work we’d done together on last month’s Net Neutrality trainings and tweetchat. The trainings and tweetchat were largely successful efforts, but this debrief was to discuss the process of working together.
Here’s how we did it:
- First I sent around an etherpad with some questions. There was a section for populating a timeline of the entire process from conception to completion. And there were sections for capturing what worked well, and what people felt could be improved upon.
- As people added their thoughts to the etherpad, it became clear to me that a Vidyo chat would be useful. There were differences of opinion and indications of tension that I felt ought to be surfaced and discussed.
- Everyone took 30 minutes out of their busy schedules to meet over Vidyo, which I totally appreciated! I started the meeting by stating my goal which was to reach a shared agreement about two or three concrete things we would try to do more of or less of in the future.
- I would have loved to have had a full hour, as I felt we were just starting to surface the real issues near the end of the call. It felt a little strange to have to cut off the conversation right when we were getting into it.
- In the short time we had, we were able to touch on what were probably the most salient points from the pad, and everyone had a chance to speak. We also identified four concrete things to do differently in the future. By those measures, I think the debrief was successful.
- Some additional takeaways were shared via email after the call, and I think everyone is committed to making this the start of an ongoing process of continuous improvement.
I called this a “debrief” because it was a relatively unstructured conversation looking back at the end of a project. In my mind, a debrief is one flavor of a larger category of what I’d call “retrospecting behaviors.”
Here are some thoughts about what makes a good retrospective:
You don’t need to save retrospecting for the end. Retrospectives are different from post-mortems in this way. You can retrospect at any point during a project, and, in fact, for teams that work together consistently, retrospectives can be baked into your regular working rhythm.
First things first: start with a neutral timeline. It’s amazing how much we can forget. Spend a couple minutes re-creating an objective timeline of what happened leading up to the retrospective. Use calendars, emails, blog posts, etc. to re-create the major milestones that occurred.
Bring data. If possible, the facilitator should bring data or solicit data from the team. Data can include so many things! Here are just a few examples:
- Quantitative and qualitative measures of success.
- Data about how long things took to finish.
- Subjective experiences: each team member’s high point and low point. One word or phrase from each team member describing their experience.
Be ready for the awkward. For a breakthrough to happen, you often have to go through something uncomfortable first. No one should feel unsafe or attacked, of course, but transformation happens when people have the courage to speak and hear painful truths. Not every retrospective will feel like a group therapy session, but surfacing tensions in productive, solution-oriented ways is good for teams.
Despite their name, retrospectives are about the future. The outcome of any retrospective (whether it’s a team meeting, or 5 minutes of solo thinking time at your desk) should be at least one specific thing you’d like to do differently in the future. Make it visible to you and your teammates.
A “Do Differently” is a specific and immediately actionable experiment. Commit to trying something different just for a week. Because the risk is low (it’s just a week!), you can try something pretty dramatic. Choose something you can start right away. “Let’s try using Trello for a week” or “Let’s see if having a 10-minute check-in each morning reduces confusion.”
Retrospectives often also inspire one-time actions and new rules. One-time actions are things like, “We need to do a CRM training for the team” or “We should update our list of vendors because no one knew who to call when we ran into trouble.” New rules are things like, “We should start every project with a kick-off meeting, no matter how small the project is.”
Both one-time actions and new rules are important, and should be captured and assigned a responsible person. But they are not the same as “Do Differentlys” which are meant to create a culture of experimentation that is necessary for continuous improvement.
It’s not about how well you followed a process; it’s about how well the process is serving the goals. This is another difference between retrospectives and post-mortems. Whereas in a post-mortem, you might be discussing what you did “right” and “wrong” (i.e. how well you adhered to some agreed upon rules or norms), in a retrospective you discuss what “worked” and “didn’t work” (which might lead to changing those norms).
Celebrate. Retrospectives are occasions to recognize the good as well as the bad. I won’t lie. Some of my favorite retrospectives involved cake.
What would you add to or change about the above list?
Just a quick note to let folks know that the Developer Services team continues to make improvements on Mozilla’s Mercurial server. We’ve set up a status page to make it easier to check on current status.
As we continue to improve monitoring and status displays, you’ll always find the “latest and greatest” on this page. And we’ll keep the page updated with recent improvements to the system. We hope this page will become your first stop whenever you have questions about our Mercurial server.
A few years ago we deployed a landfill for AMO – a place where people could play around with the site without affecting our production install and developers could easily get some data to import into their local development environment.
I think the idea was sound (it was modeled after Bugzilla’s landfill) and it was moderately successful, but it never grew as I had hoped, and even after being available for so long, it had very little usable data in it. It could help you get a site running, but if you wanted to, say, test pagination, you’d still need to create enough objects to actually have more than one page.
A broader and more scalable alternative is a script which can make as many or as few objects in the database as you’d like. Want 500 apps? No problem. Want 10000 apps in one category so you can test how categories scale? Sure. Want those 10000 apps owned by a single user? That should be as easy as running a command line script.
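That script doesn’t exist yet, but here is a rough sketch of what it might look like. Everything in it – the function names, the flags, the field names – is a hypothetical illustration of the idea, not the actual tool being discussed on the wiki:

```python
import argparse
import itertools


def generate_apps(count, category=None, owner=None):
    """Generate `count` fake app records, optionally pinning every
    record to a single category and/or a single owner."""
    categories = itertools.cycle(["games", "productivity", "social", "utilities"])
    apps = []
    for i in range(count):
        apps.append({
            "name": "Test App %d" % i,
            "category": category or next(categories),
            "owner": owner or "user-%d@example.com" % (i % 50),
        })
    return apps


def main():
    parser = argparse.ArgumentParser(
        description="Seed the dev database with fake apps.")
    parser.add_argument("--apps", type=int, default=500,
                        help="number of apps to create")
    parser.add_argument("--category",
                        help="put every app in this one category")
    parser.add_argument("--owner",
                        help="assign every app to this one user")
    args = parser.parse_args()
    apps = generate_apps(args.apps, args.category, args.owner)
    # A real implementation would save each record through the ORM;
    # here we just report what was generated.
    print("generated %d apps" % len(apps))
```

So “10000 apps in one category owned by a single user” becomes something like `--apps 10000 --category games --owner someone@example.com`. The interface shape – pick a count, optionally pin a category or an owner – is the point.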
That’s the theory anyway. The idea is being explored on the wiki for the Marketplace (and AMO developers are also in the discussion). If you’re interested in being involved or seeing our progress watch that wiki page or join #marketplace on irc.mozilla.org.
First, a personal note:
Holy frickity-frak! It’s September!?
Okay, back to business. My work this week was all over the place. Got tons done but, of course, not all of what I meant to do. That said, I did actually make progress on the stuff I’d planned to do this week, so that’s something, anyway.
I love this job. The fact that I start my week expecting one awesome thing, and find myself doing two totally different awesome things instead, is pretty freaking cool.

What I did this week
- Filed bug 1061624 about the new page editing window lacking a link to the Tagging Guide next to the tag edit area.
- Followed up on some tweets reporting problems with MDN content; made sure the people working on that material knew about the issues at hand, and shared reassurances that we’re on the problem.
- Tweaked the Toolbox page to mention where full-page screenshots are captured in both locations where the feature is described (instead of just the first place). Also added additional tags to the page.
- Had a lot of discussions, both by video and by email and IRC, about planning and procedures for documentation work. A new effort is underway to come up with a standard process.
- Submitted my proposal for changes to our documentation process to Ali, who will be collating this input from all the staff writers and producing a full proposal.
- Checked the MDN Inbox: it was empty.
- Experimented with existing WebRTC examples.
- Moved some WebRTC content to its new home on MDN.
- Filed bug 1062538, which suggests that there be a way to close the expanded title/TOC editor on MDN, once it’s been expanded.
- Fixed the parent page links for the older WebAPI docs; somehow they all believed their parent to be in the Polish part of MDN.
- Corrected grammar in the article about HTMLMediaElement, and updated the page’s tags.
- Filed a bug about search behavior in the MDN header, but it was a duplicate.
- Discovered a privacy issue bug and filed it. A fix is already forthcoming.
- bz told me that previewing changes to docs in the API reference results in an internal service error; I did some experimenting, then filed bug 1062856 for him. I also pinged mdn-drivers since it seems potentially serious.
- Discovered an extant, known bug in media streaming which prevents me from determining the dimensions of the video correctly from script. This is breaking many samples for WebRTC.
- Went through all pages with KumaScript errors (there were only 10). All but one were fixed with a shift-refresh. The last one had a typo in a macro call and worked fine after I fixed the error.
- Expanded on Florian’s Glossary entry about endianness by adding info on common platforms and processors for each endianness.
- Filed bug 1063560 about search results claiming to be for English only when your search was for locale=*.
- Discovered and filed bug 1063582 about MDN edits not showing up until you refresh after saving. This had been fixed at one point but has broken again very recently.
- Started designing a service to run on Mozilla’s PaaS platform to host the server side of MDN samples. My plan is nifty and I’ll share more about it when I’m done putting rough drafts together.
- Extended discussions with MDN dev team about various issues and bugs.
- Helped with the debugging of a Firefox bug I filed earlier in the week.
- #mdndev planning meeting
- 1:1 with Jean-Yves
- 1:1 meeting with Ali
- Writers’ staff meeting
- Compatibility Data monthly meeting
- #mdndev weekly review meeting
- Web API documentation meeting; only myself, Jean-Yves, and Florian attended but it was still a viable conversation.
A good, productive week, even if it didn’t involve the stuff I expected to do. That may be my motto: I did a lot of things I didn’t expect to do.
Making implicit information explicit allows us to grow. We are able to recognize and add to something that works well, while focusing less on what doesn’t work well. Being explicit allows us to talk about something we do and/or experience – it allows this information to be shared and understood by others. When we focus on value and impact, we must be explicit in order to understand what is happening.
During my work on the Community Building Team (CBT) at Mozilla, I have been exposed to several themes of how the team works when success happens. Intrinsically, these are the agreed upon ways by which we do our work. Extrinsically, these are the principles by which we do our work.
I cannot claim to be the single voice for these principles on our team – that would not be Mozilla-like. However, these are things I have been exposed to by working with and reading about the work of all members of the team.
- Build Understanding – Demonstrate competence. Seek first to understand. Every engagement is different. We care about people and doing the right thing for them. In order to best help them, we are curious.
- Build Connections – Be a catalyst for connection. Our team has a broad reach in the organizations. Sometimes the best way we can build is by connecting what is already there.
- Build Clarity – This is important when bringing more people into a project. We seek to navigate through the confusion to create clarity for us, our partners and the community.
- Build Trust – This is about having someone’s back. It’s important that the people we work with know that we are in this with them, together.
- Build Pilots – Our work is not one-size-fits-all. We care about the best solution, so we test our assumptions to see what works and build from there.
- Build Win-Win – Focus on mutual benefit. We engage in win-win partnerships because our success is dependent on others. More people can only sustainably come into a project when it’s mutually beneficial. We want to make our partners look good.
Having these principles allows other people and teams to understand how the CBT works and what things are valued when doing that work. It allows members of the team to have a toolkit to reference when entering into a new engagement and builds a level of consistency about interaction – creating clear expectations for others. All this leads to the sustainable success of the CBT.
I’ve placed these into a nice PDF format below.
I’ve decided to start a blog series documenting my workflow for performance investigation. Let me know if you find this useful and I’ll try to make this a regular thing. I’ll update this blog to track the progress made by myself, and anyone who wants to jump in and help.
I wanted to start with the b2g unlock animation. The animation is OK but not great, and it is core to the phone experience. I can notice a delay from the touch-up to the start of the animation. I can notice that the animation isn’t smooth. Where do we go from here? First we need to quantify how things stand.

Measuring The Starting Point
The first thing is to grab a profile. From the profile we can extract a lot of information (we will look at it again in future parts). I run the following command:

./profile.sh start -p b2g -t GeckoMain,Compositor
*unlock the screen, wait until the end of the animation*
*open profile_captured.sym in http://people.mozilla.org/~bgirard/cleopatra/*
This results in the following profile. I recommend that you open it and follow along. The lock animation starts from t=7673 and runs until 8656. That’s about 1 second. We can also note that we don’t see any CPU idle time so we’re working really hard during the unlock animation. Things aren’t looking great from a first glance.
I said that there was a long delay at the start of the animation. We can see a large transaction at the start near t=7673. The first composition completes at t=7873. That means that our unlock delay is about 200ms.
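The arithmetic behind those two numbers is simple enough to write down. A small sketch (the constants are just the timestamps read off the profile above; the function names are my own):

```python
# Timestamps (in ms) read off the profile above.
ANIMATION_START = 7673   # start of the unlock animation / large transaction
FIRST_COMPOSITE = 7873   # first composited frame
ANIMATION_END = 8656     # end of the unlock animation


def unlock_delay_ms(start, first_composite):
    """Time between the animation starting and the first frame on screen."""
    return first_composite - start


def duration_s(start, end):
    """Total animation length in seconds."""
    return (end - start) / 1000.0


print(unlock_delay_ms(ANIMATION_START, FIRST_COMPOSITE))       # 200
print(round(duration_s(ANIMATION_START, ANIMATION_END), 3))    # 0.983
```

A 200ms delay and roughly a one-second animation, matching what we eyeballed from the profile.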
Now let’s look at how the frame rate is for this animation. In the profile open the ‘Frames’ tab. You should see this (minus my overlay):
Alright so our starting point is:
Unlock delay: 200ms
Frame Uniformity: ~25 FPS, poor uniformity

Next step
In part 2 we’re going to discuss the ideal implementation for a lock screen. This is useful because part 1 established a starting point; part 2 will establish a destination.
One of my first tasks in my new role as a Developer Productivity Engineer is to help make Mozilla's Mercurial server better. Many of the awesome things we have planned rely on features in newer versions of Mercurial. It's therefore important for us to upgrade our Mercurial server to a modern version (we are currently running 2.5.4) and to keep our Mercurial server upgraded as time passes.
There are a few reasons why we haven't historically upgraded our Mercurial server. First, as anyone who has maintained high-availability systems will tell you, there is the attitude of if it isn't broken, don't fix it. In other words, Mercurial 2.5.4 is working fine, so why mess with a good thing? This was all fine and dandy - until Mercurial started falling over in the last few weeks.
But the blocker towards upgrading that I want to talk about today is systems verification. There has been extreme caution around upgrading Mercurial at Mozilla because it is a critical piece of Mozilla's infrastructure and if the upgrade were to not go well, the outage would be disastrous for developer productivity and could even jeopardize an emergency Firefox release.
As much as I'd like to say that a modern version of Mercurial on the server would be a drop-in replacement (Mercurial has a great commitment to backwards compatibility and has loose coupling between clients and servers such that upgrading servers should not impact clients), there is always a risk that something will change. And that risk is compounded by the amount of custom code we have running on our server.
The way you protect against unexpected changes is testing. In the ideal world, you have a robust test suite that you run against a staging instance of a service to validate that any changes have no impact. In the absence of testing, you are left with fear, uncertainty, and doubt. FUD is an especially horrible philosophy when it comes to managing servers.
Unfortunately, we don't really have a great testing infrastructure for Mozilla's Mercurial server. And I want to change that.

Reproducing the Server Environment
When writing tests, it is important for the thing being tested to be as similar as possible to the real thing. This is why so many people have an aversion to mocking: every time you alter the test environment, you run the risk that those differences from reality will mask changes seen in the real environment.
So, it makes sense that a good first goal for creating a test suite against our Mercurial server should be to reproduce the production server and environment as closely as possible.
I'm currently working on a Vagrant environment that attempts to reproduce the official environment as closely as possible. It starts one virtual machine for the SSH/master server. It starts a separate virtual machine for the hgweb/slave servers. The virtual machines are booting CentOS. This is different than production, where we run RHEL. But they are similar enough (and can share the same packages) that the differences shouldn't matter too much, at least for now.

Using Puppet
In production, Mozilla is using Puppet to manage the Mercurial servers. Unfortunately, the actual Puppet configs that Mozilla is running are behind a firewall, mainly for security reasons. This is potentially a huge setback for my reproducibility effort, as I'd like to have my virtual machines use the same exact Puppet configs as what's used in production so the environments match as closely as possible. This would also save me a lot of work from having to reinvent the wheel.
Fortunately, Ben Kero has extracted the Mercurial-relevant Puppet config files into a standalone repository. Apparently that repository gets rolled into the production Puppet configs periodically. So, my virtual machines and production can share the same Mercurial Puppet files. Nice!
It wasn't long after starting to use the standalone Puppet configs that I realized this would be a rabbit hole. This first manifests in the standalone Puppet code referencing things that exist in the hidden Mozilla Puppet files. So the liberation was only partially successful. Sad panda.
So, I'm now in the process of creating a fake Mozilla Puppet environment that mimics the base Mozilla environment (from the closed repo) and am modifying the shared Puppet Mercurial code to work with both versions. This is a royal pain, but it needs to be done if we want to reproduce production and maintain peace of mind that test results reflect reality.
Because reproducing runtime environments is important for reproducing and solving bugs and for testing, I call on the maintainers of Mozilla's closed Puppet repository to liberate it from behind its firewall. I'd like to see a public Puppet configuration tree available for all to use so that anyone anywhere can reproduce the state of a server or service operated by Mozilla to within reasonable approximation. Had this already been done, it would have saved me hours of work. As it stands, I'm reverse engineering systems and trying to cobble together understanding of how the Mozilla Puppet configs work and what parts of them can safely be ignored to reproduce an approximate testing environment.
Along that vein, I finally got access to Mozilla's internal Puppet repository. This took a few meetings and apparently a lot of backroom chatter was generated - “developers don't normally get access, oh my!” All I wanted was to see how systems are configured so I can help improve them. Instead, getting access felt like pulling teeth. This feels like a major roadblock towards productivity, reproducibility, and testing.
Facebook gives its developers access to most production machines and trusts them to not be stupid. I know we (Mozilla) like to hold ourselves to a high standard of security and privacy. But not giving developers access to the configurations for the systems their code runs on feels like a very silly policy. I hope Mozilla invests in opening up this important code and data, if not to the world, at least to its trusted employees.
Anyway, hopefully I'll soon have a Vagrant environment that allows people to build a standalone instance of Mozilla's Mercurial server. And once that's in place, I can start writing tests that basic services and workflows (including repository syncing) work as expected. Stay tuned.
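Once that Vagrant environment is up, a first smoke test could be as simple as verifying that every hgweb mirror serves the same tip changeset as the SSH master after a push. A minimal sketch in Python (the helper names, and the idea of polling `hg identify --id` against each host, are my assumptions here, not part of any actual test suite):

```python
import subprocess


def parse_tip(identify_output):
    """Extract the changeset hash from `hg identify --id` output.

    Mercurial appends a '+' to the hash when the working directory has
    uncommitted changes; strip it so hashes compare cleanly.
    """
    return identify_output.strip().rstrip("+")


def mirrors_in_sync(master_tip, mirror_tips):
    """True when every hgweb mirror reports the same tip as the SSH master."""
    return all(tip == master_tip for tip in mirror_tips)


def remote_tip(url):
    """Ask a running server for its tip changeset.

    Only usable against a live instance, e.g. the Vagrant master and
    slave VMs once they are booted and serving repositories.
    """
    out = subprocess.check_output(["hg", "identify", "--id", url])
    return parse_tip(out.decode("ascii"))
```

A test harness would push a commit to the master, wait for replication, then assert `mirrors_in_sync(remote_tip(master_url), [remote_tip(u) for u in mirror_urls])`. Crude, but it exercises the exact workflow (repository syncing) that an upgrade is most likely to break.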
I’ve said it before and I stick by it: conferences stand and fall with the enthusiasm of the organisers. And it is a joy for someone like me who does spend a lot of time at conferences to see a new one be a massive success from the get-go.
Yesterday was the Coldfront conference in Copenhagen, Denmark. A one day conference organised by Kenneth Auchenberg, @Danielovich (and of course a well-chosen team of people). It was very rewarding to work with him to give the closing keynote of the inaugural edition of this event.
And, amazingly enough, the video is out, too:
I am sad that because of other commitments I had to miss the first talks, but here are my main impressions of the event:
- I love the pragmatism of it – one track, good break times, a very simple and straightforward web site and no push to “download the app of this event”.
- The location – an art-house cinema – had great seating, working WiFi (with a few hiccups, but the hotel next door also had available WiFi that worked in the first rows) and very adequate facilities.
- The projector and audio set up was great and the switch from speaker to speaker worked flawlessly.
- All talks were streamed on the web
- Even a last minute speaker cancellation didn’t quite disturb the event (thanks for the reminder Steen H. Rasmussen)
- Instead of keeping people cooped up inside, the breaks had coffee available for self-service, and the food and branded ice cream were served outside the building in the street. This was also the spot for the beers and cupcakes after the event, and the final venue was just down the road.
- The after party was in a beer place with over 40 beers on tap, and the open bar lasted well past midnight. Nobody got blind drunk or misbehaved – it actually felt more like a beer tasting than a drink-up. There was plenty of seating and no loud music to discourage or hinder conversation.
- All the videos of the talks were already available on the day or the day after. I managed to see myself whilst my head was still hurting from the party (and my lack of sleep) the night before.
- Elisabeth Irgens did a great job doing live sketch notes of each talk and uploading them immediately to Twitter.
- The audience was very well behaved and it was a very inviting and inspiring environment to share information in. Good mix of people with various backgrounds.
- Whilst there was a bit of sponsorship shown on the big screen and there were sponsor booths in the foyer, all of it was very low-key and appeared utterly in context. No sales weasels or booth babes there. The sponsors sent their geeks to talk to geeks.
- I felt very well looked after – the organisers paid for my flights and hotel, and the communication with the speakers about where to be and when took only a handful of emails. Things just fell into place, and there was no hesitation in making sure everybody got there in time.
- It is well worth your while to watch the recordings of the talks. All of them were very high quality. Personally, I was most impressed with Guillermo Rauch’s “How to build the modern, optimistic and reactive user interface we all want.”
All in all, this was a conference that was as pragmatic and spot-on as Kenneth is when you talk to him. It felt very good and I was very much reminded of the first Fronteers event. This is one to watch, let’s see what happens next.
As of today, I have a new role and title at Mozilla: Developer Productivity Engineer. I'll be reporting to Laura Thomson as a member of the Developer Services team.
I have an immediate goal to make our version control work better. This includes making Try scale and helping out with the deployment of ReviewBoard. After that, I'm not entirely sure. But Autoland and Firefox build system improvements have been discussed.
I'm really excited to be in this new role. If someone were to give me a clean slate and tell me to design my own job role, I think I'd answer with something very similar to the role I am now in. I am passionate about tools and enabling people to become more productive. I have little doubt I'll thrive in this new role.
Two days ago I was in Berlin at MobileTechCon, where, in addition to the opening keynote on the second day, I also gave a talk about the current state of Firefox OS.
Since the audience wanted the talk in German, I switched at short notice and delivered it in something resembling German.
Here are the slides and the screencasts. The first one covers just the talk; the second also includes the Q&A, with a few examples of things like how to use the Developer Tools in Firefox, what together.js is and what it is good for, and a few more treats of the open Web.
All of this is very much unedited and was changed more or less on the spot, so there may well be some naughty words in there. The slides are available on Slideshare.
The half-hour talk can be seen here as a screencast:
If you want to hear the whole talk including the Q&A, the full hour is available here as a screencast.
“xkcd: volume 0” by Randall Munroe
What can I say? After all the years of reading xkcd.com, buying the book seemed like an obvious “huh, how did I not buy this already” moment.
This was a great wander down memory lane. I found a great many of my favorite xkcd comics, including bobby-drop-tables (therapeutic for anyone with an apostrophe in their surname!), locked-out-of-house, i’m-compiling-code and “the one that makes every Release Engineer I know cringe”.
Somehow, there were even some I’d never seen before, a very happy discovery: chess-coaster (which in turn inspired real life http://xkcd.com/chesscoaster!), the why-i’m-barred-from-speaking-at-crypto-conferences series, girls-on-the-internet, ninjas-vs-stallman, counting sheep…
All in all, a great fun read, and I found the “extra” sidebar cartoons equally fun… especially the yakshaver! If you like xkcd, and don’t already have this book, go get it.
ps: He’s got a new book coming out in a few days, a book tour in progress, and a really subtle turtles-all-the-way-down comic which nudges about the new book… if you look *really* closely! I’m looking forward to getting my hands on it!
We have been working hard to develop initial multi-screen capabilities within Firefox for Android Beta. Now, supported video content from Web pages you visit can be sent to and viewed on a second screen, with a new ‘send to device’ video sending feature. This feature is now available for testing.
Users now have even more control over their Web experience and can enhance the way they view video content by sending it to a larger screen. They can play, pause and close videos directly from within Firefox for Android via the Media Control Bar, which appears at the bottom of the phone’s screen when a video is being sent to a device. The Media Control Bar will stay visible as long as the video is playing, even as you change tabs or visit new Web pages.
To help users identify that the video they are watching in Firefox for Android can be sent to their connected media streaming device, a ‘send to’ indicator will appear (after any ads have finished) on the playback controls bar for the video.
Clicking this indicator will bring up a list of connected streaming media devices. Users can then select the device they want to send the video to for viewing on a big screen. A second ‘send to’ indicator will then appear in the URL bar to remind users that content from this Web page is being sent to a device.
How to Get Started
To test this feature on Roku or Chromecast, follow these simple instructions:
1. Install Firefox for Android Beta if you haven’t already.
2. Make sure Roku or Chromecast is set up on a nearby TV and is running on the same WiFi network as your Android phone.
3. If streaming to a Roku, add the Firefox channel to the channel list – instructions from Roku on how to add a new channel are here.
4. Go to a site like CNN.com and look for a video on the homepage. Once you start playing a supported video (after any ads have finished playing), the above ‘send to’ icon will appear over the video controls indicating that it can be sent to a nearby streaming device.
5. You can send the video you are watching to a nearby media streaming device by tapping on the video and selecting ‘send to’ from the video controls or touching the ‘send-to’ icon in the URL bar. Both actions will automatically launch the Firefox channel on Roku or activate Chromecast for streaming and send the video to a nearby TV.
- So long as the device receiving the video supports the same video format being viewed on Firefox for Android (e.g. MP4 for Roku), it will play.
- Some websites hide or customize the HTML5 video controls and some override the video playback menu too. This can make sending a video to a compatible device like Roku a little tricky, but the simple fallback is to start playing the video in the web page. If the video is in MP4 format and Firefox for Android Beta can find your Roku, a “send to device” indicator will appear in the URL Bar. Just tap on that to send the video to your Roku.
Support for sending videos to compatible devices like Roku and Chromecast is currently in pre-release. We need your help to test this exciting new feature. Please do remember to share your feedback and file any bugs. Happy testing!
For more information:
I know lots of people are very anxious to see Mozilla’s new code-review tool. It’s been a harrowing journey, but we are finally knocking out our last few blockers for initial deployment (see tracking bug 1021929). While we sort those out, here’s something to whet your appetite: a walkthrough of the new review workflow.
Next Wednesday, a number of organizations and tech companies are rallying together for a Day of Action to protect net neutrality. Mozilla is proud to join the effort.
Protecting net neutrality is a top priority for Mozilla. We believe that regardless of who is sending and receiving it, ISPs treating data equally is vital to a healthy, vibrant and open Web. During the Day of Action, we will encourage the Mozilla community to amplify its voice and send a clear message to the U.S. Congress about what is at stake if net neutrality is weakened.
In advance of the Day of Action, we will also host a reddit AMA to raise awareness and understanding of the issue so people can take informed action. Please join us in the AMA on Tuesday, September 9, from 12-1pm PT by visiting https://www.reddit.com/r/IAmA/.
This is a critical time in the evolution of the Web. Decisions about net neutrality will determine whether the Web continues to be a powerful, shared resource for innovation, opportunity and learning. A long-awaited decision from the FCC is imminent, so let’s join together to send a strong message to policy makers: Protect Net Neutrality.
Primarily what I did during Wikimania was chew on pens.
However, I also gave some talks.
The first one was on Creative Commons 4.0, with Kat Walsh. While targeted at Wikimedians, this may be of interest to others who want to learn about CC 4.0 as well.
Second one was on Open Source Hygiene, with Stephen LaPorte. This one is again Wikimedia-specific (and I’m afraid less useful without the speaker notes) but may be of interest to open source developers more generally.
The final one was on sharing; video is below (and I’ll share the slides once I figure out how best to embed the notes, which are pretty key to understanding the slides):