Announcing Mozilla’s Equal Rating Innovation Challenge, a $250,000 contest including expert mentorship to spark new ways to connect everyone to the Internet.
At Mozilla, we believe the Internet is most powerful when anyone — regardless of gender, income, or geography — can participate equally. However, the digital divide remains a clear and persistent reality. According to the World Economic Forum, more than 4 billion people are still not online — greater than 55% of the global population. Some, who live in poor or rural areas, lack the infrastructure: fast wired and wireless connectivity reaches only 30% of rural areas. Other people don’t connect because they don’t believe there is enough relevant digital content in their language. Women are also less likely to access and use the Internet: only 37% of women are online, versus 59% of men, according to surveys by the World Wide Web Foundation.
Access alone, however, is not sufficient. Pre-selected content and walled gardens powered by specific providers subvert the participatory and democratic nature of the Internet that makes it such a powerful platform. Mitchell Baker coined the term “equal rating” in a 2015 blog post, and Mozilla has successfully taken part in shaping pro-net-neutrality legislation in the US, Europe, and India. Today, Mozilla’s Open Innovation Team wants to inject practical, action-oriented new thinking into these efforts.
This is why we are very excited to launch our global Equal Rating Innovation Challenge, designed to spur innovations that bring the Next Billion online. The challenge focuses on identifying creative new solutions to connect the unconnected, ranging from consumer products and novel mobile services to new business models and infrastructure proposals. Mozilla will award US$250,000 in funding and provide expert mentorship to bring these solutions to market.
We seek to engage entrepreneurs, designers, researchers, and innovators all over the world to propose creative, engaging, and scalable ideas that cultivate digital literacy and provide affordable access to the full diversity of the open Internet. In particular, we welcome proposals that build on local knowledge and expertise. We aim to receive applications from every corner of the globe.
The US$250,000 in prize money will be split across three categories:
- Best Overall (key metric: scalability)
- Best Overall Runner-up
- Most Novel Solution (key metric: experimental, with potential for high reward)
This level of funding may be everything a team needs to go to market with a consumer product, or it may provide enough support to unlock further funding for an infrastructure project.
The official submission period will run from 1 November 2016 to 6 January 2017. All submissions will be judged by a group of external experts by mid-January. The selected semifinalists will receive mentorship for their projects before they demo their ideas in early March. The winners will be announced at the end of March 2017.
We have also launched www.equalrating.com, a website offering educational content and background information to support the challenge. On the site, you will find three key frameworks that may be useful for understanding the different aspects of this topic. You can read important statistics that humanize this issue and see how connectivity influences gender dynamics, education, economics, and a myriad of other social issues. The reports section provides further depth on the different positions in the current debate. In the coming weeks, we will also stream a series of webinars to further inform potential applicants about the challenge details. We hope these webinars also provide opportunities for dialogue and questions.
Connecting the unconnected is one of the greatest challenges of our time. No one organization or effort can tackle it alone. Spread the word. Submit your ideas to build innovative and scalable ways to bring Internet access to the Next Billion — and the other billions, as well. Please join us in addressing this grand challenge.
Bringing the Power of the Internet to the Next Billion and Beyond was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.
Tor Project and Mozilla Making It Harder for Malware to Unmask Users
But the Tor Project, the nonprofit that maintains the Tor software, and the team behind Mozilla's Firefox, have quietly been working on improvements that, they say, should make such attacks more difficult. By tweaking how the browser connects to the ...
By design, taskcluster workers are very flexible and user-input-driven. This allows us to put CI task logic in-tree, which means developers can modify that logic as part of a try push or a code commit. This allows for a smoother, self-serve CI workflow that can ride the trains like any other change.
However, a secure release workflow requires certain tasks to be less permissive and more auditable. If the logic behind code signing or pushing updates to our users is purely in-tree, and the related checks and balances are also in-tree, the possibility of a malicious or accidental change being pushed live increases.
Enter scriptworker. Scriptworker is a limited-purpose taskcluster worker type: each instance can only perform one type of task, and validates its restricted inputs before launching any task logic. The scriptworker instances are maintained by Release Engineering, rather than the Taskcluster team. This separates roles between teams, which limits damage should any one user's credentials become compromised.
scriptworker 0.8.0
The past several releases have included changes involving the chain of trust. Scriptworker 0.8.0 is the first release that enables gpg key management and chain of trust signing.
An upcoming scriptworker release will enable upstream chain of trust validation. Once enabled, scriptworker will fail fast on any task or graph that doesn't pass the validation tests.
Weekly project updates from the Mozilla Connected Devices team.
This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.
The presence and participation of women in STEM is on the rise thanks to the efforts of many across the globe, but still has many...
So, Aria Stewart tweeted two questions and a statement the other day:
Programmers: can you describe change over time in a system? How completely?
— Aria Stewart (@aredridel) October 10, 2016
I wanted to discuss this idea as it pertains to debugging and to strategies I’ve been employing more often lately. But let’s examine the topic at face value first: describing change over time in a system. The abstract “system” is where a lot of the depth in this question comes from. It could be the computer you’re on, the code base you work in, data moving through your application, or an organization of people; many things fall into a system of some sort, and time acts upon them all. I’m going to choose data moving through a system as the primary topic, but also talk about code bases over time.
Another part of the question that keeps it quite open is the lack of a “why”, “what”, or “how”. This means we could discuss why the data needs to be transformed in various ways, why we added a feature or changed some code, or why an organization is investing in writing software at all. We could talk about what changes at each step in a data pipeline, what changes have happened in a given commit, or what goals were accomplished each month by some folks. Or the topic could be how a compiler changes the data as it passes through, how a programmer sets about making changes to a code base, or how an organization made its decisions to go in the directions it did. All quite valid, and this is but a fraction of the depth in this simple question.
Let’s talk about systems a bit. At work, we have a number of services talking via a messaging system and a relational database. The current buzz phrase for this is “microservices”, though we also called them “service-oriented architectures” in the past. At my previous job, I worked in a much smaller system that had many components for gathering data, as well as a few components for processing that data and sending it back to the servers. Both of these systems share common attributes that most other systems must also cope with: events that provide data to be processed arrive in functionally random order, and that data is fed into processors which then stage it to be consumed by other parts of the system.
When problems arise in systems like these, it can be difficult to tell which piece is causing disruption. The point where the data changes from healthy to problematic may be a few steps removed from the layer where the problem is detected. Sometimes the data is good enough to cause only subtle problems. At the start of the investigation, all you might know is that something bad happened in the past. It is especially at these points that we need a description of the change that should happen to our data over time, hopefully with as much detail as possible.
The more snarky among us will point out that the source code is what is running, so why would you need some other description? The problem often isn’t that a given developer can’t understand code as they read it, though that may be the case. Rather, I find the problem is that code is meant to handle so many different cases and scenarios that the exact slice I care about is often hard to track. Luckily, our brains build up mental models that we can use to traverse our code, eliminating blocks of code because we intuitively “know” the problem couldn’t be in them, because we have an idea of how our code should work. Unfortunately, it is often in the mental-model part where problems arise. The same tricks we use to read faster, and then miss errors in our own writing, are what can cause problems when understanding why a system is working in some way we didn’t expect.
Mental models are often incomplete due to using libraries, having multiple developers on a project, and the ravages of time clawing away at our memory. In some cases the mental model is just wrong. You may have intended to make a change but forgot to actually do it, read some documentation in a different way than intended, or made a mistake while writing the code, such as a copy/paste error or an off-by-one in a loop. The source of the flaw doesn’t really matter, though, because when we’re hunting a bug, the goal is to find the flaw in both the code and the mental model so it can be corrected; then we can try to identify why the model got out of whack in the first place.
Can we describe change in a system over time? Probably to some reasonable degree of accuracy, but likely not completely. How does all of this tie into debugging? The strategy I’ve been practicing when I hit these situations is geared around the idea that my mental model and the code are not in agreement. I shut off anything that might interrupt deep focus, such as my monitors and phone, then gather a stack of paper and a pen. I write down the reproduction steps at whatever level of detail they were given to me, to use as a guide in the next step of the process.
I then write out every single step along the path the data will take, as it appears in my mental model, preferably in order. This often means a number of arrows as I put in steps I forgot. Because I know the shape of the data and the reproduction steps, I can make assumptions like “we have an active connection to the database.” Assumptions are okay at this point; I’m just building up a vertical slice of the system and how it affects a single type of data. Once I’ve gotten a complete list of events on the path, I start the coding part. I go through and add log lines that line up with the list I made, or improve them when I see there is already some logging at a point, running the code periodically to make sure my new code hasn’t caused any issues and that my mental model still holds true.
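As a concrete illustration of that instrumentation step, here is a minimal sketch. The pipeline and stage names are made up for this example, not from the post; each stage logs one line matching an entry in the handwritten event list before passing data along, so the log output can be compared against the mental model step by step:

```shell
# Hypothetical three-stage pipeline; each stage announces its step on stderr,
# mirroring one line of the paper event list, then hands the data onward.
sink=$(mktemp)
step()      { echo "STEP: $1" >&2; }               # one line of the event list
fetch()     { step "fetch raw record";       echo "raw:$1"; }
normalize() { step "normalize to lowercase"; tr 'A-Z' 'a-z'; }
store()     { step "write to sink";          cat >> "$sink"; }
fetch "HELLO" | normalize | store
cat "$sink"   # raw:hello
```

Diffing the STEP lines that actually appear against the list on paper shows exactly where the code and the mental model diverge.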
The goal of this exercise isn’t necessarily to bring the code base into alignment with my mental model, because my mental model may be wrong for a reason. But because there is a bug, rarely am I just fixing my mental model, unless of course I discover the root cause and have to just say “working as intended.” As I go through, I make notes on my paper mental model where things vary; often forgotten steps make their way in now. Eventually I find some step that doesn’t match up. At that point I probably know how to solve the bug, but usually I keep going, correcting the bug in the code while continuing to analyze the system against my mental model.
I always keep going until I exhaust the steps in my mental model, for a few reasons. First, since there was at least one major flaw in my mental model, there could be more, especially if that first one was obscuring other faults. Second, this is an opportunity to update my mental model with plenty of the work already done, like writing the list and building any tools needed to capture the events. Last, the sort of logging and tools I build for validating my mental model are often useful in the future when doing more debugging, so completing the path can leave me better prepared for next time.
If you found this interesting, give this strategy a whirl. If you are wondering what level of detail I include in my event lists: commonly I’ll fill one to three pages with one event per line, some lines scratched out, and arrows drawn in the middle. Usually this documentation goes obsolete very fast, because it is nearly as detailed as the code and covers only a thin vertical slice for very specific data, not the generalized case. I don’t try to save it or format it for other folks’ consumption. They are just notes for me.
I think this strategy is a step toward fulfilling the statement portion of Aria’s tweet, “Practice this.” One of the people you need to be most concerned with when trying to describe change in a system is yourself. If you can’t describe it to yourself, how are you ever going to describe it to others?
git branch -D deletes a Git branch. Yet someone on IRC asked, “I accidentally got a git branch named -D. How do I delete it?” I took this as a personal challenge to create and nuke a -D branch myself, to explore this edge case of one of my favorite tools.
Making a branch with an illegal name
You create a branch in Git by typing git branch branchname. If you type git branch -D, the -D will be passed as an argument to the program by your shell, and Git will parse anything starting with - as an option rather than a branch name.
You can tell your shell “I just mean a literal -, not an argument” by escaping it, like git branch \-D. But Git sees what we’re up to and won’t let that fly: it complains fatal: '-D' is not a valid branch name. So even when we get the string -D into Git, the porcelain spits it right back out at us.
But since this is Unix and Everything’s A File(TM), I can create a branch with a perfectly fine name to get through the porcelain and then change it later. If I were at the Git wizardry level of Emily Xie, I could just write the files into .git without the intermediate step of watching the porcelain do it first, but I’m not quite that good yet.
So, let’s make a branch with a perfectly fine name in a clean repo, then swap things around under the hood:

$ mkdir dont
$ cd dont
$ git init
$ git commit --allow-empty -am "initial commit"
[master (root-commit) da1f6b6] initial commit
$ git branch
* master
$ git checkout -b dashdee
Switched to a new branch 'dashdee'
$ git branch
* dashdee
  master
$ grep -ri dashdee .git/
.git/HEAD:ref: refs/heads/dashdee
.git/logs/HEAD:da1f6b67446e83a456c4aeaeef1e256a8531640e da1f6b67446e83a456c4aeaeef1e256a8531640e E. Dunham <email@example.com> 1476402564 -0700 checkout: moving from master to dashdee
$ find -name dashdee
./.git/refs/heads/dashdee
./.git/logs/refs/heads/dashdee
OK, so we’ve got this dashdee branch. Time to give it the name we’ve wanted all along:

$ find .git -type f -print0 | xargs -0 sed -i 's/dashdee/\-D/g'
$ mv .git/refs/heads/dashdee .git/refs/heads/-D
$ mv .git/logs/refs/heads/dashdee .git/logs/refs/heads/-D

Look what you’ve done...
Is this what you wanted?

$ git branch
* -D
  master
You are really on a branch named -D now. You have snuck around the guardrails, though they were there for a reason:

$ git commit --allow-empty -am "noooo"
[-D 18dac23] noooo

Try to make it go away

$ git branch -D -D
fatal: branch name required
It won’t give up that easily! You can’t escape:

$ git branch -D \-D
fatal: branch name required
$ git branch -D '-D'
fatal: branch name required
$ git branch -D '\-D'
error: branch '\-D' not found.
Notice the two categories of issue we’re hitting: in the first two examples, the shell was eating our branch name and not letting it through to Git. In the third case, we threw in enough escapes that Bash passed a string other than -D through to Git.
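You can watch what Bash actually hands to a program by echoing the arguments back with printf (the angle brackets here just mark argument boundaries):

```shell
# printf prints each argument exactly as the shell delivered it.
printf '<%s>\n' \-D      # the shell eats the backslash: prints <-D>
printf '<%s>\n' '-D'     # quoting doesn't help either: prints <-D>
printf '<%s>\n' '\-D'    # single quotes keep the backslash: prints <\-D>
```

The first two lines deliver a bare -D, which Git’s option parser swallows; the third delivers a literal \-D, which names a branch that doesn’t exist.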
As an aside, I’m using Bash for this. Other shells might be differently quirky:

$ echo $0
bash
$ bash --version
GNU bash, version 4.3.46(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Succeed at making it go away
Bash lets me nuke the branch with:

$ git branch -D ./-D
Deleted branch ./-D (was broken).
$ git branch
master
However, if your shell is not so easily duped into passing a string starting with - into a program, you can also fix things by manually removing the file that the branch -D command would have removed for you:

$ rm .git/refs/heads/-D
$ git branch
master

Clean up

$ cd ..
$ rm -rf dont
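A third route that should sidestep both the shell’s quoting and Git’s option parsing is a sketch I believe works on modern Git, though it is not from the original post: the plumbing command git update-ref takes a full refname, so the leading dash is never treated as an option.

```shell
# Recreate the situation in a throwaway repo, then delete the ref via
# plumbing. Assumes a modern Git; not part of the original walkthrough.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q dont2 && cd dont2
git -c user.email=you@example.com -c user.name=you \
    commit -q --allow-empty -m "initial commit"
git branch dashdee
mv .git/refs/heads/dashdee ".git/refs/heads/-D"   # smuggle in the bad name
git update-ref -d "refs/heads/-D"                 # full refname, no escaping games
git branch                                        # '-D' is no longer listed
```

Because update-ref operates below the porcelain, it also skips the branch-name validity checks that rejected -D on the way in.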
Please don’t do this kind of silly thing to any repo you care about. It’s just cruel.