
Mozilla Nederland - The Dutch Mozilla community

Planet Mozilla - http://planet.mozilla.org/
Updated: 34 min 23 sec ago

Giorgos Logiotatidis: Build and Test against Docker Images in Travis

Fri, 17/06/2016 - 13:40

The road towards the absolute CI/CD pipeline goes through building Docker images and deploying them to production. The code included in the images gets unit tested, both locally during development and, after merging into the master branch, using Travis.

But Travis builds its own environment to run the tests in, which can differ from the environment of the Docker image. For example, Travis may run tests in a Debian-based VM with libjpeg version X, while our to-be-deployed Docker image runs code on top of Alpine with libjpeg version Y.

To ensure that the image to be deployed to production is OK, we need to run the tests inside that Docker image. That is still possible with Travis, with only a few changes to .travis.yml:

Sudo is required

    sudo: required

Start by requesting to run tests in a VM instead of in a container.

Request the Docker service

    services:
      - docker

The VM must run the Docker daemon.

Add TRAVIS_COMMIT to the Dockerfile (optional)

    before_install:
      - docker --version
      - echo "ENV GIT_SHA ${TRAVIS_COMMIT}" >> Dockerfile

It's very useful to export the git SHA of HEAD as a Docker environment variable. This way you can always identify the code included in the image, even if you keep the .git directory in .dockerignore to reduce the size of the image.

The resulting Docker image also gets tagged with the same SHA for easier identification.

Build the image

    install:
      - docker pull ${DOCKER_REPOSITORY}:last_successful_build || true
      - docker pull ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} || true
      - docker build -t ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} --pull=true .

Instead of pip installing packages, override the install step to build the Docker image.

Start by pulling previously built images from the Docker Hub. Remember that Travis runs each job in an isolated VM, so there is no Docker cache. Pulling previously built images seeds the cache.

Travis' built-in cache functionality can also be used, but I find it more convenient to push to the Hub. Production will later pull from there, and if debugging is needed I can pull the same image locally too.

Each docker pull is followed by || true, which translates to "If the Docker Hub doesn't have this repository or tag, that's OK; don't stop the build".

Finally, trigger a docker build. The --pull=true flag forces downloading the latest versions of the base images, the ones in the FROM instructions. For example, if an image is based on Debian, this flag forces Docker to download the latest version of the Debian image. The Docker cache has already been populated, so this is not superfluous: if skipped, the new build could use an outdated base image with security vulnerabilities.

Run the tests

    script:
      - docker run -d --name mariadb -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -e MYSQL_DATABASE=foo mariadb:10.0
      - docker run ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} flake8 foo
      - docker run --link mariadb:db -e CHECK_PORT=3306 -e CHECK_HOST=db giorgos/takis
      - docker run --env-file .env --link mariadb:db ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} coverage run ./manage.py test

First start mariadb, which is needed for the Django tests to run. The -d flag forks it to the background. The --name flag makes linking with other containers easier.

Then run the flake8 linter. It runs after mariadb - although it doesn't depend on it - to allow some time for the database image to download and initialize before the tests hit it.

Travis needs about 12 seconds to get MariaDB ready, which is usually longer than the linter takes to run. To wait for the database to become ready before running the tests, run Takis. Takis waits for the container named mariadb to open port 3306. I blogged in detail about Takis before.
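
The wait-for-port idea behind Takis can be sketched in a few lines of Python (a hypothetical helper for illustration, not Takis itself):

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0):
    """Poll until a TCP port accepts connections; raise after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True  # the service is accepting connections
        except OSError:
            time.sleep(0.5)  # not ready yet, retry shortly
    raise TimeoutError(f"{host}:{port} did not open within {timeout}s")
```

Something like wait_for_port("db", 3306) would block the test script until MariaDB is ready, which is all Takis needs to do here.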

Finally run the tests making sure that the database is linked using --link and that environment variables needed for the application to initialize are set using --env-file.

Upload built images to the Hub

    deploy:
      - provider: script
        script: bin/dockerhub.sh
        on:
          branch: master
          repo: foo/bar

And dockerhub.sh:

    docker login -e "$DOCKER_EMAIL" -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
    docker push ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT}
    docker tag -f ${DOCKER_REPOSITORY}:${TRAVIS_COMMIT} ${DOCKER_REPOSITORY}:last_successful_build
    docker push ${DOCKER_REPOSITORY}:last_successful_build

The deploy step runs a script that logs in to Docker Hub, tags the images, and pushes those tags. This step runs only on branch: master and not on pull requests.

Pull requests cannot push to Docker Hub anyway, because Travis does not expose encrypted environment variables to pull requests, so there will be no $DOCKER_PASSWORD. At the end of the day this is not a problem, because you don't want pull requests with arbitrary code to end up in your Docker image repository.

Set the environment variables

Set the environment variables needed to build, test and deploy in the env section:

    env:
      global:
        # Docker
        - DOCKER_REPOSITORY=example/foo
        - DOCKER_EMAIL="foo@example.com"
        - DOCKER_USERNAME="example"
        # Django
        - DEBUG=False
        - DISABLE_SSL=True
        - ALLOWED_HOSTS=*
        - SECRET_KEY=foo
        - DATABASE_URL=mysql://root@db/foo
        - SITE_URL=http://localhost:8000
        - CACHE_URL=dummy://

and save them to a .env file, so that docker run can pass them into the container with --env-file:

    before_script:
      - env > .env
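
An env-file, as produced by env > .env, is just one KEY=value pair per line. A rough Python model of reading such a file (an illustration of the format, not Docker's actual parser):

```python
def parse_env_file(text):
    """Parse docker-style env-file content: one KEY=value per line.
    Blank lines and lines starting with '#' are skipped."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # only keep lines that actually contain '='
            env[key] = value
    return env
```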

Variables with private data like DOCKER_PASSWORD can be added through Travis' web interface.

That's all!

Pull requests and merges to master are both tested against Docker images, and successful builds of master are pushed to the Hub, ready to be used directly in production.

You can find a real life example of .travis.yml in the Snippets project.

Categories: Mozilla-nl planet

Mozilla Addons Blog: Multi-process Firefox and AMO

Fri, 17/06/2016 - 11:28

In Firefox 48, which reaches the release channel on August 1, 2016, multi-process support (code name “Electrolysis”, or “e10s”) will begin rolling out to Firefox users without any add-ons installed.

In preparation for the wider roll-out to users with add-ons installed, we have implemented compatibility checks on all add-ons uploaded to addons.mozilla.org (AMO).

There are currently three possible states:

  1. The add-on is a WebExtension and hence compatible.
  2. The add-on has marked itself in the install.rdf as multi-process compatible.
  3. The add-on has not marked itself compatible, so the state is currently unknown.

If a new add-on or a new version of an old add-on is not multi-process compatible, a warning will be shown in the validation step. Here is an example:

[screenshot: validation warning for an add-on that is not multi-process compatible]

In future releases, this warning might become more severe as the feature nears full deployment.

For add-ons that fall into the third category, we might implement a more detailed check in a future release, to provide developers with more insight into the “unknown” state.

After an add-on is uploaded, the state is shown in the Developer Hub. Here is an example:

[screenshot: add-on compatibility state shown in the Developer Hub]

Once you verify that your add-on is compatible, be sure to mark it as such and upload a new version to AMO. There is documentation on MDN on how to test and mark your add-on.

If your add-on is not compatible, please head to our resource center where you will find information on how to update it and where to get help. We’re here to support you!


John O'Duinn: RelEng Conf 2016: Call for papers

Fri, 17/06/2016 - 04:27

(Suddenly, it's June! How did that happen? Where did the year go already?!? Despite my recent public silence, there's been a lot of work going on behind the scenes. Let me catch up on some overdue blog posts - starting with RelEngConf 2016!)

We’ve got a venue and a date for this conference sorted out, so now its time to start gathering presentations, speakers and figuring out all the other “little details” that go into making a great, memorable, conference. This means two things:

1) RelEngCon 2016 is now accepting proposals for talks/sessions. If you have a good industry-related or academic-focused topic in the area of Release Engineering, please have a look at the Release Engineering conference guidelines, and submit your proposal before the deadline of 01-jul-2016.

2) Like all previous RelEng Conferences, the mixture of attendees and speakers, from academia and battle-hardened industry, makes for some riveting topics and side discussions. Come talk with others of your tribe, swap tips-and-gotchas with others who do understand what you are talking about and enjoy brainstorming with people with very different perspectives.

For further details about the conference, or submitting proposals, see http://releng.polymtl.ca/RELENG2015/html/index.html. If you build software delivery pipelines for your company, or if you work in a software company that has software delivery needs, I recommend you follow @relengcon, block off November 18th, 2016 on your calendar and book your travel to Seattle now. It will be well worth your time.

I’ll be there – and look forward to seeing you there!
John.


Michael Comella: Enhancing Articles Through Hyperlinks

Fri, 17/06/2016 - 02:00

When reading an article, I often run into a problem: there are links I want to open but now is not a good time to open them. Why is now not a good time?

  • If I open them and read them now, I’ll lose the context of the article I’m currently reading.
  • If I open them in the background now and come back to them later, I won’t remember the context that this page was opened from and may not remember why it was relevant to the original article.

I prototyped a solution: I attach all of the links in an article to the end of it, with some additional context. For example, from my Android thread annotations post:

links with context at the end of an article

To remember why I wanted to open the link, I provide the sentence the link appeared in.

To see if the page is worth reading, I access the page the link points to and include some of its data: the title, the host name, and a snippet.

There is more information we can add here as well, e.g. a “trending” rating (a fake implementation is pictured), a favicon, or a descriptive photo.
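
A rough sketch of how collecting links together with their surrounding text could work, using Python's standard html.parser (my own illustrative code, not the prototype's actual implementation):

```python
from html.parser import HTMLParser

class LinkContextParser(HTMLParser):
    """Collect each link's href, its anchor text, and the text just before it."""
    def __init__(self):
        super().__init__()
        self.text = []        # running plain text of the page
        self.links = []       # (href, anchor_text, context_before) tuples
        self._current = None  # state while inside an <a> tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                # Keep the last 80 characters of text as the link's context.
                self._current = [href, "", "".join(self.text)[-80:]]

    def handle_data(self, data):
        self.text.append(data)
        if self._current:
            self._current[1] += data  # accumulate the anchor text

    def handle_endtag(self, tag):
        if tag == "a" and self._current:
            self.links.append(tuple(self._current))
            self._current = None

parser = LinkContextParser()
parser.feed('<p>I wrote about <a href="/takis">Takis</a> before.</p>')
```

After feeding a page, parser.links holds each link along with the snippet of text it appeared after, which is the raw material for the "links with context" section.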

And vice versa

You can also provide the original article’s context on a new page after you click a link:

context from where this page was opened

This context can be added for more than just articles.

Shout-out to Chenxia & Stefan for independently discovering this idea, and to a context graph brainstorming group for further fleshing it out.

Note: this is just a mock-up – I don’t have a prototype for this.

Themes

The web is a graph. In a graph, we can access new nodes, and their content, by traversing backwards or forwards. Can we take advantage of this relationship?

Alan Kay once said, people largely use computers “as a convenient way of getting at old media…” This is prevalent on the web today – many web pages fill fullscreen browser windows that allow you to read the news, create a calendar, or take notes, much like we can with paper media. How can we better take advantage of the dynamic nature of computers? Of the web?

Can we mix and match live content from different pages? Can we find clever ways to traverse & access the web graph?

This blog & prototype provide two simple examples of traversing the graph and being (ever so slightly) more dynamic: 1) showing the context of where the user is going to go and 2) showing the context of where they came from. Wikipedia (with a certain login-needed feature enabled) has another interesting example when mousing over a link:

wikipedia link mouse-over shows next page pop-up

They provide a summary and an image of the page the hyperlink will open. Can we use this technique, and others, to provide more context for hyperlinks on every page on the web?

To summarize, perhaps we can solve user problems by considering:

  • The web as a graph – accessing content backwards & forwards from the current page
  • Computers & the web as a truly dynamic medium, with different capabilities than their print predecessors

For the source and a chance to run it for yourself, check out the repository on github.


Mozilla Reps Community: RepsNext – Introduction Video

Thu, 16/06/2016 - 17:26

At the 2015 Reps Leadership Meeting in Paris it became clear that the program was ready for “a version 2”. As the Reps Council had recently become a formal part of Mozilla Leadership, it was time to bring the program to the next level. Literally building on that idea, the RepsNext initiative was born.

Since then several working groups were formed to condense reflections on the past and visions for the future into new program proposals.

At our last Council meetup from 14-17 April 2016 in Berlin we recorded interviews with Council and Peers explaining RepsNext and summarizing our current status.

You can find a full transcript at the end of this blog post. Thanks to Yofie for editing the video!

Please share this video broadly, creating awareness for the exciting future of the Reps program.

 

Getting involved

We will focus on open questions around the working groups at the London All Hands from June 12th to June 17th, and we will share our outcomes and open up discussions after that. For now, there are several ongoing discussions you can jump into to shape the future of the Reps program.

Additionally, you can help out and track our Council efforts on the Reps GitHub repository.

 

Moving beyond RepsNext

It took us a little more than a year to come up with this “new release” of the Reps program. For the future we plan to take smaller steps improving the program beyond RepsNext. So expect experiments and tweaks arriving in smaller bits and with a higher clockspeed (think Firefox Rapid Release Model).

 

Video transcript

Question: What is RepsNext?

[Arturo] I think we have reached a point of maturity in the program that we need to reinvent ourselves to be adaptors of Mozilla’s will and to the modern times.

Question: How will the Reps program change?

[Pierros] What we're really interested in and picking up as a highlight are the changes on the governance level. There are a couple of things that are coming. The Council has done really fantastic work on bringing up and framing really interesting conversations around what RepsNext is, and PeersNext as a subset of that, and how do we change and adapt the leadership structure of Mozilla Reps to be more representative of the program that we would like to see.

[Brian] The program will still remain a grassroots program, run by volunteers for volunteers.

[Henrik] We’ve been working heavily on it in various working groups over the last year, developed a very clear understanding of the areas that need work and actually got a lot of stuff done.

[Konstantina] I think that the program has a great future ahead of it. We're moving to a leadership body where our role is gonna be to empower the rest of the volunteer community and we're gonna try to minimize the bureaucracy that we already have. So the Reps are gonna have the same resources that they had but they are gonna have tracks where they can evolve their leadership skills and with that empower the volunteer communities. Reps is gonna be the leadership body for the volunteer community and I think that's great. We're not only about events but we're something more and we're something the rest of Mozilla is gonna rely on when we're talking about volunteers.

Question: What’s important about this change?

[Michael] We will have the Participation team’s support to have meetings together, to figure out the strategy together.

[Konstantina] We are bringing the tracks where we specialize the Reps based on their interest.

Question: Why do we need changes?

[Christos] There is a need for that. There is a need to reconsider the mentoring process, to reconsider budgets and interest groups inside of Reps. There is a need to evolve Reps and be more impactful in our regions.

Question: Is this important for Mozilla?

[Arturo] We’re going to have mentors and Reps specialized in their different contribution areas.

Question: How is RepsNext helping local communities?

[Guillermo] Our idea, what we’re planning with the changes on RepsNext is to bring more people to the program. More people is more diversity, so we’re trying to find new people, more people with new interests.

Question: What excites you about RepsNext?

[Faisal] We have resources for different types of community, for example if somebody needs hardware or somebody training material, a variety of things not just what we used to have. So it will open up more ways on how we can support Reps for more impactful events and making events more productive.


QMO: Firefox 48 Beta 3 Testday, June 24th

Thu, 16/06/2016 - 14:03

Hello Mozillians,

We are happy to announce that next Friday, June 24th, we are organizing Firefox 48 Beta 3 Testday. We’ll be focusing our testing on the New Awesomebar feature, bug verifications and bug triage. Check out the detailed instructions via this etherpad.

No previous testing experience is required, so feel free to join us on #qa IRC channel where our moderators will offer you guidance and answer your questions.

Join us and help us make Firefox better! See you on Friday!


Tanvi Vyas: Contextual Identities on the Web

Thu, 16/06/2016 - 12:39

The Containers Feature in Firefox Nightly enables users to login to multiple accounts on the same site simultaneously and gives users the ability to segregate site data for improved privacy and security.

We all portray different characteristics of ourselves in different situations. The way I speak with my son is much different than the way I communicate with my coworkers. The things I tell my friends are different than what I tell my parents. I’m much more guarded when withdrawing money from the bank than I am when shopping at the grocery store. I have the ability to use multiple identities in multiple contexts. But when I use the web, I can’t do that very well. There is no easy way to segregate my identities such that my browsing behavior while shopping for toddler clothes doesn’t cross over to my browsing behavior while working. The Containers feature I’m about to describe attempts to solve this problem: empowering Firefox to help segregate my online identities in the same way I can segregate my real life identities.

With Containers, users can open tabs in multiple different contexts – Personal, Work, Banking, and Shopping.  Each context has a fully segregated cookie jar, meaning that the cookies, indexedDB, localStorage, and cache that sites have access to in the Work Container are completely separate from those in the Personal Container. That means the user can log in to their work twitter account on twitter.com in their Work Container and also log in to their personal twitter account on twitter.com in their Personal Container, and use both accounts in side-by-side tabs simultaneously. The user won't need to use multiple browsers, an account switcher[1], or to constantly log in and out to switch between accounts on the same domain.

User logged into work twitter account in Work Container and personal twitter account in Personal Container, simultaneously in side-by-side tabs

Simultaneously logged into Personal Twitter and Work Twitter accounts.

Note that the inability to efficiently use “Contextual Identities” on the web has been discussed for many years[2]. The hard part about this problem is figuring out the right User Experience and answering questions like:

  • How will users know what context they are operating in?
  • What if the user makes a mistake and uses the wrong context; can the user recover?
  • Can the browser assist by automatically assigning websites to Containers so that users don’t have to manage their identities by themselves?
  • What heuristics would the browser use for such assignments?

We don’t have the answers to all of these questions yet, but hope to start uncovering some of them with user research and feedback. The Containers implementation in Nightly Firefox is a basic implementation that allows the user to manage identities with a minimal user interface.

We hope to gather feedback on this basic experience to see how we can iterate on the design to make it more convenient, elegant, and usable for our users. Try it out and share your feedback by filling out this quick form or writing to containers@mozilla.com.

FAQ

How do I use Containers?

You can start using Containers in Nightly Firefox 50 by opening a New Container Tab. Go to the File Menu and select the "New Container Tab" option. (Note that on Windows you need to hit the alt key to access the File Menu.) Choose between Personal, Work, Shopping, and Banking.

Use the File Menu to access New Container Tab, then choose between Personal, Work, Banking, and Shopping.

Notice that the tab is decorated to help you remember which context you are browsing in. The right side of the url bar specifies the name of the Container you are in along with an icon. The very top of the tab has a slight border that uses the same color as the icon and Container name. The border lets you know what container a tab is open in, even when it is not the active tab.

User interface for the 4 different types of Container tabs

You can open multiple tabs in a specific container at the same time. You can also open multiple tabs in different containers at the same time:

User Interface when multiple container tabs are open side-by-side

2 Work Container tabs, 2 Shopping Container tabs, 1 Banking Container tab

Your regular browsing context (your "default container") will not have any tab decoration and will be in a normal tab. See the next section to learn more about the "default container".

Containers are also accessible via the hamburger menu. Customize your hamburger menu by adding in the File Cabinet icon. From there you can select a container tab to open. We are working on adding more access points for container tabs; particularly on long-press of the plus button.

User Interface for Containers Option in Hamburger Menu

How does this change affect normal tabs and the site data already stored in my browser?

The containers feature doesn’t change the normal browsing experience you get when using New Tab or New Window. The normal tab will continue to access the site data the browser has already stored in the past. The normal tab’s user interface will not change. When browsing in the normal context, any site data read or written will be put in what we call the “default container”.

If you use the containers feature, the different container tabs will not have access to site data in the default container. And when using a normal tab, the tab won't have access to site data that was stored for a different container tab. You can use normal tabs alongside other containers:

User Interface when 2 normal tabs are open, next to 2 Work Container tabs and 1 Banking Container tab

2 normal tabs (“Default Container tabs”), 2 Work Container tabs, 1 Banking Container tab

What browser data is segregated by containers?

In principle, any data that a site has read or write access to should be segregated.

Assume a user logs in to example.com in their Personal Container, and then loads example.com in their Work Container. Since these loads are in different containers, there should be no way for the example.com server to tie the two loads together. Hence, each container has its own separate cookies, indexedDB, localStorage, and cache.

Assume the user then opens a Shopping Container and opens the History menu option to look for a recently visited site. example.com will still appear in the user’s history, even though they did not visit example.com in the Shopping Container. This is because the site doesn’t have access to the user’s locally stored History. We only segregate data that a site has access to, not data that the user has access to. The Containers feature was designed for a single user who has the need to portray themselves to the web in different ways depending on the context in which they are operating.

By separating the data that a site has access to, rather than the data that a user has access to, Containers is able to offer a better experience than some of the alternatives users may be currently using to manage their identities.

Is this feature going to be in Firefox Release?

This is an experimental feature in Nightly only. We would like to collect feedback and iterate on the design before the containers concept goes beyond Nightly. Moreover, we would like to get this in the hands of Nightly users so they can help validate the OriginAttribute architecture we have implemented for this feature and other features. We have also planned a Test Pilot study for the Fall.

To be clear, this means that when Nightly 50 moves to Aurora/DevEdition 50, containers will not be enabled.

How do users manage different identities on the web today?

What do users do if they have two twitter accounts and want to be logged in to both at the same time? Currently, users may log in to one twitter account using their main browser, and to the other using a secondary browser. This is not ideal, since the user is then running two browsers to accomplish their tasks.

Alternatively, users may open a Private Browsing Window to login to the second twitter account. The problem with this is that all data associated with Private Browsing Windows is deleted when they are closed. The next time the user wants to use their secondary twitter account, they have to login again. Moreover, if the account requires two factor authentication, the user will always be asked for the second factor token, since the browser shouldn’t remember that they had logged in before when using Private Browsing.

Users may also use a second browser if they are worried about tracking. They may use a secondary browser for Shopping, so that the trackers that are set while Shopping can’t be associated with the tasks on their primary browser.

Can I disable containers on Nightly?

Yes, by following these steps:

  1. Open a new window or tab in Firefox.
  2. Type about:config and press enter.
  3. You will get to a page that asks you to promise to be careful. Promise you will be.
  4. Set the privacy.userContext.enabled preference to false.
Can I enable containers on a version of Firefox that is not Nightly?

Although the privacy.userContext.enabled preference described above may be present in other versions of Firefox, the feature may be incomplete, outdated, or buggy. We currently only recommend enabling the feature in Nightly, where you’ll have access to the newest and most complete version.

How is Firefox able to Compartmentalize Containers?

An origin is defined as a combination of a scheme, host, and port. Browsers make numerous security decisions based on the origin of a resource using the same-origin-policy. Various features require additional keys to be added to the origin combination. Examples include the Tor Browser’s work on First Party Isolation, Private Browsing Mode, the SubOrigin Proposal, and Containers.

Hence, Gecko has added additional attributes to the origin called OriginAttributes. When trying to determine if two origins are same-origin, Gecko will not only check if they have matching schemes, hosts, and ports, but now also check if all their OriginAttributes match.

Containers adds an OriginAttribute called userContextId. Each container has a unique userContextId. Stored site data (i.e. cookies) is now stored with a scheme, host, port, and userContextId. If a user has https://example.com cookies with the userContextId for the Shopping Container, those cookies will not be accessible by https://example.com in the Banking Container.
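
As a conceptual sketch of that extended same-origin check (in Python, purely illustrative - Gecko's actual implementation is in C++ and the attribute set is larger; the container IDs below are made up):

```python
from typing import NamedTuple

class Origin(NamedTuple):
    scheme: str
    host: str
    port: int
    user_context_id: int = 0  # OriginAttribute; 0 stands for the default container here

def same_origin(a: Origin, b: Origin) -> bool:
    # Origins match only if scheme, host, port AND all origin
    # attributes (here just userContextId) are equal.
    return a == b

# The same site loaded in two different containers: NOT same-origin,
# so cookies and other site data are kept apart.
shopping = Origin("https", "example.com", 443, user_context_id=4)
banking = Origin("https", "example.com", 443, user_context_id=3)
```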

Note that one of the motivations in enabling this feature in Nightly is to help ensure that we iron out any bugs that may exist in our OriginAttribute implementation before features that depend on it are rolled out to users.

How does Containers improve user privacy and security?

The Containers feature offers users some control over the techniques websites can use to track them. Tracking cookies set while shopping in the Shopping Container won’t be accessible to sites in the Personal Container. So although a tracker can easily track a user within their Shopping Container, they would have to use device fingerprinting techniques to link that tracking information with tracking information from the user’s Personal Container.

Containers also offers the user a way to compartmentalize sensitive information. For example, users could be careful to only use their Banking Container to log into banking sites, protecting themselves from potential XSS and CSRF attacks on these sites. Assume a user visits attacker.com in a non-banking container. The malicious site may try to use a vulnerability in a banking site to obtain the user's financial data, but wouldn't be able to, since the user's bank's authentication cookies are shielded off in a separate container that the malicious site can't touch.

Is there any chance that a tracker will be able to track me across containers?

There are some caveats to data separation with Containers.

The first is that all requests by your browser still have the same IP address, user agent, OS, etc. Hence, fingerprinting is still a concern. Containers are meant to help you separate your identities and reduce naive tracking by things like cookies. But more sophisticated trackers can still use your fingerprint to identify your device. The Containers feature is not meant to replace the Tor Browser, which tries to minimize your fingerprint as much as possible, sometimes at the expense of site functionality. With Containers, we attempt to improve privacy while still minimizing breakage.

There are also some bugs still open related to OriginAttribute separation. Namely, the following areas are not fully separated in Containers yet:

  • Some favicon requests use the default container cookies even when you are in a different container – Bug 1277803
  • The about:newtab page makes network requests to recently visited sites using the default container’s cookies even when you are in a different container – Bug 1279568
  • Awesome Bar search requests use the default container cookies even when you are in a different container – Bug 1244340
  • The Forget About Site button doesn’t forget about site data from Container tabs – Bug 1238183
  • The image cache is shared across all containers – Bug 1270680

We are working on fixing these last remaining bugs and hope to do so during this Nightly 50 cycle.

How can I provide feedback?

I encourage you to try out the feature and provide your feedback via the quick form or containers@mozilla.com mentioned above.

Thank you

Thanks to everyone who has worked to make this feature a reality! Special call outs to the containers team:

Andrea Marchesini
Kamil Jozwiak
David Huseby
Bram Pitoyo
Yoshi Huang
Tim Huang
Jonathan Hao
Jonathan Kingston
Steven Englehardt
Ethan Tseng
Paul Theriault

Footnotes

[1] Some websites provide account switchers in their products. For websites that don’t support switching, users may install addons to help them switch between accounts.
[2] http://www.ieee-security.org/TC/W2SP/2013/papers/s1p2.pdf, https://blog.mozilla.org/ladamski/2010/07/contextual-identity/
[3] Containers Slide Deck


David Burns: The final major player is set to ship WebDriver

Thu, 16/06/2016 - 11:28

It was nearly a year ago that Microsoft shipped their first implementation of WebDriver. I remember being so excited as I wrote a blog post about it.

This week, Apple have said that they are going to ship a version of WebDriver that will allow people to drive Safari 10 on macOS. In the release notes they describe a Safari driver that will ship with the OS.

In addition to new Web Inspector features in Safari 10, we are also bringing native WebDriver support to macOS. https://t.co/PfwmkRIBIV

— WebKit (@webkit) June 15, 2016

If you have ever wondered why this is important? Have a read of my last blog post. In Firefox 47 Selenium caused Firefox to crash on startup. The Mozilla implementation of WebDriver, called Marionette and GeckoDriver, would never have hit this problem because test failures and crashes like this would lead to patches being reverted and never shipped to end users.

Many congratulations to the Apple team for making this happen!

Categorieën: Mozilla-nl planet

Yunier José Sosa Vázquez: Meet the featured add-ons for June

do, 16/06/2016 - 04:04

As we do every month, we bring you, first-hand, the add-ons we recommend for Firefox users.

See the post on the Add-ons blog »

Categorieën: Mozilla-nl planet

Anjana Vakil: I want to mock with you

do, 16/06/2016 - 02:00

This post brought to you from Mozilla’s London All Hands meeting - cheers!

When writing Python unit tests, sometimes you want to just test one specific aspect of a piece of code that does multiple things.

For example, maybe you’re wondering:

  • Does object X get created here?
  • Does method X get called here?
  • Assuming method X returns Y, does the right thing happen after that?

Finding the answers to such questions is super simple if you use mock: a library which “allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.” Since Python 3.3 it’s available simply as unittest.mock, but if you’re using an earlier Python you can get it from PyPI with pip install mock.

So, what are mocks? How do you use them?

Well, in short I could tell you that a Mock is a sort of magical object that’s intended to be a doppelgänger for some object in your code that you want to test. Mocks have special attributes and methods you can use to find out how your test is using the object you’re mocking. For example, you can use Mock.called and .call_count to find out if and how many times a method has been called. You can also manipulate Mocks to simulate functionality that you’re not directly testing, but that is necessary for the code you’re testing. For example, you can set Mock.return_value to pretend that a function gave you some particular output, and make sure that the right thing happens in your program.
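Here’s a tiny sketch of those attributes in action (using the unittest.mock names from Python 3; fetch_status is a made-up function standing in for whatever you’d mock):

```python
from unittest.mock import Mock

# Pretend fetch_status() is some function our code depends on.
fetch_status = Mock(return_value={"status": "ok"})

result = fetch_status("https://example.com")

print(fetch_status.called)      # True: it was called at least once
print(fetch_status.call_count)  # 1
print(result["status"])         # ok, thanks to return_value
```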

But honestly, I don’t think I could give a better or more succinct overview of mocks than the Quick Guide, so for a real intro you should go read that. While you’re doing that, I’m going to watch this fantastic Michael Jackson video:

Oh you’re back? Hi! So, now that you have a basic idea of what makes Mocks super cool, let me share with you some of the tips/tricks/trials/tribulations I discovered when starting to use them.

Patches and namespaces

tl;dr: Learn where to patch if you don’t want to be sad!

When you import a helper module into a module you’re testing, the tested module gets its own namespace for the helper module. So if you want to mock a class from the helper module, you need to mock it within the tested module’s namespace.

For example, let’s say I have a Super Useful helper module, which defines a class HelperClass that is So Very Helpful:

# helper.py
class HelperClass():
    def __init__(self):
        self.name = "helper"

    def help(self):
        helpful = True
        return helpful

And in the module I want to test, tested, I instantiate the Incredibly Helpful HelperClass, which I imported from helper.py:

# tested.py
from helper import HelperClass

def fn():
    h = HelperClass()  # using tested.HelperClass
    return h.help()

Now, let’s say that it is Incredibly Important that I make sure that a HelperClass object is actually getting created in tested, i.e. that HelperClass() is being called. I can write a test module that patches HelperClass, and check the resulting Mock object’s called property. But I have to be careful that I patch the right HelperClass! Consider test_tested.py:

# test_tested.py
import tested
from mock import patch

# This is not what you want:
@patch('helper.HelperClass')
def test_helper_wrong(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called  # Fails! I mocked the wrong class, am sad :(

# This is what you want:
@patch('tested.HelperClass')
def test_helper_right(mock_HelperClass):
    tested.fn()
    assert mock_HelperClass.called  # Passes! I am not sad :)

OK great! If I patch tested.HelperClass, I get what I want.

But what if the module I want to test uses import helper and helper.HelperClass(), instead of from helper import HelperClass and HelperClass()? As in tested2.py:

# tested2.py
import helper

def fn():
    h = helper.HelperClass()
    return h.help()

In this case, in my test for tested2 I need to patch the class with patch('helper.HelperClass') instead of patch('tested.HelperClass'). Consider test_tested2.py:

# test_tested2.py
import tested2
from mock import patch

# This time, this IS what I want:
@patch('helper.HelperClass')
def test_helper_2_right(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called  # Passes! I am not sad :)

# And this is NOT what I want!
# Mock will complain: "module 'tested2' does not have the attribute 'HelperClass'"
@patch('tested2.HelperClass')
def test_helper_2_wrong(mock_HelperClass):
    tested2.fn()
    assert mock_HelperClass.called

Wonderful!

In short: be careful of which namespace you’re patching in. If you patch whatever object you’re testing in the wrong namespace, the object that’s created will be the real object, not the mocked version. And that will make you confused and sad.

I was confused and sad when I was trying to mock the TestManifest.active_tests() function to test BaseMarionetteTestRunner.add_test, and I was trying to mock it in the place it was defined, i.e. patch('manifestparser.manifestparser.TestManifest.active_tests').

Instead, I had to patch TestManifest within the runner.base module, i.e. the place where it was actually being called by the add_test function, i.e. patch('marionette.runner.base.TestManifest.active_tests').

So don’t be confused or sad, mock the thing where it is used, not where it was defined!

Pretending to read files with mock_open

One thing I find particularly annoying is writing tests for modules that have to interact with files. Well, I guess I could, like, write code in my tests that creates dummy files and then deletes them, or (even worse) just put some dummy files next to my test module for it to use. But wouldn’t it be better if I could just skip all that and pretend the files exist, and have whatever content I need them to have?

It sure would! And that’s exactly the type of thing mock is really helpful with. In fact, there’s even a helper called mock_open that makes it super simple to pretend to read a file. All you have to do is patch the builtin open function, and pass in mock_open(read_data="my data") to the patch to make the open in the code you’re testing only pretend to open a file with that content, instead of actually doing it.
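As a self-contained sketch of the idea (Python 3 names, so builtins.open rather than Python 2’s __builtin__.open; read_config is a made-up function under test):

```python
from unittest.mock import patch, mock_open

def read_config(path):
    # Code under test: reads a file and returns its stripped contents.
    with open(path) as f:
        return f.read().strip()

# No real file needed: the patched open only pretends to read one.
with patch("builtins.open", mock_open(read_data="my data\n")):
    print(read_config("settings.txt"))  # my data
```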

To see it in action, you can take a look at a (not necessarily great) little test I wrote that pretends to open a file and read some data from it:

def test_nonJSON_file_throws_error(runner):
    with patch('os.path.exists') as exists:
        exists.return_value = True
        with patch('__builtin__.open', mock_open(read_data='[not {valid JSON]')):
            with pytest.raises(Exception) as json_exc:
                # This is the code I want to test, specifically to be sure it throws an exception
                runner._load_testvars()
    assert 'not properly formatted' in json_exc.value.message

Gotchya: Mocking and debugging at the same time

See that patch('os.path.exists') in the test I just mentioned? Yeah, that’s probably not a great idea. At least, I found it problematic.

I was having some difficulty with a similar test, in which I was also patching os.path.exists to fake a file (though that wasn’t the part I was having problems with), so I decided to set a breakpoint with pytest.set_trace() to drop into the Python debugger and try to understand the problem. The debugger I use is pdb++, which just adds some helpful little features to the default pdb, like colors and sticky mode.

So there I am, merrily debugging away at my (Pdb++) prompt. But as soon as I entered the patch('os.path.exists') context, I started getting weird behavior in the debugger console: complaints about some ~/.fancycompleterrc.py file and certain commands not working properly.

It turns out that at least one module pdb++ was using (e.g. fancycompleter) was getting confused about file(s) it needs to function, because of checks for os.path.exists that were now all messed up thanks to my ill-advised patch. This had me scratching my head for longer than I’d like to admit.

What I still don’t understand (explanations welcome!) is why I still got this weird behavior when I tried to change the test to patch 'mymodule.os.path.exists' (where mymodule.py contains import os) instead of just 'os.path.exists'. Based on what we saw about namespaces, I figured this would restrict the mock to only mymodule, so that pdb++ and related modules would be safe - but it didn’t seem to have any effect whatsoever. But I’ll have to save that mystery for another day (and another post).

Still, lesson learned: if you’re patching a commonly used function, like, say, os.path.exists, don’t forget that once you’re inside that mocked context, you no longer have access to the real function at all! So keep an eye out, and mock responsibly!
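To see just how far-reaching such a patch is, here’s a small pure-stdlib demonstration:

```python
import os.path
from unittest.mock import patch

with patch("os.path.exists", return_value=True):
    # Inside the context, EVERY caller of os.path.exists gets the mock,
    # including any library (or debugger) that happens to check for files.
    print(os.path.exists("/no/such/path"))  # True

# Outside the context, the real function is back.
print(os.path.exists("/no/such/path"))  # False
```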

Mock the night away

Those are just a few of the things I’ve learned in my first few weeks of mocking. If you need some bedtime reading, check out these resources that I found helpful:

I’m sure mock has all kinds of secrets, magic, and superpowers I’ve yet to discover, but that gives me something to look forward to! If you have mock-foo tips to share, just give me a shout on Twitter!

Categorieën: Mozilla-nl planet

Chris Ilias: Who uses voice control?

do, 16/06/2016 - 00:55

Voice control (Siri, Google Now, Amazon Echo, etc.) is not a very useful feature to me, and I wonder if I’m in the minority.

Why it is not useful:

  • I live with other people.
    • Sometimes one of the people I live with may be sleeping, or I may be. If someone speaks out loud to the TV or their phone, that might wake the sleeper up.
    • Even when everyone is awake, that doesn’t mean we are together. It annoys me when someone talks to the TV while watching basketball. I don’t want to find out how annoying it would be to listen to someone in another room tell the TV or their phone what to do.
  • I work with other people.
    • If I’m having lunch, and a co-worker wants to look something up on his/her phone, I don’t want to hear them speak their queries out loud. I actually have coworkers that use their phones as boomboxes to listen to music while eating lunch, as if no-one else can hear it, or everyone has the same taste in music, or everyone wants to listen to music at all during lunch.

The only times I use Siri are:

  • When I am in the car.
  • When I am speaking with others in a social setting, like a pub, and we want to look something up pertaining to the conversation.
  • When I’m alone

When I saw Apple introduce tvOS, the dependence on Siri turned me off from upgrading my Apple TV.

Am I in the minority here?

I get the feeling I’m not. I cannot recall anyone I know using Siri for anything other than entertainment with friends. Controlling devices with your voice in public must be Larry David’s worst nightmare.

Categorieën: Mozilla-nl planet

Jen Kagan: day 18: the case of the missing vine api and the add-on sdk

wo, 15/06/2016 - 23:17

i’m trying to add vine support to min-vid and realizing that i’m still having a hard time wrapping my head around the min-vid program structure.

what does adding vine support even mean? it means that when you’re on vine.co, i want you to be able to right click on any video on the website, send it to the corner of your browser, and watch vines in the corner while you continue your browsing.

i’m running into a few obstacles. one obstacle is that i can’t find an official vine api. what’s an api? it stands for “application programming interface” (maybe? i think? no, i’m not googling) and i don’t know the official definition, but my unofficial definition is that the api is documentation i need from vine about how to access and manipulate content they have on their website. i need to be able to know the pattern for structuring video URLs. i need to know what functions to call in order to autoplay, loop, pause, and mute their videos. since this doesn’t exist in an official, well-documented way, i made a gist of their embed.js file, which i think/hope maybe controls their embedded videos, and which i want to eventually make sense of by adding inline comments.

another obstacle is that mozilla’s add-on sdk is really weirdly structured. i wrote about this earlier and am still sketching it out. here’s what i’ve gathered so far:

  • the page you navigate to in your browser is a page. with its own DOM. this is the page DOM.
  • the firefox add-on is made of content scripts (CS) and page scripts (PS).
  • the CS talks to the page DOM and the PS talks to the page DOM, but the CS and the PS don’t talk to each other.
  • with min-vid, the CS is index.js. this controls the context menu that comes up when you right click, and it’s the thing that tells the panel to show itself in the corner.
  • the two PS’s in min-vid are default.html and controls.js. the default.html PS loads the stuff inside the panel. the controls.js PS lets you control the stuff that’s in the panel.

so far, i can get the vine video to show up in the panel, but only after i’ve sent a youtube video to the panel. i can’t get the vine video to show up on its own, and i’m not sure why. this makes me sad. here is a sketch:

[sketch image: FullSizeRender-1]

Categorieën: Mozilla-nl planet

Doug Belshaw: Why we need 'view source' for digital literacy frameworks

wo, 15/06/2016 - 20:01

Apologies if this post comes across as a little jaded, but as someone who wrote their doctoral thesis on this topic, I had to stifle a yawn when I saw that the World Economic Forum have defined 8 digital skills we must teach our children.

World Economic Forum - digital skills

In a move so unsurprising that it’s beyond pastiche, they’ve also coined a new term:

Digital intelligence or “DQ” is the set of social, emotional and cognitive abilities that enable individuals to face the challenges and adapt to the demands of digital life.

I don’t mean to demean what is obviously thoughtful and important work, but I do wonder how (and by whom!) this was put together. They’ve got an online platform which helps develop the skills they’ve identified as important, but it’s difficult to fathom why some things were included and others left out.

An audit-trail of decision-making is important, as it reveals both the explicit and implicit biases of those involved in the work, as well as lazy shortcuts they may have taken. I attempted to do this in my work as lead of Mozilla’s Web Literacy Map project through the use of a wiki, but even that could have been clearer.

What we need is the equivalent of ‘view source’ for digital literacy frameworks. Specifically, I’m interested in answers to the following 10 questions:

  1. Who worked on this?
  2. What’s the goal of the organisation(s) behind this?
  3. How long did you spend researching this area?
  4. What are you trying to achieve through the creation of this framework?
  5. Why do you need to invent a new term? Why do very similar (and established) words and phrases not work well in this context?
  6. How long is this project going to be supported for?
  7. Is your digital literacy framework versioned?
  8. If you’ve included skills, literacies, and habits of mind that aren’t obviously 'digital', why is that?
  9. What were the tough decisions that you made? Why did you come down on the side you did?
  10. What further work do you need to do to improve this framework?

I’d be interested in your thoughts and feedback around this post. Have you seen a digital literacy framework that does this well? What other questions would you add?

Note: I haven’t dived into the visual representation of digital literacy frameworks. That’s a whole other can of worms…

Get in touch! I’m @dajbelshaw and you can email me: hello@dynamicskillset.com

Categorieën: Mozilla-nl planet

Mozilla Open Policy & Advocacy Blog: A Step Forward for Net Neutrality in the U.S.

wo, 15/06/2016 - 16:55

We’re thrilled to see the D.C. Circuit Court upholding the FCC’s historic net neutrality order, and the agency’s authority to continue to protect Internet users and businesses from throttling and blocking. Protecting openness and innovation is at the core of Mozilla’s mission. Net neutrality supports a level playing field, critical to ensuring a healthy, innovative, and open Web.

Leading up to this ruling Mozilla filed a joint amicus brief with CCIA supporting the order, and engaged extensively in the FCC proceedings. We filed a written petition, provided formal comments along the way, and engaged our community with a petition to Congress. Mozilla also organized global teach-ins and a day of action, and co-authored a letter to the President.

We’re glad to see this development and we remain steadfast in our position that net neutrality is a critical factor to ensuring the Internet is open and accessible. Mozilla is committed to continuing to advocate for net neutrality principles around the world.

Categorieën: Mozilla-nl planet

Daniel Stenberg: No websockets over HTTP/2

wo, 15/06/2016 - 13:55

There is no websockets for HTTP/2.

By this, I mean that there’s no way to negotiate or upgrade a connection to websockets over HTTP/2 like there is for HTTP/1.1 as expressed by RFC 6455. That spec details how a client can use Upgrade: in a HTTP/1.1 request to switch that connection into a websockets connection.
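For reference, the RFC 6455 handshake over HTTP/1.1 looks like this (the key and accept values are taken from the spec’s own example):

```
GET /chat HTTP/1.1
Host: server.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the TCP connection stops speaking HTTP entirely and carries websocket frames instead.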

Note that websockets is not part of the HTTP/1 spec, it just uses a HTTP/1 protocol detail to switch an HTTP connection into a websockets connection. Websockets over HTTP/2 would similarly not be a part of the HTTP/2 specification but would be separate.

(As a side-note, that Upgrade: mechanism is the same mechanism a HTTP/1.1 connection can get upgraded to HTTP/2 if the server supports it – when not using HTTPS.)


Draft

There was once a draft submitted that describes how websockets over HTTP/2 could’ve been done. It didn’t get any particular interest in the IETF HTTP working group back then and, as far as I’ve seen, there has been very little general interest in any group to pick up this dropped ball and continue running. It just didn’t go any further.

This is important: the lack of websockets over HTTP/2 is because nobody has produced a spec (and implementations) to do websockets over HTTP/2. Those things don’t happen by themselves, they actually require a bunch of people and implementers to believe in the cause and work for it.

Websockets over HTTP/2 could of course have the benefit that it would only be one stream over the connection that could serve regular non-websockets traffic at the same time in many other streams, while websockets upgraded on a HTTP/1 connection uses the entire connection exclusively.

Instead

So what do users do instead of using websockets over HTTP/2? Well, there are several options. You probably either stick to HTTP/2, upgrade from HTTP/1, use Web push or go the WebRTC route!

If you really need to stick to websockets, then you simply have to upgrade to that from a HTTP/1 connection – just like before. Most people I’ve talked to that are stuck really hard on using websockets are app developers that basically only use a single connection anyway so doing that HTTP/1 or HTTP/2 makes no meaningful difference.

Sticking to HTTP/2 pretty much allows you to go back and use the long-polling tricks of the past, before websockets was created. They were once rather bad because they would waste a connection and be error-prone: you’d have a connection that would sit idle most of the time. Doing this over HTTP/2 is much less of a problem, since it’ll just be a single stream that won’t be used that much, so it isn’t much of a waste. Plus, the connection may very well be used by other streams, so idle connections getting killed by NATs or firewalls become less of a problem.

The Web Push API was brought by W3C during 2015 and is in many ways a more “webby” way of doing push than the much more manual and “raw” method that websockets is. If you use websockets mostly for push notifications, then this might be a more convenient choice.

Also introduced after websockets, is WebRTC. This is a technique introduced for communication between browsers, but it certainly provides an alternative to some of the things websockets were once used for.

Future

Websockets over HTTP/2 could still be done. The fact that it isn’t done just shows that there isn’t enough interest.

Non-TLS

Recall how browsers only speak HTTP/2 over TLS, while websockets can also be done over plain TCP. In fact, the only way to upgrade a HTTP connection to websockets is using the HTTP/1 Upgrade: header trick, and not the ALPN method for TLS that HTTP/2 uses to reduce the number of round-trips required.

If anyone would introduce websockets over HTTP/2, they would then probably only be possible to be made over TLS from within browsers.

Categorieën: Mozilla-nl planet

David Burns: Selenium WebDriver and Firefox 47

wo, 15/06/2016 - 00:14

With the release of Firefox 47, the extension-based FirefoxDriver is no longer working. A change in Firefox caused the browser to crash when Selenium started it. The change has been fixed, but the process to get a fix released is slow (to make sure we don't break anything else), so hopefully the fixed version is due for release in the next week or so.

This does not mean that your tests need to stop working entirely as there are options to keep them working.

Marionette

Firstly, you can use Marionette, the Mozilla version of FirefoxDriver, to drive Firefox. This has been in Firefox since about version 24, and we have slowly been bringing it up to Selenium's level while working against Mozilla priorities. Currently Marionette is passing ~85% of the Selenium test suite.

I have written up some documentation on how to use Marionette on MDN
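The gist of those instructions is a single capability flag. A minimal sketch (the webdriver call itself is left as a comment, since it needs a real Firefox and geckodriver to run):

```python
# Capabilities that tell Selenium to drive Firefox via Marionette
# instead of the old extension-based FirefoxDriver.
caps = {
    "browserName": "firefox",
    "marionette": True,  # without this, Selenium falls back to FirefoxDriver
}

# With Selenium's Python bindings this would then be used roughly like:
# from selenium import webdriver
# driver = webdriver.Firefox(capabilities=caps)
print(caps["marionette"])  # True
```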

I am not expecting everything to work but below is a quick list that I know doesn't work.

  • No support for self-signed certificates
  • No support for actions
  • No support for the logging endpoint
  • I am sure there are other things we don't remember

It would be great if we could raise bugs.

Firefox 45 ESR or Firefox 46

If you don't want to worry about Marionette, the other option is to downgrade to Firefox 45, preferably the ESR as it won't update to 47 and will update in about 6-9 months time to Firefox 52 when you will need to use Marionette.

Marionette will be turned on by default from Selenium 3, which is currently being worked on by the Selenium community. Ideally when Firefox 52 comes around you will just update to Selenium 3 and, fingers crossed, all works as planned.

Categorieën: Mozilla-nl planet

Robert O'Callahan: Nastiness Works

di, 14/06/2016 - 23:59

One thing I experienced many times at Mozilla was users pressuring developers with nastiness --- ranging from subtle digs to vitriolic abuse, trying to make you feel guilty and/or trigger an "I'll show you! (by fixing the bug)" response. I know it happens in most open-source projects; I've been guilty of using it myself.

I particularly dislike this tactic because it works on me. It really makes me want to fix bugs. But I also know one shouldn't reward bad behavior, so I feel bad fixing those bugs. Maybe the best I can do is call out the bad behavior, fix the bug, and avoid letting that same person use that tactic again.

Perhaps you're wondering "what's wrong with that tactic if it gets bugs fixed?" Development resources are finite so every bug or feature is competing with others. When you use nastiness to manipulate developers into favouring your bug, you're not improving quality generally, you're stealing attention away from other issues whose proponents didn't stoop to that tactic and making developers a little bit miserable in the process. In fact by undermining rational triage you're probably making quality worse overall.

Categorieën: Mozilla-nl planet

Mitchell Baker: Expanding Mozilla’s Boards

di, 14/06/2016 - 22:48

This post was originally published on the Mozilla Blog.

Watch the presentation of Mozilla’s Boards expansion on AirMozilla

In a post earlier this month, I mentioned the importance of building a network of people who can help us identify and recruit potential Board level contributors and senior advisors. We are also currently working to expand both the Mozilla Foundation and Mozilla Corporation Boards.

The role of a Mozilla Board member

I’ve written a few posts about the role of the Board of Directors at Mozilla.

At Mozilla, we invite our Board members to be more involved with management, employees and volunteers than is generally the case. It’s not that common for Board members to have unstructured contacts with individuals or even sometimes the management team. The conventional thinking is that these types of relationships make it hard for the CEO to do his or her job. We feel differently. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

We also prefer a reasonably extended “get to know each other” period for our Board members. Sometimes I hear people speak poorly of extended process, but I feel it’s very important for Mozilla.  Mozilla is an unusual organization. We’re a technology powerhouse with a broad Internet openness and empowerment mission at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the Internet industry.

It’s important that our Board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open Internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

I want all our Board members to understand that “empowering people” encompasses “user communities” but is much broader for Mozilla. Mozilla should be a resource for the set of people who care about the open Internet. We want people to look to Mozilla because we are such an excellent resource for openness online, not because we hope to “leverage our community” to do something that benefits us.

These sort of distinctions can be rather abstract in practice. So knowing someone well enough to be comfortable about these takes a while. We have a couple of ways of doing this. First, we have extensive discussions with a wide range of people. Board candidates will meet the existing Board members, members of the management team, individual contributors and volunteers. We’ve been piloting ways to work with potential Board candidates in some way. We’ve done that with Cathy Davidson, Ronaldo Lemos, Katharina Borchert and Karim Lakhani. We’re not sure we’ll be able to do it with everyone, and we don’t see it as a requirement. We do see this as a good way to get to know how someone thinks and works within the framework of the Mozilla mission. It helps us feel comfortable including someone at this senior level of stewardship.

What does a Mozilla Board member look like

Job descriptions often get long and wordy. We have those too but, for the search of new Board members, we’ve tried something else this time: a visual role description.

Board member job description for Mozilla Corporation

Board member job description for Mozilla Foundation

Here is a short explanation of how to read these visuals:

  • The horizontal lines speak to things that every Board member should have. For instance, to be a Board member, you have to care about the mission and you have to have some cultural sense of Mozilla, etc. They are a set of things that are important for each and every candidate. In addition, there is a set of things that are important for the Board as a whole. For instance, we could put international experience in there, or whether the candidate is a public spokesperson. We want some of that, but it is not necessary that every Board member has that.
  • In the vertical green columns, we have the particular skills and expertise that we are looking for at this point.
  • We would expect the horizontal lines not to change too much over time and the vertical lines to change depending on who joins the Board and who leaves.

I invite you to look at these documents and provide input on them. If you have candidates that you believe would be good Board members, send them to the boarddevelopment@mozilla.com mailing list. We will use real discretion with the names you send us.

We’ll also be designing a process for how to broaden participation in the process beyond other Board members. We want to take advantage of the awareness and the cluefulness of the organization. That will be part of a future update.

Categorieën: Mozilla-nl planet

The Mozilla Blog: Expanding Mozilla’s Boards

di, 14/06/2016 - 13:42
https://air.mozilla.org/moco-mofo-board-development/

Watch the presentation of Mozilla’s Boards expansion on AirMozilla

In a post earlier this month, I mentioned the importance of building a network of people who can help us identify and recruit potential Board level contributors and senior advisors. We are also currently working to expand both the Mozilla Foundation and Mozilla Corporation Boards.

The role of a Mozilla Board member

I’ve written a few posts about the role of the Board of Directors at Mozilla.

At Mozilla, we invite our Board members to be more involved with management, employees and volunteers than is generally the case. It’s not that common for Board members to have unstructured contacts with individuals or even sometimes the management team. The conventional thinking is that these types of relationships make it hard for the CEO to do his or her job. We feel differently. We have open flows of information in multiple channels. Part of building the world we want is to have built transparency and shared understandings.

We also prefer a reasonably extended “get to know each other” period for our Board members. Sometimes I hear people speak poorly of extended process, but I feel it’s very important for Mozilla.  Mozilla is an unusual organization. We’re a technology powerhouse with a broad Internet openness and empowerment mission at its core. We feel like a product organization to those from the nonprofit world; we feel like a non-profit organization to those from the Internet industry.

It’s important that our Board members understand the full breadth of Mozilla’s mission. It’s important that Mozilla Foundation Board members understand why we build consumer products, why it happens in the subsidiary and why they cannot micro-manage this work. It is equally important that Mozilla Corporation Board members understand why we engage in the open Internet activities of the Mozilla Foundation and why we seek to develop complementary programs and shared goals.

I want all our Board members to understand that “empowering people” encompasses “user communities” but is much broader for Mozilla. Mozilla should be a resource for the set of people who care about the open Internet. We want people to look to Mozilla because we are such an excellent resource for openness online, not because we hope to “leverage our community” to do something that benefits us.

These sorts of distinctions can be rather abstract in practice, so getting to know someone well enough to be comfortable on these points takes a while. We have a couple of ways of doing this. First, we have extensive discussions with a wide range of people: Board candidates will meet the existing Board members, members of the management team, individual contributors and volunteers. We've also been piloting ways to work with potential Board candidates before they join. We've done that with Cathy Davidson, Ronaldo Lemos, Katharina Borchert and Karim Lakhani. We're not sure we'll be able to do it with everyone, and we don't see it as a requirement. We do see it as a good way to learn how someone thinks and works within the framework of the Mozilla mission, and it helps us feel comfortable including someone at this senior level of stewardship.

What does a Mozilla Board member look like?

Job descriptions often get long and wordy. We have those too but, for the search of new Board members, we’ve tried something else this time: a visual role description.

Board member job description for Mozilla Corporation

Board member job description for Mozilla Foundation

Here is a short explanation of how to read these visuals:

  • The horizontal lines speak to things that every Board member should have. For instance, to be a Board member you have to care about the mission, and you have to have some cultural sense of Mozilla. These are things that are important for each and every candidate. In addition, there is a set of things that are important for the Board as a whole. For instance, we could put international experience there, or whether the candidate is a public spokesperson. We want some of that on the Board, but it is not necessary that every Board member has it.
  • In the vertical green columns, we have the particular skills and expertise that we are looking for at this point.
  • We would expect the horizontal lines not to change much over time, and the vertical columns to change depending on who joins the Board and who leaves.

I invite you to look at these documents and provide input on them. If you have candidates whom you believe would be good Board members, send them to the boarddevelopment@mozilla.com mailing list. We will treat the names you send us with real discretion.

We'll also be designing a process for broadening participation in the search beyond the other Board members. We want to take advantage of the awareness and the cluefulness of the organization. That will be part of a future update.
