
Firefox UX: Firefox Workflow User Research in Germany

Mozilla planet - Thu, 13/07/2017 - 14:37
Munich Public Transit (Photo: Gemma Petrie)

Last year, the Firefox User Research team conducted a series of formative research projects studying multi-device task continuity. While these previous studies broadly investigated types of task flows and strategies for continuity across devices, they did not focus on the functionality, usability, or user goals behind these specific workflows.

For most users, interaction with browsers can be viewed as a series of specific, repeatable workflows. Within the idea of a “workflow” is the theory of “flow.” Flow has been defined as:

a state of mind experienced by people who are deeply involved in an activity. For example, sometimes while surfing the Net, people become so focused on their pursuit that they lose track of time and temporarily forget about their surroundings and usual concerns…Flow has been described as an intrinsically enjoyable experience.¹

As new features and service integrations are introduced to existing products, there is a risk that unarticulated assumptions about usage context and user mental models could create obstacles for our users. Our goal for this research was to identify these obstacles and gain a detailed understanding of the behaviors, motivations, and strategies behind current browser-based user workflows and related device or app-based workflows. These insights will help us develop products, services, and features for our users.

Primary Research Questions
  • How can we understand users’ current behaviors to develop new workflows within the browser?
  • How do workflows & “flow” states differ between and among different devices?
  • In which current browser workflows do users encounter obstacles? What are these obstacles?
  • Are there types of workflows for specific types of users and their goals? What are they?
  • How are users’ unmet workflow needs being met outside of the browser? And how might we meet those needs in the browser?

In order to understand users’ workflows, we employed a three-part, mixed method approach.


Survey

The first phase of our study was a twenty-question survey deployed to 1,000 respondents in Germany, provided by SSI’s standard international general population panel. We asked participants to select the Internet activities they had engaged in in the previous week. Participants were also asked questions about their browser usage on multiple devices, as well as their perceptions of privacy. We modeled this survey on Pew Research Center’s “The Internet and Daily Life” study.

Experience Sampling

In the second phase, a separate group of 26 German participants was recruited from four major German cities: Cologne, Hamburg, Munich, and Leipzig. These participants represented a diverse range of demographic groups, and half of them used Firefox as their primary browser on at least one of their devices. Participants were asked to download a mobile app called Paco, which cued them up to seven times daily, asking about their current Internet activity, its context, and their mental state while completing it.

In-Person Interviews

In the final phase of the study, we selected 11 of the participants from the Experience Sampling segment from Hamburg, Munich, and Leipzig. Over the course of 3 weeks, we visited these participants in their homes and conducted 90-minute interview and observation sessions. Based on the survey results and experience sampling observations, we explored a small set of participants’ workflows in detail.

Product Managers participating in affinity diagramming in the Mozilla Toronto office. (Photo: Gemma Petrie)

Field Team Participation

The Firefox User Research team believes it is important to involve a wide variety of staff members in the experience of in-context research and analysis activities. Members of the Firefox product management and UX design teams accompanied the research team for these in-home interviews in Germany. After the interviews, the whole team met in Toronto for a week to absorb and analyze the data collected from the three segments. The results presented here are based on the analysis provided by the team.


Based on our research, we define a workflow as a habitual, frequently employed set of discrete steps that users build into a larger activity. Users employ the tools they have at hand (e.g., tabs, bookmarks, screenshots) to achieve a goal. Workflows can also span across multiple devices, range from simple to technically sophisticated, exist across noncontinuous durations of time, and contain multiple decisions within them.

Example Workflow from Hamburg Participant #2

We observed that workflows appeared to participants as naturally constructed actions. Their workflows were so unconscious or self-evident that participants often found it challenging to articulate and reconstruct them. Examples of workflows include: comparison shopping, checking email, checking news updates, and sharing an image with someone else.

Workflows Model

Based on our study, we have developed a general two-part model to illustrate a workflow.

Part 1: Workflows are constructed from discrete steps. These steps are atomic and include actions like typing in a URL, pressing a button, taking a screenshot, sending a text message, saving a bookmark, etc. We mean “atomic” in the sense that the steps are simple, irreducible actions in the browser or other software tools. When employed alone, these actions can achieve a simple result (e.g. creating a bookmark). Users build up the atomic actions into larger actions that constitute a workflow.

Part 2: Outside factors can influence the choices users make for both a whole workflow and steps within a workflow. These factors include software components, physical components, and psycho/social/cultural factors.
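As a rough illustration of this two-part model, the sketch below composes atomic steps into a workflow and records the outside factors alongside it. All of the names and steps here are invented for illustration; they are not drawn from the study itself.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Step:
    """An atomic action, e.g. opening a tab or saving a bookmark."""
    name: str
    action: Callable[[], None] = lambda: None

@dataclass
class Workflow:
    """A habitual set of discrete steps built into a larger activity."""
    goal: str
    steps: List[Step] = field(default_factory=list)
    # Outside factors (software, physical, psycho/social/cultural)
    # that shape which steps the user chooses.
    factors: List[str] = field(default_factory=list)

    def add(self, step: Step) -> "Workflow":
        self.steps.append(step)
        return self

    def run(self) -> List[str]:
        """Perform each atomic step in order; return the step names."""
        performed = []
        for step in self.steps:
            step.action()
            performed.append(step.name)
        return performed

# A hypothetical comparison-shopping workflow built from atomic browser actions.
shopping = Workflow(
    goal="comparison shopping",
    factors=["tabs (software)", "device form factor (physical)",
             "memory for prices (cognitive)"],
)
for name in ["open tab", "search product", "open tab",
             "compare prices", "save bookmark"]:
    shopping.add(Step(name))

print(shopping.run())
```

The point of the sketch is that the same atomic steps (open tab, save bookmark) recur across many different workflows; only their composition and the surrounding factors change.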

Trying to find the Mozilla Berlin office. (Photo: Gemma Petrie)

Factors Influencing Workflows

While workflows are composed from atomic building blocks of tools, there is a great deal more that influences their construction and adoption among users.

Software Components

Software components are features of the operating system, the browser, and the specs of web technology that allow users to complete small atomic tasks. Some software components also constrain users into limited tasks or are obstacles to some workflows.

The basic building blocks of the browser are the features, tools, and preferences that allow users to complete tasks with the browser. Some examples include: Tabs, bookmarks, screenshots, authentication, and notifications.

Physical Components

Physical components are the devices and technology infrastructure that inform how users interact with software and the Internet. These components employ software but it is users’ physical interaction with them that makes these factors distinct. Some examples include: Access to the internet, network availability, and device form factors.

Psycho/Social/Cultural Factors

Psycho/Social/Cultural influences are contextual, social, and cognitive factors that affect users’ approaches to and decisions about their workflows.

Participants use memory to fill in gaps in their workflows where technology does not support persistence. For example, when comparison shopping, a user may have multiple tabs open to compare prices; the user relies on memory to keep in mind the prices of the same item in the other tabs.

Participants exercised control over the role of technology in their lives either actively or passively. For example, some participants believed that they received too many notifications from apps and services, and often did not understand how to change these settings. This experience eroded their sense of control over their technology and forced these participants to develop alternate strategies for regaining control over these interruptions. For others, notifications were seen as a benefit. For example, one of our Leipzig participants used home automation tools and their associated notifications on his mobile devices to give him more control over his home environment.

Other examples of psycho/social/cultural factors we observed included: Work/personal divides, identity management, fashion trends in technology adoption, and privacy concerns.

Using the Workflows Model

When analyzing current user workflows, the parts of the model should be cues to examine how the workflow is constructed and what factors influence its construction. When building new features, it can be helpful to ask the following questions to determine viability:

  • Are the steps we are creating truly atomic and usable in multiple workflows?
  • Are we supplying software components that give flexibility to a workflow?
  • What effect will physical factors have on the atomic components in the workflow?
  • How do psycho-social-cultural factors influence users’ choices about the components they are using in the workflow?
Hamburg Train Station (Photo: Gemma Petrie)

Design Principles & Recommendations
  • New features should be atomic elements, not complete user workflows.
  • Don’t be prescriptive, but facilitate efficiency.
  • Give users the tools to build their own workflows.
  • While software and physical components are important, psycho/social/cultural factors are equally important and influential in users’ workflow decisions.
  • Make it easy for users to actively control notifications and other flow disruptors.
  • Leverage online content to support and improve offline experiences.
  • Help users bridge the gap between primary-device workflows and secondary devices.
  • Make it easy for users to manage a variety of identities across various devices and services.
  • Help users manage memory gaps related to revisiting and curating saved content.
Future Research Phases

The Firefox User Research team conducted additional phases of this research in Canada, the United States, Japan, and Vietnam. Check back for updates on our work.


¹ Pace, S. (2004). A grounded theory of the flow experiences of Web users. International journal of human-computer studies, 60(3), 327–363.

Firefox Workflow User Research in Germany was originally published in Firefox User Experience on Medium, where people are continuing the conversation by highlighting and responding to this story.

Categories: Mozilla-nl planet

Air Mozilla: The Joy of Coding - Episode 105

Mozilla planet - Wed, 12/07/2017 - 19:00

The Joy of Coding - Episode 105: mconley livehacks on real Firefox bugs while thinking aloud.


Chris H-C: Latency Improvements, or, Yet Another Satisfying Graph

Mozilla planet - Wed, 12/07/2017 - 18:33

This is the third in my ongoing series of posts containing satisfying graphs.

Today’s feature: a plot of the mean and 95th percentile submission delays of “main” pings received by Firefox Telemetry from users running Firefox Beta.

Figure: Beta “main” ping submission delay in hours (mean, 95th percentile), screenshot taken 2017-07-12

We went from receiving 95% of pings after about, say, 130 hours (or 5.5 days) down to getting them within about 55 hours (2 days and change). And the numbers will continue to fall as more beta users get the modern beta builds with lower latency ping sending thanks to pingsender.

What does this mean? This means that you should no longer have to wait a week to get a decently-rigorous count of data that comes in via “main” pings (which is most of our data). Instead, you only have to wait a couple of days.

Some teams were using the rule-of-thumb of ten (10) days before counting anything that came in from “main” pings. We should be able to reduce that significantly.

How significantly? Time, and data, will tell. This quarter I’m looking into what guarantees we might be able to extend about our data quality, which includes timeliness… so stay tuned.

For a more rigorous take on this, partake in any of dexter’s recent reports on RTMO. He’s been tracking the latency improvements and possible increases in duplicate ping rates as these changes have ridden the trains towards release. He’s blogged about it if you want all the rigour but none of Python.


FINE PRINT: Yes, due to how these graphs work they will always look better towards the end because the really delayed stuff hasn’t reached us yet. However, even by the standards of the pre-pingsender mean and 95th percentiles we are far enough after the massive improvement for it to be exceedingly unlikely to change much as more data is received. By the post-pingsender standards, it is almost impossible. So there.

FINER PRINT: These figures include adjustments for client clocks having skewed relative to server clocks. Time is a really hard problem even on a single computer, and trying to reconcile it between many computers separated by oceans both literal and metaphorical is the subject of several dissertations and, likely, therapy sessions. As I mentioned above, for rigour and detail about this and other aspects, see RTMO.


Air Mozilla: Weekly SUMO Community Meeting July 12, 2017

Mozilla planet - Wed, 12/07/2017 - 18:00

Weekly SUMO Community Meeting July 12, 2017: This is the SUMO weekly call.


Defending Net Neutrality: A Day of Action

Mozilla Blog - Wed, 12/07/2017 - 05:59
Mozilla is participating in the Day of Action with a new podcast, video interviews with U.S. Senators, a special Firefox bulletin, and more


As always, Mozilla is standing up for net neutrality.

And today, we’re not alone. Hundreds of organizations — from the ACLU and GitHub to Amazon and Fight for the Future — are participating in a Day of Action, voicing loud support for net neutrality and a healthy internet.

“Mozilla is supporting the majority of Americans who believe the web belongs to individual users, without interference from ISP gatekeepers,” says Ashley Boyd, Mozilla’s VP of Advocacy. “On this Day of Action, we’re amplifying what millions of Americans have been saying for years: Net neutrality is crucial to a free, open internet.”

“We are fighting to protect net neutrality, again, because it’s crucial to the future of the internet,” says Denelle Dixon, Mozilla Chief Legal and Business Officer. “Net neutrality prohibits ISPs from engaging in prioritization, blocking or throttling of content and services online. As a result, net neutrality serves to enable free speech, competition, innovation and user choice online.”

The Day of Action is a response to FCC Commissioner Ajit Pai’s proposal to repeal net neutrality protections enacted in 2015. The FCC voted to move forward with Pai’s proposal in May; we’re currently in the public comment phase. You can read more about the process here.

Here’s how Mozilla is participating in the Day of Action — and how you can get involved, too:

Nine hours of public comments. Over the past few months, Mozilla has collected more than 60,000 comments from Americans in defense of net neutrality.

“The internet should be open for all and not given over to big business,” wrote one commenter. “Net neutrality protects small businesses and innovators who are just getting started,” penned another.

We’ll share all 60,000 comments with the FCC. But first, we’re reading a portion of them aloud in a nine-hour, net neutrality-themed spoken-word marathon.

And we’re showcasing the comments on Firefox, to inspire more Americans to stand up for net neutrality. When Firefox users open a new window today, a different message in support of net neutrality will appear in the “snippet,” the bulletin above and beneath the search bar.

It’s not too late to submit your own comment. Visit to add your voice.

A word from Senators Franken and Wyden. Senator Al Franken (D-Minnesota) and Senator Ron Wyden (D-Oregon) are two of the Senate’s leading voices for net neutrality. Mozilla spoke with both about net neutrality’s connection to free speech, competition, and innovation. Here’s what they had to say:

Stay tuned for more interviews with Congress members about the importance of net neutrality.

Comments for the FCC. Mozilla’s Public Policy team is finishing up comments to the FCC on the importance of enforceable net neutrality to ensure that voices are free to be heard. They will speak to how net neutrality fundamentally enables free speech, online competition and innovation, and user choice. Like our comments from 2010 and 2014, we will defend all users’ ability to create and consume online, and will defend the vitality of the internet. User rights should not be used in a political play.

Net neutrality podcast. We just released the second episode of Mozilla’s original podcast, IRL, which focuses on who wins — and who loses — if net neutrality is repealed. Listen to host Veronica Belmont explore the issue in depth with a roster of guests holding different viewpoints, from Patrick Pittaluga of Grubbly Farms (a maggot farming business in Georgia), to Jessica González of Free Press, to Dr. Roslyn Layton of the American Enterprise Institute.

Subscribe wherever you get your podcasts, or listen on our website.

Today, we’re amplifying the voices of millions of Americans. And we need your help: Visit to join the movement. The future of net neutrality — and the very health of the internet — depends on it.

Note: This blog was updated on July 12 at 2:30 p.m. ET to reflect the most recent number of public comments collected.

The post Defending Net Neutrality: A Day of Action appeared first on The Mozilla Blog.


Mozilla Open Innovation Team: Reflection — Inclusive Organizing for Open Source Communities

Mozilla planet - Tue, 11/07/2017 - 15:08

This is the second in a series of posts reporting findings from three months of research into the state of D&I in Mozilla’s communities. The current state of our communities is a mix when it comes to inclusivity: we can do better, and this blog post is an effort to be transparent about what we’ve learned in working toward that goal.

If we are to truly unleash the full value of a participatory, open Mozilla, we need to personalize and humanize Mozilla’s value by designing diversity and inclusion into our strategy. To unlock the full potential of an open innovation model we need radical change in how we develop community and who we include.

Photo credit: national museum of american history via / CC BY-NC

We learned that while bottom-up, or grassroots organizing enables ideas and vision to surface within Mozilla, those who succeed in that environment tend to be of the same backgrounds, usually the more dominant or privileged background, and often in tenured* community roles.

*tenured — Roles assigned by Mozilla, or legacy to the community that exist without timelines, or cycles for renewal/election.

Our research dove into many areas of inclusive organizing, and three clusters of recommendations surfaced:

1. Provide Organizational Support to Identity Groups

For a diversity of reasons, including safety, friendship, mentorship, advocacy, and empowerment, identity groups serve both as a safe space and as a springboard into, and out of, greater community participation, to the betterment of both. Those in Mozilla’s Tech Speakers program spoke very positively of the opportunity, responsibility, and camaraderie provided by this program, which brings together contributors focused on building speaking skills for conferences.

“Building a sense of belonging in small groups, no matter the focus (Tech Speakers, Women of Color, Spanish Speaking), is an inclusion methodology.” — Non-Binary contributor, Europe

Womoz (Women of Mozilla) showed up in our research as an example of an identity group trying to solve and overcome systemic problems faced by women in open source. However, without centralized goals or structured organizational support for the program, communities were left on their own to define the group, its goals, who is included, and why. As a result, we found vastly different perspectives on, and implementations of, ‘Womoz’, ranging from successful inclusion methodologies to exclusive or even relegatory groups:

  • As a ‘label’: a catch-all for non-male and non-binary contributors.
  • A way for women to gather that aligns with cultural traditions for convening of young women — and approved by families within specific cultural norms.
  • A safe, and empowered way to meet and talk with people of the same gender-identity about topics and problems specific to women.
  • A way to escape toxic communities where discussion and opportunity is dominated by men, or where women were purposely excluded.
  • An online community with several different channels for sharing advocacy (mailing list, Telegram group, website, wiki page).
  • As a stereotypical way to reference women-only contributing in specific areas (such as crafts, non-technical) or relegating women to those areas.

For identity groups to fully flourish and to avoid negative manifestations like tokenism and relegation, we recommend that organizational support (dedicated strategic planning, staffing, time, and resources) be prioritized where leadership and momentum exist, and where diversity can thrive. Mozilla is already investing in a pilot for identity groups with staff and core contributors, which is exciting progress on what we’ve learned. The full potential is that identity groups act as ‘diversity launch pads’ into and out of the larger community, to the benefit of both.

2. Build Project-Wide Strategies for Toxic Behavior

Photo by Paul Riismandel BY-NC-SA 2.0

Research exposed toxic behavior as a systemic issue being tackled in many different ways, and by both employees and community leaders truly dedicated to creating healthy communities.

Our insights amplified external findings on toxic behavior, which show that toxic people tend to be highly productive, masking the true impact of behavior that deters and excludes large numbers of people. Furthermore, the perceived value of, or reliance on, the work of toxic individuals and regional communities was suspected to complicate, deter, or dilute intervention effectiveness.

Within this finding we have three recommendations:

  1. Develop regionally-specific strategies for community organizing. Global regions with laws and cultural norms that exclude, or limit opportunity for, women and other marginalized groups demonstrate the challenge of a global approach to open source community health. Generally, we found that those groups with cultural and economic privilege within a given local context were also the majority of the people who succeeded in participation. Where toxic communities and individuals exist, the disproportionate representation of diverse groups was magnified to the point where standard approaches to community health were not successful. Specialized strategies might be very different per region, but with investment they can amplify the potential of marginalized people within.
  2. Create an organization-wide strategy for tracking decisions and outcomes of Community Participation Guidelines violations. While some projects and project maintainers have been successful in banning and managing toxic behavior, there is no central mechanism for tracking those decisions for the benefit of others who may be faced with similar decisions, or with the same people. We learned of at least one case where a banned toxic contributor surfaced in another area of the project without the knowledge of that team. Efforts in this vein, including recent revisions to the guidelines and current efforts to train community leaders and build a central mechanism, need to be continued and expanded.
  3. Build team strategies for managing conflict and toxic behavior. A range of approaches to community health surfaced both successes and struggles in managing conflict and toxic behavior within projects and at community events. What appears to be important for successful moderation and resolution is that project teams designate specific roles, with training, accountability, and transparency.
3. Develop Inclusive Community Leadership Models

In qualitative interviews and data analysis, ‘gatekeeping’* showed up as one of the greatest blockers for diversity in communities.

Characteristics of gatekeeping in Mozilla’s communities included withholding opportunity and recognition (like vouching), purposeful exclusion, delaying or limiting translation of opportunities into other languages, as well as lying about the availability or existence of leadership roles and event invitations/applications. We also heard a lot about nepotism, as well as both passive and deliberate banishment of individuals, especially women, who showed a sharp decrease in participation over time compared with men.

*Gatekeeping is noted as only one reason participation for women decreased, and we will expand on that in other posts.

Much of this data reinforced what we know about the myth of meritocracy, and that despite its promise, meritocracy actually enables toxic, exclusionary norms that prevent development of resilient, diverse communities.

“New people are not really welcomed, they are not given a lot of information on purpose.” (male contributor, South America)

As a result of these findings, we recommend that all community leadership roles be designed with accountability for community health, inclusion, and surfacing the achievement of others as a core function. Furthermore, renewable cycles for community roles should be required, with designed challenges (like elections) for terms that extend beyond one year, and with outreach to diverse groups for participation.

The long-term goal is that leadership begins a cultural shift in community that radiates empowerment for diverse groups who might not otherwise engage.

Our next post in this series, ‘We See You — Reaching Diverse Audiences in FOSS’, will be published on July 21st. Until then, check out ‘Increasing Rust’s Reach’, an initiative that emerged from a design sprint focused on this research.

Reflection — Inclusive Organizing for Open Source Communities was originally published in Mozilla Open Innovation on Medium, where people are continuing the conversation by highlighting and responding to this story.


The Mozilla Blog: Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide

Mozilla planet - Tue, 11/07/2017 - 14:59

For most countries around the world, school is out, and parents are reconnecting with their kids to enjoy road trips and long days. Many of our Mozilla employees have benefited from the expanded parental leave program we introduced last year to spend quality time with their families. The program offers childbearing parents up to 26 weeks of fully paid leave, and non-childbearing parents (foster and adoptive parents, and partners of childbearing parents) up to 12 weeks of fully paid leave.

This July, we completed the global rollout of the program, making Mozilla a leader in the tech industry and among organizations with a worldwide employee base.

What makes Mozilla’s parental leave program unique

It sets us apart from other tech companies and other organizations:

  • 2016 Lookback: A benefit for employees who welcomed a child in the calendar year prior to the expanded benefit being rolled out.
  • Global Benefit: As a US-based company with employees all over the world, we chose to offer it to employees around the world — US, Canada, Belgium, Finland, France, Germany, the Netherlands, Spain, Sweden, UK, Australia, New Zealand, Taiwan.
  • Fully Paid Leave: All parents receive their full salary during their leave.
What our Mozilla employees have to say:

“Our second son was born in January 2017. When I heard about the new policy that Mozilla will launch globally one month before, I first was not sure how that will work out with the statutory parental leave rules in Germany. But I have to say that I first enjoyed working with Rachel to work out all the details — and now I get enjoy a summer with my family. The second child has changed my life completely, it was hard to match work and family needs. I am grateful that I will have time to give back to my son and my family and grow even more closer together.”  Dominik Strohmeier, based in Berlin, Germany.  Two children, with second child born in 2017.

Chelsea Novak with baby

“Our daughter was born in 2016,” says Chelsea Novak, Firefox Editorial Lead. “When Mozilla announced this new parental leave policy we were excited for parents that were expecting in 2017, but a little sad that we missed out. Having Mozilla extend these new parental leave benefits to us was very generous and gave us some precious time with our family that we weren’t expecting.”  Chelsea and Matej Novak, both longtime Canadian Mozilla employees, based in Toronto. Two children, ages 1 and 3.




“I started with Mozilla in the beginning of 2016, and delivered my child that same year. When I first heard of the policy, I didn’t think the new parental leave would apply to me. Then, Rachel told me the good news. I was amazed that they would extend the parental leave policy to me so that I can take additional time off in 2017.  Mozilla is so generous to parents like myself to enjoy special moments like watching my daughter take her first steps or saying her first words.”   Jen Boscacci, based in Mountain View, California.  Two children, with second child born in 2016.


Maura Tuohy with baby

“Being able to take advantage of the 26 weeks of leave — and have the flexibility of when to take it — was an incredible gift for our family. Knowing that the company was so supportive made the experience as stress free as having a newborn can be! I’m so grateful to work for such a progressive and kind company — not just in policies but in culture and practice.”  Maura Tuohy, based in San Francisco.  Her first child was born in 2017.



This program helps us embrace and celebrate families of all kinds. Whether it’s adoption or foster care, we expanded our support for both childbearing and non-childbearing parents, independent of gender or situation. We value our Mozilla employees, because juggling work and family responsibilities is no easy feat.

The post Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide appeared first on The Mozilla Blog.


Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide

Mozilla Blog - ti, 11/07/2017 - 14:59

For most countries around the world, school is out, and parents are reconnecting with their kids to enjoy road trips and long days. Many of our Mozilla employees have benefited from the expanded parental leave program we introduced last year to spend quality time with their families. The program offers childbearing parents up to 26 weeks of fully paid leave and non-childbearing (foster and adoptive parents, partners of childbearing) parents up to 12 weeks of fully paid leave.

This July, we completed the global roll out of the program making Mozilla a leader in the tech industry and among organizations with a worldwide employee base.

What makes Mozilla’s parental leave program unique

And sets us apart from other tech companies and other organizations:

  • 2016 Lookback: A benefit for employees who welcomed a child in the calendar year prior to the expanded benefit being rolled out.
  • Global Benefit: As a US-based company with employees all over the world, we chose to offer it to employees around the world — US, Canada, Belgium, Finland, France, Germany, the Netherlands, Spain, Sweden, UK, Australia, New Zealand, Taiwan.
  • Fully Paid Leave: For all parents, they’ll receive their full salary during that time.
What our Mozilla employees have to say:

“Our second son was born in January 2017. When I heard about the new policy that Mozilla will launch globally one month before, I first was not sure how that will work out with the statutory parental leave rules in Germany. But I have to say that I first enjoyed working with Rachel to work out all the details — and now I get enjoy a summer with my family. The second child has changed my life completely, it was hard to match work and family needs. I am grateful that I will have time to give back to my son and my family and grow even more closer together.”  Dominik Strohmeier, based in Berlin, Germany.  Two children, with second child born in 2017.

Chelsea Novak with baby

“Our daughter was born in 2016,” says Chelsea Novak, Firefox Editorial Lead. “When Mozilla announced this new parental leave policy we were excited for parents that were expecting in 2017, but a little sad that we missed out. Having Mozilla extend these new parental leave benefits to us was very generous and gave us some precious time with our family that we weren’t expecting.”  Chelsea and Matej Novak, both longtime Canadian Mozilla employees, based in Toronto. Two children, ages 1 and 3.




“I started with Mozilla in the beginning of 2016, and delivered my child that same year. When I first heard of the policy, I didn’t think the new parental leave would apply to me. Then, Rachel told me the good news. I was amazed that they would extend the parental leave policy to me so that I can take additional time off in 2017.  Mozilla is so generous to parents like myself to enjoy special moments like watching my daughter take her first steps or saying her first words.”   Jen Boscacci, based in Mountain View, California.  Two children, with second child born in 2016.


Maura Tuohy with baby

“Being able to take advantage of the 26 weeks of leave — and have the flexibility of when to take it — was an incredible gift for our family. Knowing that the company was so supportive made the experience as stress free as having a newborn can be! I’m so grateful to work for such a progressive and kind company — not just in policies but in culture and practice.”  Maura Tuohy, based in San Francisco.  Her first child was born in 2017.



This program helps us embrace and celebrate families of all kinds, including those formed through adoption and foster care. We expanded our support to both childbearing and non-childbearing parents, independent of gender or situation. We value our Mozilla employees, because juggling work and family responsibilities is no easy feat.

The post Mozilla Fully Paid Parental Leave Program Officially Rolls Out Worldwide appeared first on The Mozilla Blog.

Categorieën: Mozilla-nl planet

Shing Lyu: Install Ubuntu 16.04 on ThinkPad 13 (2nd Gen)

Mozilla planet - ti, 11/07/2017 - 09:48

It has been a while since my last post. I've been busy for the first half of this year, but now I have more free time. Hopefully I can get back to my usual pace of one post per month.

My old laptop (Inhon Carbonbook, discontinued) had a swollen battery. I kept using it for a few months, but then the battery squeezed my keyboard so I could no longer type correctly. After some research I decided to buy the ThinkPad 13 because it provides decent hardware for its price, and the weight (~1.5 kg) is acceptable. Every time I get a new computer, the first thing I do is get Linux up and running. So here are my notes on how to install Ubuntu Linux on it.

TL;DR: Everything works out of the box. Just remember to turn off secure boot and shrink the disk in Windows before you install.

Some assumptions

Before Installing Ubuntu

First we need to free up some disk space for Ubuntu. (If you are going to delete Windows completely, you can skip the following steps.)

  • (In Windows) Right click on the start menu, select “PowerShell (administrator)”, then run diskmgmt.msc.
  • Right click the C: disk and select the shrink disk option.

Before we can install Ubuntu, we need to disable the secure boot feature in BIOS.

  • In Windows, hold Shift while clicking Restart to be able to get into the BIOS; otherwise fast startup will keep booting you straight into Windows.
  • Press Enter followed by F1 to go into BIOS during boot.
  • Disable Secure Boot.

(Screenshots: BIOS Secure Boot menu and sub-menu)

UEFI boot seems to work with Ubuntu 16.04's installer, so you can keep all the UEFI options enabled in the BIOS. Download the Ubuntu installation disk and use a bootable USB disk creator that supports UEFI, for example Rufus.

Installing Ubuntu

Installing Ubuntu should be pretty straightforward if you’ve done it before.

  • Go to BIOS again and set the USB drive as top priority boot medium.
  • Boot into Ubuntu, select “Install Ubuntu”.
  • Follow the installer wizard.
  • Select “Something else” when asked how to partition the disk.
  • Create a 4 GB linux-swap partition for 8 GB of RAM. I followed Red Hat's suggestion, but there are different theories out there, so use your own judgement here.
  • Create an ext4 main partition and set its mount point to /.
  • After the installer finishes, reboot. You should see the GRUB2 menu, and the Windows option should also work without trouble.


What works?

Almost everything works out of the box. All the media keys, like volume control, brightness, and the WiFi and Bluetooth toggles, are recognized by the built-in Unity desktop. (I use i3, though, so I had to set them up myself; more on that later.) The USB Type-C port was a nice surprise. It supports charging, so you can charge with any Type-C charger; I tested Apple's MacBook charger and it works well. I also tested Apple's and Dell's Type-C multi-port adapters, and both work, so I can plug my external monitor and keyboard/mouse into one and use the laptop as if docked.


A side note: I'm also glad to find that Ubuntu now uses the Fcitx IME by default. Most of the IME bugs I hit in previous versions with ibus are now gone.

What doesn’t work?

Nothing that I'm aware of. The only complaint I have is that the Ethernet and Wi-Fi interface naming scheme has changed (e.g. enp0s31f6 instead of eth0), but I believe that can be fixed in software. People also complain that the power button and the hinge are not very sturdy, but I guess that's the compromise you make for the relatively low price.


More on setting up media keys for i3 window manager

Since I use the i3 window manager, I don't have Unity to handle my media keys. But it's not hard to set them up with the following i3 config:

bindsym XF86AudioRaiseVolume exec amixer -q set Master 2dB+ unmute
bindsym XF86AudioLowerVolume exec amixer -q set Master 2dB- unmute
# Must assign device "pulse" to successfully unmute
bindsym XF86AudioMute exec amixer -D pulse set Master toggle
# Only two levels of brightness
bindsym XF86MonBrightnessUp exec xrandr --output eDP-1 --brightness 0.8
bindsym XF86MonBrightnessDown exec xrandr --output eDP-1 --brightness 0.5

The only drawback is that the LED indicator on the mute button might be out of sync with the real software state.


Overall, the ThinkPad 13 is a decent laptop for its price range. Ubuntu 16.04 works out of the box. No sweat! If you are looking for a good Linux laptop but can't afford a ThinkPad X1 Carbon or a Dell XPS 13/15, the ThinkPad 13 might be a good choice.


Andy McKay: Manual review of add-ons

Mozilla planet - ti, 11/07/2017 - 09:00

As we start to expand WebExtension APIs beyond parity with Chrome, a common theme is appearing in bug comments when proposing new APIs. That theme is something like "we'll have to give add-ons using that API a special manual review".

Put simply, that's not happening. Either we feel comfortable with an API and everyone can use it, or we don't implement it. There won't be any special manual review process for WebExtensions for specific APIs.

Manual review has quite a few problems. Bluntly, it costs Mozillians resources and time, and it upsets developers.

On the cost side, we've had to put an awful lot of developer and reviewer (both paid and volunteer) time into reviewing extensions. There are also tools and sites maintained by Mozilla to support the review process.

But more than that, developers have told us loud and clear that they dislike the review process. It causes delays, and developers get upset when reviewers (many of whom are volunteers) aren't able to turn around reviews within a reasonable time.

Further, manual review forces developers to upload unobfuscated sources, something that is getting harder and harder as webpack, Browserify, and other build tools gain popularity.

And finally, manual review isn't perfect. It's hard to review code, look for all the possible security and policy problems, and ensure that a questionable API isn't doing something we'd feel uncomfortable with.

Manual review has its place at Mozilla, but one thing we shouldn't be doing is placing more burdens on the process. We should be aiming to streamline review and ease the burden on reviewers and developers.

The result is we've got to either say no to the API or find a way to make everyone comfortable with the API.


Andy McKay: Mail filters

Mozilla planet - ti, 11/07/2017 - 09:00

Wil posted on his blog some mail filters he uses to cope with all his incoming mail. Here are a few of mine:

Highlight mentions on mentored bugs:

Matches: "X-Bugzilla-Mentors"
Do this: Skip Inbox, Apply label "Bugzilla/Mentored"

Filter out intermittents:

Matches: "X-Bugzilla-Keywords: intermittent-failure"
Do this: Skip Inbox, Apply label "Bugzilla/Intermittents"

Filter down by a specific product and component:

Matches: "X-Bugzilla-Product: Firefox" "X-Bugzilla-Component: Extension Compatibility"
Do this: Skip Inbox, Apply label "Bugzilla/Extension Compat"

This Week In Rust: This Week in Rust 190

Mozilla planet - ti, 11/07/2017 - 06:00

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community News & Blog Posts Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week's friend of the forest is Guillaume Gomez, whose influence is evident everywhere you look in Rust.

Crate of the Week

Sadly, no crate was nominated this week.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

113 pull requests were merged in the last week

New Contributors
  • boreeas
  • Kornel
  • oyvindln
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

The RFC style is now the default style in Rustfmt - try it out and let us know what you think!

An interesting issue:

Good first issues:

We're happy to mentor these; please reach out to us in #rust-style if you'd like to get involved.

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Unsafe is your friend! It's maybe not a friend like you would invite to your sister's wedding, or the christening of her first-born child. But it's sort of the friend who lives in the country and has a pick-up truck and 37 guns. And so you might not want to hang out with them all the time, but if you need something blown up he is there for you.

Simon Heath on game development in Rust (at 38:35 in video).

Thanks to G2P and David Tolnay for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.


Niko Matsakis: Non-lexical lifetimes: draft RFC and prototype available

Mozilla planet - ti, 11/07/2017 - 06:00

I’ve been hard at work the last month or so on trying to complete the non-lexical lifetimes RFC. I’m pretty excited about how it’s shaping up. I wanted to write a kind of “meta” blog post talking about the current state of the proposal – almost there! – and how you could get involved with helping to push it over the finish line.


What can I say, I’m loquacious! In case you don’t want to read the full post, here are the highlights:

  • The NLL proposal is looking good. As far as I know, the proposal covers all major intraprocedural shortcomings of the existing borrow checker. The appendix at the end of this post talks about the problems that we don’t address (yet).
  • The draft RFC is available in a GitHub repository:
    • Read it over! Open issues! Open PRs!
    • In particular, if there is some pattern you think may not be covered, please let me know about it by opening an issue.
  • There is a working prototype as well:
    • The prototype includes region inference as well as the borrow checker.
    • I hope to expand it to become the normative prototype of how the borrow checker works, allowing us to easily experiment with extensions and modifications – analogous to Chalk.
Background: what the proposal aims to fix

The goal of this proposal is to fix the intra-procedural shortcomings of the existing borrow checker. That is, to fix those cases where, without looking at any other functions or knowing anything about what they do, we can see that some function is safe. The core of the proposal is the idea of defining reference lifetimes in terms of the control-flow graph, as I discussed (over a year ago!) in my introductory blog post; but that alone isn’t enough to address some common annoyances, so I’ve grown the proposal somewhat. In addition to defining how to infer and define non-lexical lifetimes themselves, it now includes an improved definition of the Rust borrow checker – that is, how to decide which loans are in scope at any particular point and which actions are illegal as a result.

When combined with RFC 2025, this means that we will accept two more classes of programs. First, what I call “nested method calls”:

impl Foo {
    fn add(&mut self, value: Point) { ... }

    fn compute(&self) -> Point { ... }

    fn process(&mut self) {
        self.add(self.compute()); // Error today! But not with RFC 2025.
    }
}

Second, what I call “reference overwrites”. Currently, the borrow checker forbids you from writing code that updates an &mut variable whose referent is borrowed. This most commonly shows up when iterating down a slice in place (try it on play):

fn search(mut data: &mut [Data]) -> bool {
    loop {
        if let Some((first, tail)) = data.split_first_mut() {
            if is_match(first) {
                return true;
            }
            data = tail; // Error today! But not with the NLL proposal.
        } else {
            return false;
        }
    }
}

The problem here is that the current borrow checker sees that data.split_first_mut() borrows *data (which has type [Data]). Normally, when you borrow some path, then all prefixes of the path become immutable, and hence borrowing *data means that, later on, modifying data in data = tail is illegal. This rule makes sense for “interior” data like fields: if you’ve borrowed the field of a struct, then overwriting the struct itself will also overwrite the field. But the rule is too strong for references and indirection: if you overwrite an &mut, you don’t affect the data it refers to. You can workaround this problem by forcing a move of data (e.g., by writing {data}.split_first_mut()), but you shouldn’t have to. (This issue has been filed for some time as #10520, which also lists some other workarounds.)
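The forced-move workaround reads more naturally as a complete, compilable sketch. Here `i32` elements and the trivial `is_match` are placeholder assumptions standing in for the post's `Data` type:

```rust
// Stand-in for the post's `is_match`; matching on the value 3 is arbitrary.
fn is_match(d: &i32) -> bool {
    *d == 3
}

fn search(mut data: &mut [i32]) -> bool {
    loop {
        // `{data}` moves the reference out of the variable, so the borrow
        // returned by `split_first_mut` is not pinned to `data` itself, and
        // the later reassignment `data = tail` stays legal even under the
        // pre-NLL borrow checker.
        if let Some((first, tail)) = {data}.split_first_mut() {
            if is_match(first) {
                return true;
            }
            data = tail;
        } else {
            return false;
        }
    }
}

fn main() {
    assert!(search(&mut [1, 2, 3]));
    assert!(!search(&mut [4, 5]));
}
```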

Draft RFC

The Draft RFC is almost complete. I’ve created a GitHub repository containing the text. I’ve also opened issues with some of the things I wanted to get done before posting it, though the descriptions are vague and it’s not clear that all of them are necessary. If you’re interested in helping out – please, read it over! Open issues on things that you find confusing, or open PRs with suggestions, typos, whatever. I’d like to make this RFC into a group effort.

The prototype

The other thing that I’m pretty excited about is that I have a working prototype of these ideas. The prototype takes as input individual .nll files, each of which contains a few struct definitions as well as the control-flow graph of a single function. The tests are aimed at demonstrating some particular scenario. For example, the borrowck-walk-linked-list.nll test covers the “reference overwrites” that I was talking about earlier. I’ll go over it in some detail to give you the idea.

The test begins with struct declarations. These are written in a very concise form because I was too lazy to make it more user-friendly:

struct List<+> { value: 0, successor: Box<List<0>> }

// Equivalent to:
// struct List<T> {
//     value: T,
//     successor: Box<List<T>>
// }

As you can see, the type parameters are not named. Instead, we specify the variance (+ here means “covariant”). Within the function body, we reference type parameters via a number, counting backwards from the end of the list. Since there is only one parameter (T, in the Rust example), then 0 refers to T.

(In real life, this struct would use Option<Box<List<T>>>, but the prototype doesn’t model enums yet, so this is using a simplified form that is “close enough” from the point-of-view of the checker itself. We also don’t model raw pointers yet. PRs welcome!)

After the struct definitions, there are some let declarations, declaring the global variables:

let list: &'list mut List<()>;
let value: &'value mut ();

Perhaps surprisingly, the named lifetimes like 'list and 'value correspond to inference variables. That is, they are not like named lifetimes in a Rust function – which are the one major thing I’ve yet to implement – but rather inference variables. Giving them names allows us to add “assertions” (we’ll see one later) that test what results got inferred. You can also use '_ to have the parser generate a unique name for you if you don’t feel like giving an explicit one.

After the local variables, comes the control-flow graph declarations, as a series of basic-block declarations:

block START {
    list = use();
    goto LOOP;
}

Here, list = use() means “initialize list and use the (empty) list of arguments”. I’d like to improve this to support named function prototypes, but for now the prototype just has the idea of an ‘opaque use’. Basic blocks can optionally have successors, specified using goto.

One thing the prototype understands pretty well are borrows:

block LOOP {
    value = &'b1 mut (*list).value;
    list = &'b2 mut (*list);
    use(value);
    goto LOOP EXIT;
}

An expression like &'b1 mut (*list).value borrows (*list).value mutably for the lifetime 'b1 – note that the lifetime of the borrow itself is independent from the lifetime where the reference ends up. Perhaps surprisingly, the reference can have a bigger lifetime than the borrow itself: in particular, a single reference variable may be assigned from multiple borrows in disjoint parts of the graph.
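A plain Rust analogue of that last point, with illustrative names, might look like this: a single reference variable is assigned from two different borrows, one per branch of the control-flow graph, and its lifetime covers the single use that follows both.

```rust
fn larger(a: i32, b: i32) -> i32 {
    let x = a;
    let y = b;
    let r: &i32;
    if x > y {
        r = &x; // borrow of `x` in one branch of the graph...
    } else {
        r = &y; // ...borrow of `y` in the other
    }
    *r // one use point, reachable from both borrows
}

fn main() {
    assert_eq!(larger(5, 2), 5);
    assert_eq!(larger(1, 9), 9);
}
```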

Finally, the tests support two kinds of assertions. First, you can mark a given line of code as being “in error” by adding a //! comment. There isn’t one in this example, but you can see them in other tests; these identify errors that the borrow checker would report. We can also have assertions of various kinds. These check the output from lifetime inference. This test has a single assertion:

assert LOOP/0 in 'b2;

This assertion specifies that the point LOOP/0 (that is, the start of the loop) is contained within the lifetime 'b2 – that is, we realize that the reference produced by borrowing (*list) may still be in use at LOOP/0. But note that this does not prevent us from reassigning list (nor borrowing (*list)). This is because the new borrow checker is smart enough to understand that list has been reassigned in the meantime, and hence that the borrows from different loop iterations do not overlap.

Conclusion and how you can help

I think the NLL proposal itself is close to being ready to submit – I want to add a section on named lifetimes first, and add them to the prototype – but there is still lots of interesting work to be done. Naturally, reading and improving the RFC would be useful. However, I’d also like to improve the prototype. I would like to see it evolve into a more complete – but simplified – model of the borrow checker, that could serve as a good basis for analyzing the Rust type system and investigating extensions. Ideally, we would merge it with chalk, as the two complement one another: put together, they form a fairly complete model of the Rust type system (the missing piece is the initial round of type checking and coercion, which I would eventually like to model in chalk anyhow). If this vision interests you, please reach out! I have open issues on both projects, though I’ve not had time to write in tons of details – leave a comment if something sparks your interest, and I’d be happy to give more details and mentor it to completion as well.

Questions or comments?

Take it to internals!

Appendix: What the proposal won’t fix

I also want to mention a few kinds of borrow check errors that the current RFC will not eliminate – and is not intended to. These are generally errors that cross procedural boundaries in some form or another. For each case, I’ll give a short example, and give some pointers to the current thinking in how we might address it.

Closure desugaring. The first kind of error has to do with the closure desugaring. Right now, closures always capture local variables, even if the closure only uses some sub-path of the variable internally:

let get_len = || self.vec.len(); // borrows `self`, not `self.vec`
self.vec2.push(...); // error: self is borrowed

This was discussed on an internals thread; as I commented there, I’d like to fix this by making the closure desugaring smarter, and I’d love to mentor someone through such an RFC! However, it is out of scope for this one, since it does not concern the borrow check itself, but rather the details of the closure transformation.
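Until the desugaring is smarter, one workaround is to borrow the field into a local first, so the closure captures only that local. The `Foo`/`vec`/`vec2` names below just echo the snippet above and are otherwise hypothetical:

```rust
struct Foo {
    vec: Vec<i32>,
    vec2: Vec<i32>,
}

impl Foo {
    fn demo(&mut self) -> usize {
        let vec = &self.vec;        // borrow just the field...
        let get_len = || vec.len(); // ...so the closure captures `vec`, not `self`
        self.vec2.push(42);         // mutating the disjoint field is now allowed
        get_len()
    }
}

fn main() {
    let mut foo = Foo { vec: vec![1, 2, 3], vec2: Vec::new() };
    assert_eq!(foo.demo(), 3);
    assert_eq!(foo.vec2, vec![42]);
}
```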

Disjoint fields across functions. Another kind of error is when you have one method that only uses a field a and another that only uses some field b; right now, you can’t express that, and hence these two methods cannot be used “in parallel” with one another:

impl Foo {
    fn get_a(&self) -> &A {
        &self.a
    }

    fn inc_b(&mut self) {
        self.b.value += 1;
    }

    fn bar(&mut self) {
        let a = self.get_a();
        self.inc_b(); // Error: self is already borrowed
        use(a);
    }
}

The fix for this is to refactor so as to expose the fact that the methods operate on disjoint data. For example, one can factor out the methods into methods on the fields themselves:

fn bar(&mut self) {
    let a = self.a.get();
    self.b.inc();
    use(a);
}

This way, when looking at bar() alone, we see borrows of self.a and self.b, rather than two borrows of self. Another technique is to introduce “free functions” (e.g., get(&self.a) and inc(&mut self.b)) that expose more clearly which fields are operated upon, or to inline the method bodies. I’d like to fix this, but there are a lot of considerations at play: see this comment on an internals thread for my current thoughts. (A similar problem sometimes arises around Box<T> and other smart pointer types; the desugaring leads to rustc being more conservative than you might expect.)
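A sketch of the free-function technique, with hypothetical types and fields, shows how each call site exposes exactly which field it borrows:

```rust
struct A { value: i32 }
struct B { value: i32 }

struct Foo { a: A, b: B }

// Free functions make it explicit, at the call site, which field is borrowed.
fn get(a: &A) -> &i32 {
    &a.value
}

fn inc(b: &mut B) {
    b.value += 1;
}

impl Foo {
    fn bar(&mut self) -> i32 {
        let a = get(&self.a); // borrows only `self.a`...
        inc(&mut self.b);     // ...so borrowing `self.b` mutably is fine
        *a
    }
}

fn main() {
    let mut foo = Foo { a: A { value: 7 }, b: B { value: 0 } };
    assert_eq!(foo.bar(), 7);
    assert_eq!(foo.b.value, 1);
}
```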

Self-referential structs. The final limitation we are not fixing yet is the inability to have “self-referential structs”. That is, you cannot have a struct that stores, within itself, an arena and pointers into that arena, and then move that struct around. This comes up in a number of settings. There are various workarounds: sometimes you can use a vector with indices, for example, or the owning_ref crate. The latter, when combined with associated type constructors, might be an adequate solution for some uses cases, actually (it’s basically a way of modeling “existential lifetimes” in library code). For the case of futures especially, the ?Move RFC proposes another lightweight and interesting approach.


Mozilla Marketing Engineering & Ops Blog: MozMEAO SRE Status Report - July 11, 2017

Mozilla planet - ti, 11/07/2017 - 02:00

Here’s what happened on the MozMEAO SRE team from July 5th - July 11th.

This week's report is brief as the team is returning from the Mozilla San Francisco All Hands and vacation.

Current work

Static site hosting

Kubernetes
  • Our main applications are being moved to our new Frankfurt Kubernetes cluster.

J.C. Jones: Cutting over Let's Encrypt's Statistics to Map/Reduce

Mozilla planet - mo, 10/07/2017 - 22:54

We're changing the methodology used to calculate the Let's Encrypt Statistics page, primarily to better cope with the growth of Let's Encrypt. Over the past several months it's become clear that the existing methodology is less accurate than we had expected, over-counting the number of websites using Let's Encrypt, and the number of active certificates. The new methodology is more easily spot-checked, and thus, we believe, is more accurate.

We're planning to soon cut over all data in the Let's Encrypt statistics dataset used for the graphs, and use the new and more accurate data from 3 July 2017 onward. Because of this, the data and graphs will show that between 2 and 3 July the count of Active Certificates falls ~14%, and the counts of Registered Domains and Fully-Qualified Domain Names each fall by ~7%.

Growth Discontinuity

You can preview the new graphs at, as well as look at the old and new datasets, and a diff.

Shifting Methodology

These days I volunteer to process the Certificate Transparency logs for Let's Encrypt's statistics.

Previously, I used a tool to process Certificate Transparency logs and insert metadata into a SQL database, and then made queries against that SQL database to derive all of the statistics we display and use for Let's Encrypt's growth. As the database size has gotten larger, it has been increasingly expensive to maintain. The SQL methodology wasn't intended for long-term statistics, but as with most established infrastructure, it was difficult to carve out the time needed to revamp it.

The revamp is finally ready, using a Map/Reduce approach that can scale much further than a SQL database.

Why did the old way overcount?

Some of the domain overcounting appears to have been due to domains on SAN certificates sometimes not being purged when those certificates expired without being renewed. This only happens in cases where the domains are part of a SAN cert, and the SAN cert is then re-issued with a somewhat different set of domains. The domains that were removed, while expired, were still counted. It appears that this seeming edge case happened quite a lot for some hosting providers.

The active certificate overcounting is in part due to the timing of nightly maintenance: new certificates added while maintenance ran were essentially double-counted. Jacob pointed out that if Let's Encrypt had average issuance, for every hour maintenance takes, the active certificate count would inflate by ~5%. Maintenance with the SQL code took between 1 and 4 hours to complete each night, so this could easily account for the discrepancy in the active certificate count.

There are likely other more subtle counting errors, too.

How do we know it's better now?

The nature of the new Map/Reduce effort produces discrete lists of domains for each issuance day, which are more easily inspected for debugging, so I feel more confident in it. These domain lists are also available as (huge) datasets (which I should move to S3 for performance reasons) at Line counts in the "FQDN" and "RegDom" lists should match the figures for the most recent day's entry. At least, so far they have...

Reprocessing historical logs?

It's technically possible to re-process the historical data in Certificate Transparency for Let's Encrypt to ensure more accuracy, but I've not yet decided whether I will do this. All the data and software is public, so others could perform this effort, if desired.

Technical Details

The SQL effort moved around through 2016 from various hosting providers to get the best deal on RAM to keep the growing database in check, ultimately moving to Amazon's RDS last winter. A single db.r3.large RDS instance is handling the size well, but is quite expensive for this use case.

The new Map/Reduce effort is currently on a single Amazon m4.xlarge EC2 instance with 150 GB of disk space to hold the 90 days of raw CT log information, the daily summary files, and the 90-day summaries that populate the statistics. This EC2 instance only needs to run about 2 hours a day to catch-up on Certificate Transparency, and then produce the resulting data set. When it needs to scale upward again, I'll likely move to an Apache Spark cluster.

We'll see how fast Let's Encrypt needs it. :)

(Also posted at


Air Mozilla: Mozilla Weekly Project Meeting, 10 Jul 2017

Mozilla planet - mo, 10/07/2017 - 20:00

Mozilla Weekly Project Meeting: the Monday project meeting.


Mozilla Reps Community: Reps Mobilizer Experiment

Mozilla planet - mo, 10/07/2017 - 15:25

During the second quarter of 2017, in order to understand how to better identify, recruit, and support mobilizers, we ran a small experiment with a reduced set of existing "best in class" mobilizers, walking with them through their work supporting technical communities.


The Reps program is a program for core mobilizers, who create, grow, sustain, and engage communities around Mozilla projects. There are still areas to improve before it becomes a state-of-the-art mobilizer program, so we wanted to identify what these areas are and which changes we can implement.


Bob Chao (Taiwan) – WebVR

A long-time contributor, Bob has been empowering and growing different Mozilla-related communities in Taiwan, most recently Rust and WebVR.

Full Report


Srushtika Neelakantam (India) – WebVR

Deeply involved with the WebVR community since its formation, Srushtika has been empowering the local community in India for a few years now. She has even written a book about WebVR.

Full Report


Daniele Scasciafratte (Italy) – WebExtensions

An extremely involved contributor, Daniele has been supporting the community in Italy for many years. He was key to developing the first Add-ons activity for the MozActivate campaign.

Full Report

Vigneshwer Dhinakaran (India) – Rust

He has been key to the formation and growth of the Rust community in India, and he is the author of a book about the technology.

Full Report


Process Overview

We decided to use a human-centered design approach to test this hypothesis. Each project started with a research phase followed by multiple iterations of potential solutions. Each iteration involved testing, reflecting on the learnings, and iterating on the approach.

Overall main learnings
  1. A certain degree of understanding of the technology is needed for the mobilizer to be truly effective and understand the communities.
  2. It’s key to devote enough time to research and to understanding the local environment and potential contributors’ needs
    1. After solid research, we can start thinking about and implementing the best channels for communication within the community (sync and async), as well as for information distribution (announcements, materials…)
  3. There are two important areas when working with technical communities:
    1. Getting people excited about the tech and the community
    2. Keeping people engaged after the main activities take place. The top priority should be designing for the follow-up instead of the activity.
  4. Establishing a direct conversation between mobilizers and the functional area staff is key for having a correct direction and impactful outputs.
    1. Teams that work with more closed tools (e.g. Slack) presented a bigger challenge

As a result of these learnings we will evaluate a set of recommendations to improve the Reps program and we will share with some early ideas soon on the Reps discourse.

Thank you Vigneshwer, Daniele, Srushtika and Bob: your work is an inspiration to all Reps and to the rest of Mozilla. You have demonstrated strong leadership and an impact-oriented, strategic way of thinking that will help others follow in your footsteps.

Categorieën: Mozilla-nl planet

Mozilla Open Design Blog: Zilla Slab: A common language through a shared font

Mozilla planet - mo, 10/07/2017 - 13:00

How do the rules change when you design in the open? That is a question I asked myself many times as I prepared to join the Mozilla brand team back in November 2016. Having never worked in an open source company, I knew that I would need to prepare myself for a mental shift.

Brand design systems are often managed tightly by internal creative teams, with strict guidelines. The guidelines are shared, like canons, with outside agencies. At Mozilla, we pride ourselves on our relationship with a passionate community of volunteers who outnumber our employees 10 to 1. With this community, strict control doesn’t work, but we still need to provide a design system that includes their views and results in better designs. Here’s a conversation discussing the process with Yuliya Gorlovetsky, Mozilla’s Associate Creative Director, and font expert Peter Biľak of Typotheque, who helped with the font design.

Tools not rules

Logo refinement from Johnson Banks and Fontsmith.

Yuliya: When I joined the Mozilla brand team, the new identity was close to completion. The concept relied heavily on typography, and Johnson Banks had chosen a slab serif for the logo. Slab serifs are among the less common and less well-known serifs. They are often bold, sharp, and have a strong personality. We saw an opportunity for Mozilla to build on this by creating a custom font to support the new identity system – a distinct typeface for the identity refresh that we planned to open source for all to use.

Jordan Gushwa, a Creative Lead at Mozilla Foundation, and I both agreed that adding an elegant yet flexible slab serif would help us refine the brand. It would allow us to design experiences that lead with typography and written messages, akin to those you might find in lifestyle magazines or design publications. Mozilla is investing in content creation, as it becomes more important today to tell the stories of how we stay safe and connected on the web.

To develop Zilla Slab, we worked with Peter Biľak and Nikola Djurek from Typotheque. We respected their work and valued their expertise in multi-language support. I remember that Peter was surprised during our first call when I said we were interested in slab serifs that would be at home in top-notch magazines or web publications. He happily added those options to our font explorations.

Peter: Slab serif typefaces, sometimes called Egyptians, are usually angular and robust in construction. They hint at a technological underpinning, with aesthetics that are characteristic of the early 19th century Industrial Revolution.

When Yuliya called us at Typotheque, the decision had already been made to use a Slab serif typeface. I wanted to understand the rationale and possibly consider alternatives. Quickly I came to understand that this was a well-informed decision, looking for less explored areas of typeface categorisation.

Yuliya: Peter pulled together multiple options of baseline slab serifs from his catalog and showed us how they could evolve to support our logo design needs. Looking for a font that could flex from display to body copy, we quickly settled on Tesla as our base font. Tesla had a good balance of original details but also the evenness that would support a variety of subject matter.


Getting into the details

At the start, the main focus was on the logotype. Peter kept us accountable by illustrating the impact that the decisions we made for the logotype letters would have on the full font family. When showing different logo options, he would extract the letters from the logo and show how the different design decisions would appear in other letters of the typeface.

This way we could consider, side by side, not only the logo but the full typeface we would have. It was a hard line to walk. You want to focus on just the logo, because that is the combination of letters that will appear time and time again, but since we wanted the logo to connect to the font, we had to find a place of compromise. We needed to make decisions that equally supported both the logo and the font. Most of the exploration played out in the three main letters: m, z, and a. It’s amazing how many people will ask if they can stop coming to meetings when you talk through 10 different ‘a’ options.

Peter: We looked at various slab serif models: earlier models proposed by Johnson Banks, which were typical heavy geometric slabs, and then sans serif typefaces that we considered turning into slabs. From our collection we looked at Irma Text Slab, Charlie, Lumin and Zico before settling on Tesla Slab as a starting point. Yuliya was intrigued by models that exhibited unusual traits – shifted axes of contrast of thick and thin strokes, based on cursive writing rather than geometric construction, or even an asymmetric serif structure. The lowercase ‘a’ is a more complex letter that provides clues to how other letters may look in a full exploration of the logotype. We wanted to bring an angled stroke to the top of the ‘a’, mimicking the slashes of the internet protocol. The other letters would then need to follow the same construction principles.

Yuliya: We were able to narrow it down to two design directions:

  1. A dressed-down, simplified slab font with a more geometric “a”. Geometric fonts are created from basic shapes, such as straight, monolinear lines and circular forms. They lack ornamentation and rarely appear with serifs. The “a” we considered here was more round, upright, and had no serif on the end.
  2. A more serifed font with a more humanist “a” and a very pronounced serif. Humanist serifs are the very first kind of Roman typeface. The letters were originally drawn with a pen held at a consistent angle, creating a consistent visual rhythm. The “a” we considered here had a very pronounced two-level serif, tilted at a slight angle to respond to the “/” that came before it.


Peter: Most slab typefaces are static and geometric in construction. “Static” refers here to the axis of contrast: when the axis is at 0 degrees, typefaces are usually described as static; when there is an angle, they become ‘dynamic’. We experimented with injecting more humanistic values into the traditional slab model with the help of more calligraphic stroke terminations, which not only improve legibility but create a more flowing rhythm of letter shapes.

Yuliya: After much debate and 35 rounds of review, we had narrowed the directions down to two top choices. We then guerrilla-tested them: we showed the two directions to some folks within Mozilla, and to colleagues and friends outside of Mozilla, and asked which one they gravitated towards and why. We also asked what emotions the different directions inspired in them. These conversations gave us the insight to proceed with the more humanist version, but to simplify the serif on the “a”. The result was an overall simplification of the font, which we lovingly call Zilla Slab. We think that Zilla Slab is a casual and contemporary slab serif that still has a good amount of quirk.

We launched the logo in the middle of January and applied it to just a few assets: web headers, signage around the office, and some print applications. After the launch, Peter and his team continued to work with us through the details of the full font family. Peter regularly shared the progress, which we in turn shared with many designers and fellow type enthusiasts across Mozilla. Turns out a lot of folks across Mozilla self-identify as type nerds!

Sophisticated italics

Italics came next, and they are graceful and sharp. As a humanist slab, the Zilla Slab italics are closer to a true italic and add a softness to the overall typography system. There are moments when I’m reviewing work and I have to pause and stare at the curved slant detail in the v, w, x, and y of the italic letters.

Peter: Italics are generally not used for longer texts. The function of italics is to emphasise short passages, so they offer more space for expression. Since we aimed to make Zilla Slab a more affable typeface with humanistic elements, the italics offered an opportunity to go fully in that direction. We based the italic not on a slanted roman, but on a cursive broad-nib writing style. Diagonals in italic often break the rhythm of writing, so we introduced curved diagonals that maintain a smoother flow and work well with a cursive italic.


Building in the highlight effect

Yuliya: Johnson Banks originally modeled the highlight effect for the identity system on the functional act of highlighting a piece of type with your mouse on a screen, within software, or in a code inspector.

From the original guideline from Johnson Banks. The red rectangle is the size of the colon rectangle in the logo.


We continued to expand the identity system and ask people internally for feedback. As we put it to use, we quickly discovered that typing out the words and manually drawing the highlight box to contain those letters not only took a lot of effort, but also created visual inconsistencies. Sizing the letters and the highlight box separately, and having them come together the same way time and time again, requires a lot of math and visual tuning. Following our idea of focusing on tools that enable people rather than rules that restrict them, we asked Typotheque to create a true highlight version of Zilla Slab. The highlight weight shows the counter shapes between letters rather than the letterforms themselves. These additional weights of Zilla Slab make it easy for anyone to contribute to the brand identity without being limited by design software or design knowledge. Building the logic and rules into the font makes it a seamless part of the system.

Peter: Crafting a wordmark or a logo is usually a different process from developing a functional typeface, as wordmarks can rely on more context-specific design and brand solutions, which may not work well in a font. With Zilla Slab, we worked on the wordmark and the typeface at the same time, and had to anticipate how the wordmark features could be translated to other glyphs not present in the original wordmark – and possibly even to other writing scripts.

Typing “Mozilla” in Zilla Slab Highlight Bold


Yuliya: The final touch in making the font a completely functional tool for the brand started by asking Peter if it would be possible to automate Zilla Slab so that the Mozilla logo could be typed out using the bold highlight weight. This would allow browsers and native applications to turn “Mozilla” automatically into “moz://a” in specific cases, freeing people from placing and attaching a static image version of the logo, and freeing us from managing those files. A logo for an open source company, typed out in its open source font.

Peter: Since the Zilla font is used to create the new Mozilla logo, we included an OpenType substitution, triggered by the ligature feature, that replaces the ‘ill’ letters with ‘://’. This should only happen where intended – not in the word ‘illustration’, but always in ‘mozilla’ or ‘Mozilla’, for example.
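In practice, this means no special markup should be needed on a page that loads the highlight weight; with ligatures on (the browser default for body text), typing the word is enough. A minimal, illustrative sketch (the font-family name here is an assumption based on the font’s release name):

```html
<!-- Assumes the Zilla Slab Highlight webfont is loaded on the page -->
<p style="font-family: 'Zilla Slab Highlight'; font-weight: bold;">
  Mozilla
</p>
<!-- With the ligature substitution active, this renders as "moz://a" -->
```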


Looking Ahead

Design work from the design team. Special attribution to Patrick Crawford.


Over five months of development, as Peter’s team worked through the font details, our design team worked in parallel to test Zilla Slab in a broad range of design applications. It has proven to be as unique and flexible as we had hoped it would be. Zilla Slab has taken its place as the unifying component of our design system. After all, it’s through typography that we see language, and at Mozilla we all have a lot to say.

The rollout of Zilla Slab has also helped to unite our different design teams and give all of our Mozilla contributors a shared visual voice. We are launching Zilla Slab with support for 70 European Latin-based languages, and we can’t wait to continue our work with Typotheque and localize it to additional alphabets.

This is one of our first shared tools within our identity system. We will keep adding more tools, writing about them, and designing these tools in the open with you.

Download Zilla Slab on Github or Google Fonts.

The post Zilla Slab: A common language through a shared font appeared first on Mozilla Open Design.

Categorieën: Mozilla-nl planet

Mozilla VR Blog: Easily customized environments using the Aframe-Environment-Component

Mozilla planet - mo, 10/07/2017 - 12:23

Get a fresh and new environment for your A-Frame demos and experiments with the aframe-environment component!

Just include the aframe-environment-component.min.js script in your HTML file, add an <a-entity environment></a-entity> to your <a-scene>, and voilà!


<html>
  <head>
    <script src="path/to/aframe.js"></script>
    <script src="path/to/aframe-environment-component.js"></script>
  </head>
  <body>
    <a-scene>
      <a-entity environment></a-entity>
    </a-scene>
  </body>
</html>

The component generates a new environment with presets for lights and geometry. These presets can be easily customized by using the inspector (ctrl + alt + i) and tweaking individual values until you find the look you like. Presets are a combination of property values that define a particular style; they are a starting point that you can later customize:

<a-entity environment="preset: goldmine; sunPosition: 1 5 -2; groundColor: #742"></a-entity>

You can view and try all the presets from the aframe-environment-component example page.

And of course, the component is fully customizable without a preset:

<a-entity environment="skyType: gradient; skyColor: #1d7444; horizonColor: #7ae0e0; groundTexture: checkerboard; groundColor: #523c60; groundColor2: #544264; dressing: cubes; dressingAmount: 15; dressingColor: #7c5c45"></a-entity>

TIP: If you are using the inspector and are happy with the look of your environment, open your browser's dev tools (F12) and copy the latest parameters from the console.

Customizing your environment

The environment component defines four different aspects of the scene: lighting, sky, ground terrain and dressing objects.

Lighting and mood

The lighting in your scene is easily adjusted by changing the sunPosition property. Scene objects will subtly receive a bounce light from the ground, and the color of the fog will also change to match the sky color at the horizon.


To fully control the lighting of the scene, you can disable the environment lights with lighting: none, and you can set lighting: point if you want a point light instead of a distant light for the sun.

Add realism to your scene by toggling on the shadow parameter and adding the shadow component to objects that should cast shadows onto the ground. Learn more about A-Frame shadows.
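Combining these lighting options, a minimal sketch might look like the following (the property values are illustrative):

```html
<!-- Point light for the sun, with environment shadows toggled on -->
<a-entity environment="lighting: point; shadow: true"></a-entity>

<!-- An object opting in to casting a shadow onto the ground -->
<a-box position="0 1.5 -3" shadow="cast: true"></a-box>
```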

Sky and atmosphere

The 200m-radius sky dome can have a basic color, a top-down gradient, or a realistic atmospheric look by using skyType: atmosphere. Lowering the sun near or below the horizon will give you a starry night sky.
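For example, pairing the atmosphere sky with a sun just below the horizon produces the starry night described above (the sunPosition value is illustrative):

```html
<a-entity environment="skyType: atmosphere; sunPosition: 0 -0.05 -1"></a-entity>
```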

Ground terrain

The ground is a flat, subdivided plane that can be deformed into various terrain patterns like hills, canyons, or spikes. Its appearance can also be customized via its texture and colors.

The center play area where the player is initially positioned is always flat, so nobody will get buried ;)

The grid property will add a grid texture to the ground and can be adjusted to different colors and patterns.
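Putting the ground options together, a sketch like this might be used (the ground, grid and gridColor values are assumptions about the component’s vocabulary; checkerboard is the texture used in the earlier example):

```html
<a-entity environment="ground: hills; groundTexture: checkerboard;
                       groundColor: #445; grid: 1x1; gridColor: #6f6"></a-entity>
```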

Dressing objects

A sky and ground with nothing more can sometimes be a little too simple. The environment component includes many families of objects that can be used to spice up your scene, including cubes, pyramids, towers, mushrooms and more. Among other parameters, you can adjust their variation using dressingVariance, or the ratio of objects that will be inside versus outside the play area with dressingOnPlayArea.
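For instance, mushrooms are one of the families mentioned above; a hedged sketch with illustrative values:

```html
<a-entity environment="dressing: mushrooms; dressingAmount: 50;
                       dressingVariance: 2 2 2; dressingOnPlayArea: 0.2"></a-entity>
```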

All dressing objects share the same material and are merged into a single geometry for better performance.


Further customization

To see the full list of parameters of the component, check out GitHub's aframe-environment-component repository.

Help make this component better

We could use your help!

  • File github issues
  • Create a new preset
  • Share your presets! So anyone can copy/paste and even try live
  • Create new dressing geometries
  • Create new procedural textures
  • Create new ground types
  • Create new grid styles

Feel free to send a pull request to the repository!

Performance considerations

The main idea of this component is to provide a complete and visually interesting environment by including a single JavaScript file, with no extra includes or dependencies. This requires that assets be embedded in the JavaScript or (in most cases) generated procedurally. Despite the computing time and increased file size, both options are normally faster than requesting and waiting for additional textures or model files.

Apart from dressingAmount, there is not much difference among the parameters in terms of performance.

Categorieën: Mozilla-nl planet

Christian Heilmann: Debugging JavaScript – console.loggerheads?

Mozilla planet - sn, 08/07/2017 - 19:35

Over the last two days I ran a poll on Twitter asking people what they use to debug JavaScript:

  • console.log() which means you debug in your editor and add and remove debugging steps there
  • watches which means you instruct the (browser) developer tools to log automatically when changes happen
  • debugger; which means you debug in your editor but jump into the (browser) developer tools
  • breakpoints which means you debug in your (browser) developer tools

The reason was that, having worked with both editors and in-browser developer tools, I was curious how much each is used. I also wanted to challenge my own impression of being a terrible developer for not using the great tools we have to the fullest. Frankly, I feel overwhelmed by the offerings and choices we have, and I felt that I am pretty much set in my ways of developing.

Developer tools for the web have advanced in leaps and bounds in recent years, and a lot of effort from browser makers goes into them. They are seen as a sign of how important the browser is. The overall impression is that when you get the inner circle of technically savvy people excited about your product, the others will follow. Furthermore, making it easier to build for your browser and giving developers insights into what is going on should lead to better products running in your browser.

I love the offerings we have in browser developer tools these days, but I don’t quite find myself using all the functionality. Turns out, I am not alone:

The results of 3970 votes in my survey were overwhelmingly in favour of console.log() as a debugging mechanism.

Twitter poll results: 67% console, 2% watches, 15% debugger and 16% breakpoints.

Both the Twitter poll and its correlating Facebook post had some interesting reactions.

  • As with any overly simple poll about programming, a lot of people argued with the questioning and rightfully pointed out that they use a combination of all of them.
  • There was also a lot of questioning why alert() wasn’t an option, as this is even easier than console.log().
  • There was quite some confusion about debugger; – it seems it isn’t that common.
  • There was only a small amount of trolling – thanks.
  • There were also quite a few mentions of how tests and test-driven development make debugging unimportant.

There is no doubt that TDD and tests make for fewer surprises and are good development practice, but this wasn’t quite the question here. I also happily discard the numerous mentions of “I don’t make mistakes”. I was pretty happy to have had only one mention of document.write(), although you do still see it a lot in JavaScript introduction courses.

What this shows me is a few things I’ve encountered myself doing:

  • Developers who’ve been developing in a browser world have largely been conditioned to use simple editors, not IDEs. We’ve been conditioned to use a simple alert() or console.log() in our code to find out that something went wrong. In a lot of cases, this is “good enough”.
  • With browser developer tools becoming more sophisticated, we use breakpoints and step-by-step debugging when there are more baffling things to figure out. After all, console.log() doesn’t scale when you need to track various changes. It is, however, not our first go-to. It still means adding something to our code, rather than moving away from the editor to the debugger.
  • I sincerely hope that most of the demands for alert() were made in a joking fashion. alert() had its merits, as it halted the execution of JavaScript in a browser. But all it gives you is a string, and a display of [object Object] is not the most helpful.
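The [object Object] complaint is easy to reproduce: alert() coerces its argument to a string, throwing the object’s contents away. A quick sketch:

```javascript
const user = { name: 'Ada', role: 'admin' };

// alert(user) would display exactly this string, which is not helpful:
console.log(String(user));         // "[object Object]"

// console.log(user) shows the live, inspectable object in the console;
// if you really need a string, serialize it instead:
console.log(JSON.stringify(user)); // {"name":"Ada","role":"admin"}
```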
Why aren’t we using breakpoint debugging?

There should not be any question that breakpoint debugging is vastly superior to simply writing values to the console from our code:

  • You get proper inspection of the whole state and environment instead of one value
  • You get all the other insights proper debuggers give you like memory consumption, performance and so on
  • It is a cleaner way of development. All that goes into your code is what is needed for execution. You don’t mix debugging and functionality. A stray console.log() can give out information that could be used as an attack vector. A forgotten alert() is a terrible experience for our end users. A forgotten debugger; statement or breakpoint is a lot less likely to slip through, as it pauses the execution of our code. I also remember that in the past, console.log() in loops had quite a performance impact on our code.
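One common mitigation for the stray-console.log() problem (a sketch, not something from the poll) is to route debug output through a guard that is switched off in production:

```javascript
const DEBUG = false; // flipped to true only in development builds

function debugLog(...args) {
  if (DEBUG) console.log('[debug]', ...args);
}

// Silent in production, so a forgotten call leaks nothing:
debugLog('cart state', { items: 3, total: 42 });
```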

Developers who are used to an IDE for their work are much more likely to know their way around breakpoint debugging and use it instead of extra code. Since I started working at Microsoft, I’ve encountered a lot of people who would never touch a console.log() or an alert(). As one response to the poll rightfully pointed out, it is also simpler:

It's even longer to write console.log than to put a breakpoint...

— Chen Eshchar (@cheneshchar) July 6, 2017

So why do we keep using console logging in our code rather than the far superior debugging methods our browser tooling gives us?

I think it boils down to a few things:

  • Convenience and conditioning – we’ve been doing this for years, and it is easy. We don’t need to change, and we feel familiar with this kind of back and forth between editor and browser.
  • Staying in one context – we write our code in our editors, and we’ve spent a lot of time customising and understanding them. We don’t want to spend the same amount of work on learning debuggers when logging is good enough.
  • Inconvenience of differences in implementation – whilst most debuggers work the same way, there are differences in their interfaces. It feels taxing to find your way around each of them.
  • Simplicity and low barrier to entry – the web became the big success it is by being independent of platform and development environment. It is simple to show a person how to use a text editor and debug by putting console.log() statements in their JavaScript. We don’t want to overwhelm new developers with debugger information or tell them that they need a certain debugging environment to start developing for the web.

The latter is the big one that stops people embracing more sophisticated debugging workflows. Developers who start with IDEs are much more used to breakpoint debugging, because it is built into their development tools rather than requiring a switch of context. The downside of IDEs is that they have a high barrier to entry. They are much more complex tools than text editors, many are expensive, and above all they are huge. It is not fun to download a few gigabytes for each update, and frankly, for some developers it is not even possible.

How I started embracing breakpoint debugging

One thing that made it much easier for me to embrace breakpoint debugging was switching to Visual Studio Code as my main editor. It is still a lightweight editor and not a full IDE (Visual Studio, Android Studio and XCode are also on my machine, but I dread using them as my main development tool), yet it has built-in breakpoint debugging. That way I have the convenience of staying in my editor, and I get the insights right where I code.

For a Node.js environment, you can see this in action in this video:

Are hackable editors, linters and headless browsers the answer?

I get the feeling that this is the future, and it is great that we have tools like Electron that allow us to write lightweight, hackable and extensible editors in TypeScript or just plain JavaScript. Whilst in the past the editor you used was black arts for web developers, we can now actively take part in adding features to them.

I’m even more a fan of linters in editors. I like that Word tells me I wrote terrible grammar by showing me squiggly green or red underlines. I like that an editor flags up problems with my code whilst I write it. It seems a better way to teach than having people make mistakes, load the results in a browser, and then see what went wrong in the browser tools. It is true that the latter is a good way to get accustomed to using those tools and – let’s be honest – our work is much more debugging than coding. But by teaching new developers in environments that tell them things are wrong before they even save, we might turn this ratio around.

I’m looking forward to more movement in the editor space, and I love that we are able to run code in a browser and get results back without having to switch the user context to that browser. There are a lot of good things happening, and I want to embrace them more.

We build more complex products these days – for good or for worse. It may be time to reconsider our development practices and – more importantly – how we condition newcomers when we tell them to work like we learned it.

Categorieën: Mozilla-nl planet